Feb 12 20:24:22.794601 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 20:24:22.794628 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:24:22.794643 kernel: BIOS-provided physical RAM map: Feb 12 20:24:22.794663 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 20:24:22.794672 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 20:24:22.794677 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 20:24:22.794684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Feb 12 20:24:22.794689 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Feb 12 20:24:22.794696 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 12 20:24:22.794702 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 20:24:22.794707 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 12 20:24:22.794712 kernel: NX (Execute Disable) protection: active Feb 12 20:24:22.794718 kernel: SMBIOS 2.8 present. Feb 12 20:24:22.794723 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 12 20:24:22.794732 kernel: Hypervisor detected: KVM Feb 12 20:24:22.794739 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 20:24:22.794746 kernel: kvm-clock: cpu 0, msr 88faa001, primary cpu clock Feb 12 20:24:22.794753 kernel: kvm-clock: using sched offset of 2163799799 cycles Feb 12 20:24:22.794760 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 20:24:22.794766 kernel: tsc: Detected 2794.748 MHz processor Feb 12 20:24:22.794772 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 20:24:22.794778 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 20:24:22.794785 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Feb 12 20:24:22.794792 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 20:24:22.794798 kernel: Using GB pages for direct mapping Feb 12 20:24:22.794804 kernel: ACPI: Early table checksum verification disabled Feb 12 20:24:22.794810 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Feb 12 20:24:22.794824 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:22.794837 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:22.794843 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:22.794849 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 12 20:24:22.794855 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:22.795396 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:22.795403 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 20:24:22.795409 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Feb 12 20:24:22.795415 kernel: 
ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Feb 12 20:24:22.795421 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 12 20:24:22.795427 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Feb 12 20:24:22.795434 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Feb 12 20:24:22.795442 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Feb 12 20:24:22.795455 kernel: No NUMA configuration found Feb 12 20:24:22.795462 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Feb 12 20:24:22.795476 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Feb 12 20:24:22.795483 kernel: Zone ranges: Feb 12 20:24:22.795489 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 20:24:22.795496 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Feb 12 20:24:22.795503 kernel: Normal empty Feb 12 20:24:22.795510 kernel: Movable zone start for each node Feb 12 20:24:22.795516 kernel: Early memory node ranges Feb 12 20:24:22.795522 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 20:24:22.795530 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Feb 12 20:24:22.795538 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Feb 12 20:24:22.795546 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 20:24:22.795553 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 20:24:22.795559 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Feb 12 20:24:22.795567 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 12 20:24:22.795573 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 20:24:22.795580 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 20:24:22.795586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 20:24:22.795593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 20:24:22.795599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 20:24:22.795605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 20:24:22.795612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 20:24:22.795618 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 20:24:22.795625 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 20:24:22.795631 kernel: TSC deadline timer available Feb 12 20:24:22.795638 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 12 20:24:22.795644 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 12 20:24:22.795650 kernel: kvm-guest: setup PV sched yield Feb 12 20:24:22.795657 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Feb 12 20:24:22.795663 kernel: Booting paravirtualized kernel on KVM Feb 12 20:24:22.795670 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 20:24:22.795676 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 12 20:24:22.795682 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 12 20:24:22.795690 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 12 20:24:22.795696 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 12 20:24:22.795704 kernel: kvm-guest: setup async PF for cpu 0 Feb 12 20:24:22.795712 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Feb 12 20:24:22.795718 kernel: kvm-guest: PV spinlocks enabled Feb 12 20:24:22.795725 kernel: PV 
qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 20:24:22.795731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Feb 12 20:24:22.795737 kernel: Policy zone: DMA32 Feb 12 20:24:22.795745 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:24:22.795753 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 20:24:22.795760 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 20:24:22.795766 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 20:24:22.795773 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 20:24:22.795780 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved) Feb 12 20:24:22.795788 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 20:24:22.795797 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 20:24:22.795804 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 20:24:22.795811 kernel: rcu: Hierarchical RCU implementation. Feb 12 20:24:22.795818 kernel: rcu: RCU event tracing is enabled. Feb 12 20:24:22.795825 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 20:24:22.795832 kernel: Rude variant of Tasks RCU enabled. Feb 12 20:24:22.795838 kernel: Tracing variant of Tasks RCU enabled. Feb 12 20:24:22.795844 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 20:24:22.795851 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 20:24:22.795857 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 12 20:24:22.795863 kernel: random: crng init done Feb 12 20:24:22.795871 kernel: Console: colour VGA+ 80x25 Feb 12 20:24:22.795877 kernel: printk: console [ttyS0] enabled Feb 12 20:24:22.795883 kernel: ACPI: Core revision 20210730 Feb 12 20:24:22.795890 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 20:24:22.795896 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 20:24:22.795903 kernel: x2apic enabled Feb 12 20:24:22.795909 kernel: Switched APIC routing to physical x2apic. Feb 12 20:24:22.795915 kernel: kvm-guest: setup PV IPIs Feb 12 20:24:22.795922 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 20:24:22.795929 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 12 20:24:22.795936 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 12 20:24:22.795942 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 12 20:24:22.795949 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 12 20:24:22.795955 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 12 20:24:22.795962 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 20:24:22.795970 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 20:24:22.795978 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 20:24:22.795985 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 20:24:22.795997 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 12 20:24:22.796004 kernel: RETBleed: Mitigation: untrained return thunk Feb 12 20:24:22.796011 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 20:24:22.796019 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 20:24:22.796046 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 20:24:22.796064 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 20:24:22.796075 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 20:24:22.796082 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 20:24:22.796089 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 12 20:24:22.796098 kernel: Freeing SMP alternatives memory: 32K Feb 12 20:24:22.796104 kernel: pid_max: default: 32768 minimum: 301 Feb 12 20:24:22.796115 kernel: LSM: Security Framework initializing Feb 12 20:24:22.796121 kernel: SELinux: Initializing. Feb 12 20:24:22.796128 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 20:24:22.796135 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 20:24:22.796142 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 12 20:24:22.796150 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 12 20:24:22.796157 kernel: ... version: 0 Feb 12 20:24:22.796164 kernel: ... bit width: 48 Feb 12 20:24:22.796170 kernel: ... generic registers: 6 Feb 12 20:24:22.796177 kernel: ... value mask: 0000ffffffffffff Feb 12 20:24:22.796186 kernel: ... max period: 00007fffffffffff Feb 12 20:24:22.796195 kernel: ... fixed-purpose events: 0 Feb 12 20:24:22.796201 kernel: ... event mask: 000000000000003f Feb 12 20:24:22.796208 kernel: signal: max sigframe size: 1776 Feb 12 20:24:22.796215 kernel: rcu: Hierarchical SRCU implementation. Feb 12 20:24:22.796223 kernel: smp: Bringing up secondary CPUs ... Feb 12 20:24:22.796230 kernel: x86: Booting SMP configuration: Feb 12 20:24:22.796236 kernel: .... 
node #0, CPUs: #1 Feb 12 20:24:22.796243 kernel: kvm-clock: cpu 1, msr 88faa041, secondary cpu clock Feb 12 20:24:22.796250 kernel: kvm-guest: setup async PF for cpu 1 Feb 12 20:24:22.796256 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Feb 12 20:24:22.796264 kernel: #2 Feb 12 20:24:22.796272 kernel: kvm-clock: cpu 2, msr 88faa081, secondary cpu clock Feb 12 20:24:22.796292 kernel: kvm-guest: setup async PF for cpu 2 Feb 12 20:24:22.796301 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Feb 12 20:24:22.796307 kernel: #3 Feb 12 20:24:22.796314 kernel: kvm-clock: cpu 3, msr 88faa0c1, secondary cpu clock Feb 12 20:24:22.796320 kernel: kvm-guest: setup async PF for cpu 3 Feb 12 20:24:22.796327 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Feb 12 20:24:22.796334 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 20:24:22.796340 kernel: smpboot: Max logical packages: 1 Feb 12 20:24:22.796347 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 12 20:24:22.796354 kernel: devtmpfs: initialized Feb 12 20:24:22.796362 kernel: x86/mm: Memory block size: 128MB Feb 12 20:24:22.796369 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 20:24:22.796375 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 20:24:22.796382 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 20:24:22.796389 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 20:24:22.796395 kernel: audit: initializing netlink subsys (disabled) Feb 12 20:24:22.796402 kernel: audit: type=2000 audit(1707769462.509:1): state=initialized audit_enabled=0 res=1 Feb 12 20:24:22.796409 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 20:24:22.796415 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 20:24:22.796425 kernel: cpuidle: using governor menu Feb 12 20:24:22.796434 kernel: ACPI: bus type PCI registered Feb 12 20:24:22.796440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 20:24:22.796447 kernel: dca service started, version 1.12.1 Feb 12 20:24:22.796454 kernel: PCI: Using configuration type 1 for base access Feb 12 20:24:22.796460 kernel: PCI: Using configuration type 1 for extended access Feb 12 20:24:22.796473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 20:24:22.796480 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 20:24:22.796486 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 20:24:22.796494 kernel: ACPI: Added _OSI(Module Device) Feb 12 20:24:22.796501 kernel: ACPI: Added _OSI(Processor Device) Feb 12 20:24:22.796508 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 20:24:22.796517 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 20:24:22.796526 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 20:24:22.796533 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 20:24:22.796540 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 20:24:22.796546 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 20:24:22.796553 kernel: ACPI: Interpreter enabled Feb 12 20:24:22.796561 kernel: ACPI: PM: (supports S0 S3 S5) Feb 12 20:24:22.796568 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 20:24:22.796575 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 20:24:22.796582 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 20:24:22.796588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 20:24:22.796706 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 20:24:22.796718 kernel: acpiphp: Slot [3] registered Feb 12 20:24:22.796725 kernel: acpiphp: Slot [4] registered Feb 12 20:24:22.796733 kernel: acpiphp: Slot [5] registered Feb 12 20:24:22.796740 kernel: acpiphp: Slot [6] registered Feb 12 20:24:22.796746 kernel: acpiphp: Slot [7] registered Feb 12 20:24:22.796753 kernel: acpiphp: Slot [8] registered Feb 12 20:24:22.796759 kernel: acpiphp: Slot [9] registered Feb 12 20:24:22.796767 kernel: acpiphp: Slot [10] registered Feb 12 20:24:22.796775 kernel: acpiphp: Slot [11] registered Feb 12 20:24:22.796784 kernel: acpiphp: Slot [12] registered Feb 12 20:24:22.796791 kernel: acpiphp: Slot [13] registered Feb 12 20:24:22.796797 kernel: acpiphp: Slot [14] registered Feb 12 20:24:22.796806 kernel: acpiphp: Slot [15] registered Feb 12 20:24:22.796813 kernel: acpiphp: Slot [16] registered Feb 12 20:24:22.796820 kernel: acpiphp: Slot [17] registered Feb 12 20:24:22.796828 kernel: acpiphp: Slot [18] registered Feb 12 20:24:22.796835 kernel: acpiphp: Slot [19] registered Feb 12 20:24:22.796843 kernel: acpiphp: Slot [20] registered Feb 12 20:24:22.796850 kernel: acpiphp: Slot [21] registered Feb 12 20:24:22.796857 kernel: acpiphp: Slot [22] registered Feb 12 20:24:22.796864 kernel: acpiphp: Slot [23] registered Feb 12 20:24:22.796871 kernel: acpiphp: Slot [24] registered Feb 12 20:24:22.796878 kernel: acpiphp: Slot [25] registered Feb 12 20:24:22.796885 kernel: acpiphp: Slot [26] registered Feb 12 20:24:22.796906 kernel: acpiphp: Slot [27] registered Feb 12 20:24:22.796916 kernel: acpiphp: Slot [28] registered Feb 12 20:24:22.796923 kernel: acpiphp: Slot [29] registered Feb 12 20:24:22.796930 kernel: acpiphp: Slot [30] registered Feb 12 20:24:22.796936 kernel: acpiphp: Slot [31] registered Feb 12 20:24:22.796947 kernel: PCI host bridge to bus 0000:00 Feb 12 20:24:22.797042 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 20:24:22.797111 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 20:24:22.797175 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 20:24:22.797242 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff 
window] Feb 12 20:24:22.797330 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 12 20:24:22.797393 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 20:24:22.797489 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 20:24:22.797579 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 20:24:22.797663 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 20:24:22.797737 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 12 20:24:22.797810 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 20:24:22.797883 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 20:24:22.797952 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 20:24:22.798024 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 20:24:22.798109 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 20:24:22.798179 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 12 20:24:22.798250 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 12 20:24:22.798348 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 12 20:24:22.798421 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 12 20:24:22.798499 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 12 20:24:22.798578 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 12 20:24:22.798655 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 20:24:22.798734 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 20:24:22.798812 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Feb 12 20:24:22.798892 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 12 20:24:22.798963 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 12 20:24:22.799037 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 12 20:24:22.799115 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 20:24:22.799192 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 12 20:24:22.799261 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 12 20:24:22.799357 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 12 20:24:22.799435 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 12 20:24:22.799514 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 12 20:24:22.799586 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 12 20:24:22.799664 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 12 20:24:22.799673 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 20:24:22.799680 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 20:24:22.799687 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 20:24:22.799695 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 20:24:22.799704 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 20:24:22.799711 kernel: iommu: Default domain type: Translated Feb 12 20:24:22.799718 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 20:24:22.799787 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 20:24:22.799867 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 
20:24:22.799936 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 20:24:22.799946 kernel: vgaarb: loaded Feb 12 20:24:22.799952 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 20:24:22.799961 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 20:24:22.799970 kernel: PTP clock support registered Feb 12 20:24:22.799977 kernel: PCI: Using ACPI for IRQ routing Feb 12 20:24:22.799983 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 20:24:22.799993 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 20:24:22.799999 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Feb 12 20:24:22.800006 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 20:24:22.800013 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 20:24:22.800020 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 20:24:22.800026 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 20:24:22.800033 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 20:24:22.800040 kernel: pnp: PnP ACPI init Feb 12 20:24:22.800124 kernel: pnp 00:02: [dma 2] Feb 12 20:24:22.800137 kernel: pnp: PnP ACPI: found 6 devices Feb 12 20:24:22.800144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 20:24:22.800151 kernel: NET: Registered PF_INET protocol family Feb 12 20:24:22.800158 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 20:24:22.800165 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 20:24:22.800171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 20:24:22.800178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 20:24:22.800185 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 20:24:22.800193 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 20:24:22.800200 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 20:24:22.800207 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 20:24:22.800214 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 20:24:22.800222 kernel: NET: Registered PF_XDP protocol family Feb 12 20:24:22.800301 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 20:24:22.800368 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 20:24:22.800434 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 20:24:22.800510 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 12 20:24:22.800577 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 12 20:24:22.800655 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 20:24:22.800726 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 20:24:22.800801 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 20:24:22.800811 kernel: PCI: CLS 0 bytes, default 64 Feb 12 20:24:22.800818 kernel: Initialise system trusted keyrings Feb 12 20:24:22.800825 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 12 20:24:22.800833 kernel: Key type asymmetric registered Feb 12 20:24:22.800843 kernel: Asymmetric key parser 'x509' registered Feb 12 20:24:22.800851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 20:24:22.800859 kernel: io scheduler mq-deadline 
registered Feb 12 20:24:22.800866 kernel: io scheduler kyber registered Feb 12 20:24:22.800872 kernel: io scheduler bfq registered Feb 12 20:24:22.800879 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 20:24:22.800888 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 20:24:22.800897 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 12 20:24:22.800905 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 20:24:22.800913 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 20:24:22.800920 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 20:24:22.800927 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 20:24:22.800934 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 20:24:22.800941 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 20:24:22.801019 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 12 20:24:22.801029 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 20:24:22.801092 kernel: rtc_cmos 00:05: registered as rtc0 Feb 12 20:24:22.801164 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:24:22 UTC (1707769462) Feb 12 20:24:22.801229 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 12 20:24:22.801239 kernel: NET: Registered PF_INET6 protocol family Feb 12 20:24:22.801246 kernel: Segment Routing with IPv6 Feb 12 20:24:22.801252 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 20:24:22.801259 kernel: NET: Registered PF_PACKET protocol family Feb 12 20:24:22.801267 kernel: Key type dns_resolver registered Feb 12 20:24:22.801316 kernel: IPI shorthand broadcast: enabled Feb 12 20:24:22.801325 kernel: sched_clock: Marking stable (346160631, 70948902)->(443202201, -26092668) Feb 12 20:24:22.801335 kernel: registered taskstats version 1 Feb 12 20:24:22.801342 kernel: Loading compiled-in X.509 certificates Feb 12 20:24:22.801349 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 20:24:22.801355 kernel: Key type .fscrypt registered Feb 12 20:24:22.801362 kernel: Key type fscrypt-provisioning registered Feb 12 20:24:22.801369 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 20:24:22.801376 kernel: ima: Allocated hash algorithm: sha1 Feb 12 20:24:22.801382 kernel: ima: No architecture policies found Feb 12 20:24:22.801390 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 20:24:22.801397 kernel: Write protecting the kernel read-only data: 28672k Feb 12 20:24:22.801405 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 20:24:22.801413 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 20:24:22.801422 kernel: Run /init as init process Feb 12 20:24:22.801430 kernel: with arguments: Feb 12 20:24:22.801436 kernel: /init Feb 12 20:24:22.801443 kernel: with environment: Feb 12 20:24:22.801460 kernel: HOME=/ Feb 12 20:24:22.801476 kernel: TERM=linux Feb 12 20:24:22.801484 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 20:24:22.801493 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:24:22.801502 systemd[1]: Detected virtualization kvm. 
Feb 12 20:24:22.801509 systemd[1]: Detected architecture x86-64. Feb 12 20:24:22.801517 systemd[1]: Running in initrd. Feb 12 20:24:22.801524 systemd[1]: No hostname configured, using default hostname. Feb 12 20:24:22.801532 systemd[1]: Hostname set to <localhost>. Feb 12 20:24:22.801543 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:24:22.801551 systemd[1]: Queued start job for default target initrd.target. Feb 12 20:24:22.801558 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:24:22.801565 systemd[1]: Reached target cryptsetup.target. Feb 12 20:24:22.801573 systemd[1]: Reached target paths.target. Feb 12 20:24:22.801580 systemd[1]: Reached target slices.target. Feb 12 20:24:22.801587 systemd[1]: Reached target swap.target. Feb 12 20:24:22.801594 systemd[1]: Reached target timers.target. Feb 12 20:24:22.801603 systemd[1]: Listening on iscsid.socket. Feb 12 20:24:22.801611 systemd[1]: Listening on iscsiuio.socket. Feb 12 20:24:22.801618 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:24:22.801626 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:24:22.801633 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:24:22.801641 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:24:22.801648 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:24:22.801655 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:24:22.801664 systemd[1]: Reached target sockets.target. Feb 12 20:24:22.801673 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:24:22.801682 systemd[1]: Finished network-cleanup.service. Feb 12 20:24:22.801692 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 20:24:22.801699 systemd[1]: Starting systemd-journald.service... Feb 12 20:24:22.801707 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:24:22.801715 systemd[1]: Starting systemd-resolved.service... Feb 12 20:24:22.801723 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 20:24:22.801730 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:24:22.801743 systemd-journald[197]: Journal started Feb 12 20:24:22.801783 systemd-journald[197]: Runtime Journal (/run/log/journal/212be377c0da40b4bb58857894e4a86f) is 6.0M, max 48.5M, 42.5M free. Feb 12 20:24:22.794870 systemd-modules-load[198]: Inserted module 'overlay' Feb 12 20:24:22.817336 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 20:24:22.817365 kernel: audit: type=1130 audit(1707769462.817:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.811761 systemd-resolved[199]: Positive Trust Anchors: Feb 12 20:24:22.822456 systemd[1]: Started systemd-journald.service. Feb 12 20:24:22.822484 kernel: Bridge firewalling registered Feb 12 20:24:22.822496 kernel: audit: type=1130 audit(1707769462.821:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:22.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.811771 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:24:22.827699 kernel: audit: type=1130 audit(1707769462.821:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.827717 kernel: audit: type=1130 audit(1707769462.827:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.811797 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:24:22.813867 systemd-resolved[199]: Defaulting to hostname 'linux'. Feb 12 20:24:22.820869 systemd-modules-load[198]: Inserted module 'br_netfilter' Feb 12 20:24:22.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.821637 systemd[1]: Started systemd-resolved.service. Feb 12 20:24:22.837539 kernel: audit: type=1130 audit(1707769462.834:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.822392 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 20:24:22.838510 kernel: SCSI subsystem initialized Feb 12 20:24:22.833767 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 20:24:22.838545 systemd[1]: Reached target nss-lookup.target. Feb 12 20:24:22.840803 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 20:24:22.842383 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:24:22.849228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:24:22.852804 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 20:24:22.852824 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:24:22.852834 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:24:22.852843 kernel: audit: type=1130 audit(1707769462.849:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:24:22.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.855843 systemd-modules-load[198]: Inserted module 'dm_multipath' Feb 12 20:24:22.856392 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:24:22.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.857095 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:24:22.859294 kernel: audit: type=1130 audit(1707769462.856:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.860739 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:24:22.864214 kernel: audit: type=1130 audit(1707769462.860:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.861421 systemd[1]: Starting dracut-cmdline.service... Feb 12 20:24:22.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.864645 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:24:22.868312 kernel: audit: type=1130 audit(1707769462.865:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.870887 dracut-cmdline[219]: dracut-dracut-053 Feb 12 20:24:22.872646 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:24:22.919304 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:24:22.929300 kernel: iscsi: registered transport (tcp) Feb 12 20:24:22.947368 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:24:22.947401 kernel: QLogic iSCSI HBA Driver Feb 12 20:24:22.967582 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:24:22.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:22.969009 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 20:24:23.012314 kernel: raid6: avx2x4 gen() 30908 MB/s Feb 12 20:24:23.029310 kernel: raid6: avx2x4 xor() 7600 MB/s Feb 12 20:24:23.046306 kernel: raid6: avx2x2 gen() 32233 MB/s Feb 12 20:24:23.063311 kernel: raid6: avx2x2 xor() 19119 MB/s Feb 12 20:24:23.080308 kernel: raid6: avx2x1 gen() 25646 MB/s Feb 12 20:24:23.097309 kernel: raid6: avx2x1 xor() 14390 MB/s Feb 12 20:24:23.114303 kernel: raid6: sse2x4 gen() 14844 MB/s Feb 12 20:24:23.131302 kernel: raid6: sse2x4 xor() 7231 MB/s Feb 12 20:24:23.148317 kernel: raid6: sse2x2 gen() 16429 MB/s Feb 12 20:24:23.165297 kernel: raid6: sse2x2 xor() 9875 MB/s Feb 12 20:24:23.182294 kernel: raid6: sse2x1 gen() 11953 MB/s Feb 12 20:24:23.199300 kernel: raid6: sse2x1 xor() 7818 MB/s Feb 12 20:24:23.199316 kernel: raid6: using algorithm avx2x2 gen() 32233 MB/s Feb 12 20:24:23.199324 kernel: raid6: .... xor() 19119 MB/s, rmw enabled Feb 12 20:24:23.200306 kernel: raid6: using avx2x2 recovery algorithm Feb 12 20:24:23.211298 kernel: xor: automatically using best checksumming function avx Feb 12 20:24:23.297317 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:24:23.304761 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:24:23.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:23.305000 audit: BPF prog-id=7 op=LOAD Feb 12 20:24:23.305000 audit: BPF prog-id=8 op=LOAD Feb 12 20:24:23.306423 systemd[1]: Starting systemd-udevd.service... Feb 12 20:24:23.317949 systemd-udevd[398]: Using default interface naming scheme 'v252'. Feb 12 20:24:23.321655 systemd[1]: Started systemd-udevd.service. Feb 12 20:24:23.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:23.322803 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:24:23.332746 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Feb 12 20:24:23.354441 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:24:23.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:23.355685 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:24:23.393566 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:24:23.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:23.424740 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 20:24:23.426763 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:24:23.427537 kernel: GPT:9289727 != 19775487 Feb 12 20:24:23.427565 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:24:23.428522 kernel: GPT:9289727 != 19775487 Feb 12 20:24:23.428546 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:24:23.429537 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:24:23.429568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:23.435305 kernel: libata version 3.00 loaded. 
Feb 12 20:24:23.446298 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:24:23.449306 kernel: scsi host0: ata_piix Feb 12 20:24:23.449503 kernel: scsi host1: ata_piix Feb 12 20:24:23.449591 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 12 20:24:23.449602 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 12 20:24:23.449610 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 20:24:23.449619 kernel: AES CTR mode by8 optimization enabled Feb 12 20:24:23.462325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:24:23.470713 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Feb 12 20:24:23.473013 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:24:23.473815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:24:23.484725 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:24:23.488694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:24:23.490260 systemd[1]: Starting disk-uuid.service... Feb 12 20:24:23.497785 disk-uuid[518]: Primary Header is updated. Feb 12 20:24:23.497785 disk-uuid[518]: Secondary Entries is updated. Feb 12 20:24:23.497785 disk-uuid[518]: Secondary Header is updated. Feb 12 20:24:23.500762 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:23.502303 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:23.505299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:23.608307 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 12 20:24:23.608380 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 12 20:24:23.640300 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 12 20:24:23.640488 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 20:24:23.657296 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 12 20:24:24.505203 disk-uuid[519]: The operation has completed successfully. Feb 12 20:24:24.506244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:24:24.528040 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:24:24.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.528118 systemd[1]: Finished disk-uuid.service. Feb 12 20:24:24.533466 systemd[1]: Starting verity-setup.service... Feb 12 20:24:24.545302 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 12 20:24:24.561589 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:24:24.563967 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:24:24.565761 systemd[1]: Finished verity-setup.service. Feb 12 20:24:24.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.619240 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:24:24.620252 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Feb 12 20:24:24.620326 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:24:24.621798 systemd[1]: Starting ignition-setup.service... Feb 12 20:24:24.623306 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:24:24.630412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:24:24.630448 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:24:24.630461 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:24:24.637354 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:24:24.644355 systemd[1]: Finished ignition-setup.service. Feb 12 20:24:24.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.646428 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:24:24.683557 ignition[638]: Ignition 2.14.0 Feb 12 20:24:24.683599 ignition[638]: Stage: fetch-offline Feb 12 20:24:24.684522 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:24:24.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.683645 ignition[638]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:24.687000 audit: BPF prog-id=9 op=LOAD Feb 12 20:24:24.683654 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:24.683747 ignition[638]: parsed url from cmdline: "" Feb 12 20:24:24.688442 systemd[1]: Starting systemd-networkd.service... Feb 12 20:24:24.683750 ignition[638]: no config URL provided Feb 12 20:24:24.683756 ignition[638]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:24:24.683764 ignition[638]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:24:24.683780 ignition[638]: op(1): [started] loading QEMU firmware config module Feb 12 20:24:24.683788 ignition[638]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 20:24:24.689339 ignition[638]: op(1): [finished] loading QEMU firmware config module Feb 12 20:24:24.689354 ignition[638]: QEMU firmware config was not found. Ignoring... Feb 12 20:24:24.701823 ignition[638]: parsing config with SHA512: a84b57b5e8dcd5e1ca204aeaf0d6bf0d513c73ab6059605a3255f33b5dce7caabed183518cc60b624c02daa39df32a31abbcf7baa76b53f412d6206426b039f0 Feb 12 20:24:24.719577 unknown[638]: fetched base config from "system" Feb 12 20:24:24.719592 unknown[638]: fetched user config from "qemu" Feb 12 20:24:24.721354 ignition[638]: fetch-offline: fetch-offline passed Feb 12 20:24:24.722028 ignition[638]: Ignition finished successfully Feb 12 20:24:24.723856 systemd-networkd[715]: lo: Link UP Feb 12 20:24:24.723866 systemd-networkd[715]: lo: Gained carrier Feb 12 20:24:24.724407 systemd-networkd[715]: Enumeration completed Feb 12 20:24:24.724766 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:24:24.726019 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:24:24.726405 systemd-networkd[715]: eth0: Link UP Feb 12 20:24:24.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:24.726410 systemd-networkd[715]: eth0: Gained carrier Feb 12 20:24:24.728403 systemd[1]: Started systemd-networkd.service. Feb 12 20:24:24.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.730454 systemd[1]: Reached target network.target. Feb 12 20:24:24.731483 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 20:24:24.733108 systemd[1]: Starting ignition-kargs.service... Feb 12 20:24:24.734647 systemd[1]: Starting iscsiuio.service... Feb 12 20:24:24.738736 systemd[1]: Started iscsiuio.service. Feb 12 20:24:24.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.740410 ignition[717]: Ignition 2.14.0 Feb 12 20:24:24.740427 ignition[717]: Stage: kargs Feb 12 20:24:24.740502 ignition[717]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:24.740510 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:24.741334 ignition[717]: kargs: kargs passed Feb 12 20:24:24.742371 systemd[1]: Starting iscsid.service... Feb 12 20:24:24.741365 ignition[717]: Ignition finished successfully Feb 12 20:24:24.745231 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:24:24.745231 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 20:24:24.745231 iscsid[726]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 20:24:24.745231 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:24:24.745231 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:24:24.745231 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:24:24.745231 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:24:24.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.745546 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:24:24.762991 ignition[728]: Ignition 2.14.0 Feb 12 20:24:24.746067 systemd[1]: Started iscsid.service. Feb 12 20:24:24.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:24:24.762998 ignition[728]: Stage: disks Feb 12 20:24:24.748029 systemd[1]: Finished ignition-kargs.service. Feb 12 20:24:24.763098 ignition[728]: no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:24.751223 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:24:24.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.763110 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:24.753132 systemd[1]: Starting ignition-disks.service... Feb 12 20:24:24.764228 ignition[728]: disks: disks passed Feb 12 20:24:24.760116 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:24:24.764266 ignition[728]: Ignition finished successfully Feb 12 20:24:24.760958 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:24:24.761669 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:24:24.762469 systemd[1]: Reached target remote-fs.target. Feb 12 20:24:24.764829 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:24:24.765710 systemd[1]: Finished ignition-disks.service. Feb 12 20:24:24.766641 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:24:24.767257 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:24:24.768226 systemd[1]: Reached target local-fs.target. Feb 12 20:24:24.769508 systemd[1]: Reached target sysinit.target. Feb 12 20:24:24.770092 systemd[1]: Reached target basic.target. Feb 12 20:24:24.783333 systemd-fsck[749]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 20:24:24.771308 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:24:24.772449 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:24:24.786647 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:24:24.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.788468 systemd[1]: Mounting sysroot.mount... Feb 12 20:24:24.794305 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:24:24.794355 systemd[1]: Mounted sysroot.mount. Feb 12 20:24:24.794969 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:24:24.796840 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:24:24.797260 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:24:24.797311 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:24:24.797338 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:24:24.799109 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:24:24.800964 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:24:24.804683 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:24:24.807525 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:24:24.809998 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:24:24.812408 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:24:24.832546 systemd[1]: Finished initrd-setup-root.service. 
Feb 12 20:24:24.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.833768 systemd[1]: Starting ignition-mount.service... Feb 12 20:24:24.834747 systemd[1]: Starting sysroot-boot.service... Feb 12 20:24:24.838509 bash[800]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 20:24:24.846037 ignition[802]: INFO : Ignition 2.14.0 Feb 12 20:24:24.846037 ignition[802]: INFO : Stage: mount Feb 12 20:24:24.847322 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:24.847322 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:24.849354 ignition[802]: INFO : mount: mount passed Feb 12 20:24:24.849354 ignition[802]: INFO : Ignition finished successfully Feb 12 20:24:24.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:24.848233 systemd[1]: Finished ignition-mount.service. Feb 12 20:24:24.851865 systemd[1]: Finished sysroot-boot.service. Feb 12 20:24:24.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:25.571608 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:24:25.577793 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) Feb 12 20:24:25.577823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:24:25.577836 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:24:25.577848 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:24:25.581001 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:24:25.582214 systemd[1]: Starting ignition-files.service... 
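[Editor's note] The DHCPv4 lease reported a few entries back (eth0: 10.0.0.79/16, gateway 10.0.0.1) can be sanity-checked with Python's ipaddress module; a small sketch:

# Sketch: confirm the leased address and gateway from the log share one subnet.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.79/16")   # address as logged by systemd-networkd
gateway = ipaddress.ip_address("10.0.0.1")       # gateway as logged

print(iface.network)                 # 10.0.0.0/16
print(gateway in iface.network)      # True: the gateway is reachable on-link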
Feb 12 20:24:25.594510 ignition[830]: INFO : Ignition 2.14.0 Feb 12 20:24:25.594510 ignition[830]: INFO : Stage: files Feb 12 20:24:25.596328 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:25.596328 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:25.596328 ignition[830]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:24:25.599612 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:24:25.599612 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:24:25.599612 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:24:25.599612 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:24:25.599612 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:24:25.599612 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:24:25.599612 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 20:24:25.598124 unknown[830]: wrote ssh authorized keys file for user: core Feb 12 20:24:25.989707 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:24:26.121536 ignition[830]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 20:24:26.123669 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:24:26.123669 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:24:26.123669 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:24:26.400470 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:24:26.446424 systemd-networkd[715]: eth0: Gained IPv6LL Feb 12 20:24:26.481354 ignition[830]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 20:24:26.483628 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:24:26.483628 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:24:26.483628 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:24:26.547047 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:24:26.729104 ignition[830]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 20:24:26.729104 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:24:26.732686 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:24:26.732686 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:24:26.776692 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 20:24:27.245393 ignition[830]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:24:27.249240 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:24:27.249240 ignition[830]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Feb 12 20:24:27.249240 ignition[830]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(c): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(c): op(d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(c): op(d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(c): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(e): [started] processing unit "prepare-critools.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(e): op(f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(e): op(f): [finished] 
writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(e): [finished] processing unit "prepare-critools.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 20:24:27.270698 ignition[830]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:24:27.345383 ignition[830]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:24:27.345383 ignition[830]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:24:27.345383 ignition[830]: INFO : files: files passed Feb 12 20:24:27.345383 ignition[830]: INFO : Ignition finished successfully Feb 12 20:24:27.367162 kernel: kauditd_printk_skb: 25 callbacks suppressed Feb 12 20:24:27.367191 kernel: audit: type=1130 audit(1707769467.358:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.353823 systemd[1]: Finished ignition-files.service. Feb 12 20:24:27.360292 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:24:27.365672 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:24:27.382525 kernel: audit: type=1130 audit(1707769467.372:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.382559 kernel: audit: type=1131 audit(1707769467.372:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.366904 systemd[1]: Starting ignition-quench.service... 
Feb 12 20:24:27.388818 kernel: audit: type=1130 audit(1707769467.383:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.388902 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 20:24:27.372076 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:24:27.391362 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:24:27.372182 systemd[1]: Finished ignition-quench.service. Feb 12 20:24:27.382564 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:24:27.383913 systemd[1]: Reached target ignition-complete.target. Feb 12 20:24:27.388067 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:24:27.411850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:24:27.418872 kernel: audit: type=1130 audit(1707769467.412:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.418906 kernel: audit: type=1131 audit(1707769467.412:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.411962 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:24:27.413241 systemd[1]: Reached target initrd-fs.target. Feb 12 20:24:27.419506 systemd[1]: Reached target initrd.target. Feb 12 20:24:27.420194 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:24:27.421246 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:24:27.435664 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:24:27.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.441333 kernel: audit: type=1130 audit(1707769467.437:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.439152 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:24:27.451233 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:24:27.453301 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:24:27.455231 systemd[1]: Stopped target timers.target. Feb 12 20:24:27.458191 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:24:27.459083 systemd[1]: Stopped dracut-pre-pivot.service. 
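[Editor's note] The kernel audit records embed their timestamp as epoch seconds, e.g. audit(1707769467.358:36); decoding it confirms it lines up with the journal's Feb 12 20:24:27 timestamps. A quick check:

# Sketch: decode the epoch timestamp inside an audit record header.
from datetime import datetime, timezone

epoch, serial = "1707769467.358:36".split(":")
ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(ts.isoformat(), "serial", serial)   # 2024-02-12T20:24:27.358000+00:00 serial 36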
Feb 12 20:24:27.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.461322 systemd[1]: Stopped target initrd.target. Feb 12 20:24:27.464451 kernel: audit: type=1131 audit(1707769467.460:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.465968 systemd[1]: Stopped target basic.target. Feb 12 20:24:27.470912 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:24:27.472658 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:24:27.473574 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:24:27.475120 systemd[1]: Stopped target remote-fs.target. Feb 12 20:24:27.476650 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:24:27.479082 systemd[1]: Stopped target sysinit.target. Feb 12 20:24:27.480365 systemd[1]: Stopped target local-fs.target. Feb 12 20:24:27.482546 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:24:27.484029 systemd[1]: Stopped target swap.target. Feb 12 20:24:27.485339 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:24:27.486238 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:24:27.487858 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:24:27.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.492046 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:24:27.493306 kernel: audit: type=1131 audit(1707769467.487:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.493022 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:24:27.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.494869 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:24:27.499398 kernel: audit: type=1131 audit(1707769467.494:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.494993 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:24:27.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.500680 systemd[1]: Stopped target paths.target. Feb 12 20:24:27.501976 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:24:27.505435 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:24:27.507189 systemd[1]: Stopped target slices.target. Feb 12 20:24:27.509239 systemd[1]: Stopped target sockets.target. Feb 12 20:24:27.510637 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:24:27.510732 systemd[1]: Closed iscsid.socket. Feb 12 20:24:27.513595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 12 20:24:27.514620 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:24:27.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.516833 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:24:27.516986 systemd[1]: Stopped ignition-files.service. Feb 12 20:24:27.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.520775 systemd[1]: Stopping ignition-mount.service... Feb 12 20:24:27.522495 systemd[1]: Stopping iscsiuio.service... Feb 12 20:24:27.526351 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:24:27.527301 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:24:27.530727 ignition[871]: INFO : Ignition 2.14.0 Feb 12 20:24:27.530727 ignition[871]: INFO : Stage: umount Feb 12 20:24:27.530727 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:24:27.530727 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:24:27.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.528206 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:24:27.535900 ignition[871]: INFO : umount: umount passed Feb 12 20:24:27.535900 ignition[871]: INFO : Ignition finished successfully Feb 12 20:24:27.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.530879 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:24:27.534299 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:24:27.542387 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:24:27.543462 systemd[1]: Stopped iscsiuio.service. Feb 12 20:24:27.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.547071 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:24:27.548899 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:24:27.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.549000 systemd[1]: Stopped ignition-mount.service. Feb 12 20:24:27.551681 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:24:27.551780 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:24:27.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.557425 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:24:27.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:27.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.557527 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:24:27.558586 systemd[1]: Stopped target network.target. Feb 12 20:24:27.561373 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:24:27.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.561440 systemd[1]: Closed iscsiuio.socket. Feb 12 20:24:27.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.562104 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:24:27.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.562151 systemd[1]: Stopped ignition-disks.service. Feb 12 20:24:27.563509 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:24:27.563551 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:24:27.565102 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:24:27.565141 systemd[1]: Stopped ignition-setup.service. Feb 12 20:24:27.567013 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:24:27.567055 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:24:27.568237 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:24:27.570570 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:24:27.573371 systemd-networkd[715]: eth0: DHCPv6 lease lost Feb 12 20:24:27.577129 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:24:27.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.577257 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:24:27.580559 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:24:27.580668 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:24:27.583455 systemd[1]: Stopping network-cleanup.service... Feb 12 20:24:27.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.588000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:24:27.584984 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:24:27.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 20:24:27.585362 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:24:27.586693 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:24:27.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.586739 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:24:27.588538 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:24:27.588580 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:24:27.590273 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:24:27.592926 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:24:27.593612 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:24:27.593716 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:24:27.603092 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:24:27.603313 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:24:27.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.606608 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:24:27.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.607000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:24:27.606702 systemd[1]: Stopped network-cleanup.service. Feb 12 20:24:27.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.608383 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:24:27.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.608425 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:24:27.609476 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:24:27.609511 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:24:27.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.611084 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:24:27.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.611170 systemd[1]: Stopped dracut-pre-udev.service. 
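[Editor's note] The SERVICE_START/SERVICE_STOP records throughout this section are whitespace-separated key=value fields, with one quoted msg='...' payload that itself contains key="value" pairs. A small parsing sketch (the sample line is shortened from the log):

# Sketch: split an audit record into its key=value fields, then parse the msg payload.
import re

LINE = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel "
        "msg='unit=systemd-networkd comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
        "hostname=? addr=? terminal=? res=success'")

def parse_fields(text):
    # key=value, where value may be 'single-quoted', "double-quoted", or bare.
    return {k: v.strip("'\"") for k, v in re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", text)}

outer = parse_fields(LINE)
inner = parse_fields(outer["msg"])
print(outer["pid"], inner["unit"], inner["res"])   # 1 systemd-networkd success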
Feb 12 20:24:27.612644 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:24:27.612689 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:24:27.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:27.614159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:24:27.614214 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:24:27.617825 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:24:27.619841 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:24:27.620038 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:24:27.623248 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:24:27.623309 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:24:27.624793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:24:27.624834 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:24:27.627044 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:24:27.627607 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:24:27.627707 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:24:27.629252 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:24:27.632182 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:24:27.648019 systemd[1]: Switching root. Feb 12 20:24:27.670502 iscsid[726]: iscsid shutting down. Feb 12 20:24:27.671298 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Feb 12 20:24:27.671365 systemd-journald[197]: Journal stopped Feb 12 20:24:30.244677 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:24:30.244720 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:24:30.244731 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:24:30.244740 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:24:30.244749 kernel: SELinux: policy capability open_perms=1 Feb 12 20:24:30.244760 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:24:30.244775 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:24:30.244787 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:24:30.244798 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:24:30.244807 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:24:30.244816 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:24:30.244826 systemd[1]: Successfully loaded SELinux policy in 53.786ms. Feb 12 20:24:30.244843 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.942ms. 
Feb 12 20:24:30.244854 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:24:30.244865 systemd[1]: Detected virtualization kvm. Feb 12 20:24:30.244877 systemd[1]: Detected architecture x86-64. Feb 12 20:24:30.244892 systemd[1]: Detected first boot. Feb 12 20:24:30.244903 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:24:30.244913 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:24:30.244922 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:24:30.244933 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:30.244947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:30.244959 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:30.244970 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:24:30.244980 systemd[1]: Stopped iscsid.service. Feb 12 20:24:30.244991 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:24:30.245001 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:24:30.245011 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:24:30.245021 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:24:30.245031 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:24:30.245041 systemd[1]: Created slice system-getty.slice. Feb 12 20:24:30.245051 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:24:30.245062 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:24:30.245072 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:24:30.245081 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:24:30.245091 systemd[1]: Created slice user.slice. Feb 12 20:24:30.245101 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:24:30.245113 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:24:30.245123 systemd[1]: Set up automount boot.automount. Feb 12 20:24:30.245133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:24:30.245143 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:24:30.245162 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:24:30.245173 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:24:30.245183 systemd[1]: Reached target integritysetup.target. Feb 12 20:24:30.245192 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:24:30.245203 systemd[1]: Reached target remote-fs.target. Feb 12 20:24:30.245213 systemd[1]: Reached target slices.target. Feb 12 20:24:30.245222 systemd[1]: Reached target swap.target. Feb 12 20:24:30.245232 systemd[1]: Reached target torcx.target. Feb 12 20:24:30.245244 systemd[1]: Reached target veritysetup.target. Feb 12 20:24:30.245253 systemd[1]: Listening on systemd-coredump.socket. 
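[Editor's note] The systemd banner above lists compile-time features as +FLAG / -FLAG tokens plus a default-hierarchy setting; a sketch that splits the string from this log into enabled and disabled sets:

# Sketch: split the systemd feature banner into enabled/disabled feature sets.
BANNER = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL "
          "-ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP "
          "+LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB "
          "+ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified")

enabled  = {t[1:] for t in BANNER.split() if t.startswith("+")}
disabled = {t[1:] for t in BANNER.split() if t.startswith("-")}
settings = dict(t.split("=", 1) for t in BANNER.split() if "=" in t)

print("SELINUX" in enabled, "APPARMOR" in disabled, settings["default-hierarchy"])
# True True unified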
Feb 12 20:24:30.245263 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:24:30.245292 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:24:30.245304 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:24:30.245314 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:24:30.245324 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:24:30.245334 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:24:30.245344 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:24:30.245356 systemd[1]: Mounting media.mount... Feb 12 20:24:30.245366 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:24:30.245376 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:24:30.245386 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:24:30.245396 systemd[1]: Mounting tmp.mount... Feb 12 20:24:30.245405 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:24:30.245415 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:24:30.245426 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:24:30.245436 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:24:30.245450 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:24:30.245460 systemd[1]: Starting modprobe@drm.service... Feb 12 20:24:30.245470 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:24:30.245481 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:24:30.245490 systemd[1]: Starting modprobe@loop.service... Feb 12 20:24:30.245501 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:24:30.245511 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:24:30.245521 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:24:30.245530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:24:30.245541 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:24:30.245551 systemd[1]: Stopped systemd-journald.service. Feb 12 20:24:30.245561 kernel: fuse: init (API version 7.34) Feb 12 20:24:30.245570 kernel: loop: module loaded Feb 12 20:24:30.245579 systemd[1]: Starting systemd-journald.service... Feb 12 20:24:30.245589 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:24:30.245599 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:24:30.245609 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:24:30.245619 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:24:30.245629 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:24:30.245640 systemd[1]: Stopped verity-setup.service. Feb 12 20:24:30.245650 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:24:30.245663 systemd-journald[981]: Journal started Feb 12 20:24:30.245701 systemd-journald[981]: Runtime Journal (/run/log/journal/212be377c0da40b4bb58857894e4a86f) is 6.0M, max 48.5M, 42.5M free. 
Feb 12 20:24:27.781000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 20:24:28.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:24:28.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:24:28.116000 audit: BPF prog-id=10 op=LOAD Feb 12 20:24:28.116000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:24:28.116000 audit: BPF prog-id=11 op=LOAD Feb 12 20:24:28.116000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:24:28.147000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 20:24:28.147000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:28.147000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:24:28.148000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 20:24:28.148000 audit[904]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b5 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:28.148000 audit: CWD cwd="/" Feb 12 20:24:28.148000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:28.148000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:28.148000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:24:30.145000 audit: BPF prog-id=12 op=LOAD Feb 12 20:24:30.145000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:24:30.145000 audit: BPF prog-id=13 op=LOAD Feb 12 20:24:30.145000 audit: BPF prog-id=14 op=LOAD Feb 12 20:24:30.145000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:24:30.145000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:24:30.145000 audit: BPF prog-id=15 op=LOAD Feb 12 20:24:30.145000 audit: BPF prog-id=12 op=UNLOAD Feb 12 
20:24:30.145000 audit: BPF prog-id=16 op=LOAD Feb 12 20:24:30.145000 audit: BPF prog-id=17 op=LOAD Feb 12 20:24:30.145000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:24:30.146000 audit: BPF prog-id=14 op=UNLOAD Feb 12 20:24:30.146000 audit: BPF prog-id=18 op=LOAD Feb 12 20:24:30.146000 audit: BPF prog-id=15 op=UNLOAD Feb 12 20:24:30.146000 audit: BPF prog-id=19 op=LOAD Feb 12 20:24:30.146000 audit: BPF prog-id=20 op=LOAD Feb 12 20:24:30.146000 audit: BPF prog-id=16 op=UNLOAD Feb 12 20:24:30.146000 audit: BPF prog-id=17 op=UNLOAD Feb 12 20:24:30.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.156000 audit: BPF prog-id=18 op=UNLOAD Feb 12 20:24:30.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:30.228000 audit: BPF prog-id=21 op=LOAD Feb 12 20:24:30.228000 audit: BPF prog-id=22 op=LOAD Feb 12 20:24:30.228000 audit: BPF prog-id=23 op=LOAD Feb 12 20:24:30.228000 audit: BPF prog-id=19 op=UNLOAD Feb 12 20:24:30.228000 audit: BPF prog-id=20 op=UNLOAD Feb 12 20:24:30.243000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:24:30.243000 audit[981]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe99f40c10 a2=4000 a3=7ffe99f40cac items=0 ppid=1 pid=981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:30.243000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:24:30.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:28.146571 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:30.144018 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:24:28.146764 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:24:30.144028 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 20:24:28.146781 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:24:30.147794 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 12 20:24:28.146811 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 20:24:28.146821 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 20:24:28.146850 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 20:24:28.146864 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 20:24:28.147065 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 20:24:28.147098 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:24:28.147111 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:24:28.147409 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 20:24:28.147443 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 20:24:28.147460 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 20:24:28.147474 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 20:24:28.147489 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 20:24:28.147502 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 20:24:29.902027 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:29.902258 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:29.902375 
/usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:29.902518 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:24:29.902561 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 20:24:29.902611 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-02-12T20:24:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 20:24:30.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.249304 systemd[1]: Started systemd-journald.service. Feb 12 20:24:30.249368 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:24:30.249984 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:24:30.250801 systemd[1]: Mounted media.mount. Feb 12 20:24:30.251605 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:24:30.252494 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:24:30.253416 systemd[1]: Mounted tmp.mount. Feb 12 20:24:30.254509 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:24:30.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.255612 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:24:30.255902 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:24:30.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.257179 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:24:30.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.258250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:24:30.258483 systemd[1]: Finished modprobe@dm_mod.service. 
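[Editor's note] The torcx-generator lines above show it probing a fixed list of store directories, skipping the ones that do not exist, and then unpacking the docker archive it found. A rough sketch of that lookup order, as an illustration of the behaviour visible in the log rather than torcx's actual Go implementation:

# Sketch: probe torcx store paths in the order shown in the log and collect archives.
import os
import glob

STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.2",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.2",
    "/var/lib/torcx/store",
]

archives = []
for path in STORE_PATHS:
    if not os.path.isdir(path):
        print(f"store skipped: {path}")          # mirrors the "store skipped" log lines
        continue
    archives.extend(glob.glob(os.path.join(path, "*.torcx.tgz")))

print(archives)   # e.g. ['/usr/share/torcx/store/docker:20.10.torcx.tgz', ...]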
Feb 12 20:24:30.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.259638 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:24:30.259890 systemd[1]: Finished modprobe@drm.service. Feb 12 20:24:30.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.260980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:24:30.261221 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:24:30.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.262574 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:24:30.262831 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:24:30.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.263983 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:24:30.264184 systemd[1]: Finished modprobe@loop.service. Feb 12 20:24:30.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.265570 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:24:30.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.266888 systemd[1]: Finished systemd-network-generator.service. 
Feb 12 20:24:30.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.268170 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:24:30.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.269753 systemd[1]: Reached target network-pre.target. Feb 12 20:24:30.272104 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:24:30.274617 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:24:30.275399 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:24:30.277192 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:24:30.279128 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:24:30.280035 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:24:30.281507 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:24:30.282453 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:24:30.283894 systemd-journald[981]: Time spent on flushing to /var/log/journal/212be377c0da40b4bb58857894e4a86f is 13.628ms for 1116 entries. Feb 12 20:24:30.283894 systemd-journald[981]: System Journal (/var/log/journal/212be377c0da40b4bb58857894e4a86f) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:24:30.310658 systemd-journald[981]: Received client request to flush runtime journal. Feb 12 20:24:30.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.283826 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:24:30.287138 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:24:30.290250 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:24:30.291407 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:24:30.311193 udevadm[1009]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:24:30.292499 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:24:30.293567 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:24:30.294755 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:24:30.296871 systemd[1]: Starting systemd-udev-settle.service... 
Feb 12 20:24:30.301552 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:24:30.305433 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:24:30.308333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:24:30.311846 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:24:30.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.322655 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:24:30.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.694204 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:24:30.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.695000 audit: BPF prog-id=24 op=LOAD Feb 12 20:24:30.695000 audit: BPF prog-id=25 op=LOAD Feb 12 20:24:30.695000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:24:30.695000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:24:30.696111 systemd[1]: Starting systemd-udevd.service... Feb 12 20:24:30.710530 systemd-udevd[1013]: Using default interface naming scheme 'v252'. Feb 12 20:24:30.720956 systemd[1]: Started systemd-udevd.service. Feb 12 20:24:30.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.723000 audit: BPF prog-id=26 op=LOAD Feb 12 20:24:30.729000 audit: BPF prog-id=27 op=LOAD Feb 12 20:24:30.725754 systemd[1]: Starting systemd-networkd.service... Feb 12 20:24:30.730000 audit: BPF prog-id=28 op=LOAD Feb 12 20:24:30.730000 audit: BPF prog-id=29 op=LOAD Feb 12 20:24:30.731813 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:24:30.743107 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 20:24:30.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.756237 systemd[1]: Started systemd-userdbd.service. Feb 12 20:24:30.772315 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:24:30.776990 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:24:30.798196 systemd-networkd[1027]: lo: Link UP Feb 12 20:24:30.798209 systemd-networkd[1027]: lo: Gained carrier Feb 12 20:24:30.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.798566 systemd-networkd[1027]: Enumeration completed Feb 12 20:24:30.798659 systemd-networkd[1027]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:24:30.798671 systemd[1]: Started systemd-networkd.service. 
Feb 12 20:24:30.800012 systemd-networkd[1027]: eth0: Link UP Feb 12 20:24:30.800023 systemd-networkd[1027]: eth0: Gained carrier Feb 12 20:24:30.803000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:24:30.803000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556137069df0 a1=32194 a2=7f9ee384fbc5 a3=5 items=108 ppid=1013 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:30.803000 audit: CWD cwd="/" Feb 12 20:24:30.816330 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:24:30.803000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=1 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=2 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=3 name=(null) inode=10211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=4 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=5 name=(null) inode=10212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=6 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=7 name=(null) inode=10213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=8 name=(null) inode=10213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=9 name=(null) inode=10214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=10 name=(null) inode=10213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=11 name=(null) inode=10215 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=12 name=(null) inode=10213 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=13 name=(null) inode=10216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=14 name=(null) inode=10213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=15 name=(null) inode=10217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=16 name=(null) inode=10213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=17 name=(null) inode=10218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=18 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=19 name=(null) inode=10219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=20 name=(null) inode=10219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=21 name=(null) inode=10220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=22 name=(null) inode=10219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=23 name=(null) inode=10221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=24 name=(null) inode=10219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=25 name=(null) inode=10222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=26 name=(null) inode=10219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=27 name=(null) inode=10223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=28 name=(null) inode=10219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=29 name=(null) inode=10224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=30 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=31 name=(null) inode=10225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=32 name=(null) inode=10225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=33 name=(null) inode=10226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=34 name=(null) inode=10225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=35 name=(null) inode=10227 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=36 name=(null) inode=10225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=37 name=(null) inode=10228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=38 name=(null) inode=10225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=39 name=(null) inode=10229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=40 name=(null) inode=10225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=41 name=(null) inode=10230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=42 name=(null) inode=10210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=43 name=(null) inode=10231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=44 name=(null) inode=10231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 
audit: PATH item=45 name=(null) inode=10232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=46 name=(null) inode=10231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=47 name=(null) inode=10233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=48 name=(null) inode=10231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=49 name=(null) inode=10234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=50 name=(null) inode=10231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=51 name=(null) inode=10235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=52 name=(null) inode=10231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=53 name=(null) inode=10236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=55 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=56 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=57 name=(null) inode=10238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=58 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=59 name=(null) inode=10239 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=60 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=61 name=(null) inode=10240 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=62 name=(null) inode=10240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=63 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=64 name=(null) inode=10240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=65 name=(null) inode=16386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=66 name=(null) inode=10240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=67 name=(null) inode=16387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=68 name=(null) inode=10240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=69 name=(null) inode=16388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=70 name=(null) inode=10240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=71 name=(null) inode=16389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=72 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=73 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=74 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=75 name=(null) inode=16391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=76 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=77 name=(null) inode=16392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=78 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=79 name=(null) inode=16393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=80 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=81 name=(null) inode=16394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=82 name=(null) inode=16390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=83 name=(null) inode=16395 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=84 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=85 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=86 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=87 name=(null) inode=16397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=88 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=89 name=(null) inode=16398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=90 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=91 name=(null) inode=16399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=92 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=93 name=(null) inode=16400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 
audit: PATH item=94 name=(null) inode=16396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=95 name=(null) inode=16401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=96 name=(null) inode=10237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=97 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=98 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=99 name=(null) inode=16403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=100 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=101 name=(null) inode=16404 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=102 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=103 name=(null) inode=16405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=104 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=105 name=(null) inode=16406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=106 name=(null) inode=16402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PATH item=107 name=(null) inode=16407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:30.803000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:24:30.814756 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 12 20:24:30.819413 systemd-networkd[1027]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:24:30.826303 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:24:30.828301 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:24:30.877703 kernel: kvm: Nested Virtualization enabled Feb 12 20:24:30.877744 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:24:30.877758 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:24:30.877784 kernel: SVM: Virtual GIF supported Feb 12 20:24:30.892297 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:24:30.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.911609 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:24:30.913372 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:24:30.919605 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:30.946454 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:24:30.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.947200 systemd[1]: Reached target cryptsetup.target. Feb 12 20:24:30.948732 systemd[1]: Starting lvm2-activation.service... Feb 12 20:24:30.951885 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:30.979769 systemd[1]: Finished lvm2-activation.service. Feb 12 20:24:30.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.980470 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:24:30.981075 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:24:30.981098 systemd[1]: Reached target local-fs.target. Feb 12 20:24:30.981687 systemd[1]: Reached target machines.target. Feb 12 20:24:30.983069 systemd[1]: Starting ldconfig.service... Feb 12 20:24:30.983851 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:24:30.983913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:30.984951 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:24:30.986418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:24:30.988067 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:24:30.988908 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:30.988944 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:30.989754 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:24:30.993018 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 12 20:24:30.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:30.997844 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Feb 12 20:24:30.999224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:24:30.999263 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:24:30.999881 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:24:31.001394 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:24:31.032507 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31) Feb 12 20:24:31.032507 systemd-fsck[1060]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:24:31.033755 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:24:31.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.035994 systemd[1]: Mounting boot.mount... Feb 12 20:24:31.043308 systemd[1]: Mounted boot.mount. Feb 12 20:24:31.054741 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:24:31.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.082010 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:24:31.435986 systemd[1]: Finished ldconfig.service. Feb 12 20:24:31.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.440343 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:24:31.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.442199 systemd[1]: Starting audit-rules.service... Feb 12 20:24:31.444033 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:24:31.445822 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:24:31.447000 audit: BPF prog-id=30 op=LOAD Feb 12 20:24:31.448536 systemd[1]: Starting systemd-resolved.service... Feb 12 20:24:31.450000 audit: BPF prog-id=31 op=LOAD Feb 12 20:24:31.451603 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:24:31.453145 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:24:31.454950 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:24:31.455582 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:24:31.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:31.456000 audit[1074]: SYSTEM_BOOT pid=1074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.456789 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:24:31.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.460171 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:24:31.461451 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:24:31.462685 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:24:31.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.464579 systemd[1]: Starting systemd-update-done.service... Feb 12 20:24:31.470423 systemd[1]: Finished systemd-update-done.service. Feb 12 20:24:31.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:31.476000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:24:31.476000 audit[1084]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffab3ac160 a2=420 a3=0 items=0 ppid=1063 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:31.476000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:24:31.476813 augenrules[1084]: No rules Feb 12 20:24:31.477344 systemd[1]: Finished audit-rules.service. Feb 12 20:24:31.517869 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:24:31.518802 systemd[1]: Reached target time-set.target. Feb 12 20:24:31.518827 systemd-timesyncd[1073]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:24:31.518863 systemd-timesyncd[1073]: Initial clock synchronization to Mon 2024-02-12 20:24:31.892008 UTC. Feb 12 20:24:31.519225 systemd-resolved[1069]: Positive Trust Anchors: Feb 12 20:24:31.519243 systemd-resolved[1069]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:24:31.519296 systemd-resolved[1069]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:24:31.525644 systemd-resolved[1069]: Defaulting to hostname 'linux'. Feb 12 20:24:31.526948 systemd[1]: Started systemd-resolved.service. Feb 12 20:24:31.527692 systemd[1]: Reached target network.target. Feb 12 20:24:31.528253 systemd[1]: Reached target nss-lookup.target. Feb 12 20:24:31.528866 systemd[1]: Reached target sysinit.target. Feb 12 20:24:31.529509 systemd[1]: Started motdgen.path. Feb 12 20:24:31.530031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:24:31.530917 systemd[1]: Started logrotate.timer. Feb 12 20:24:31.531532 systemd[1]: Started mdadm.timer. Feb 12 20:24:31.532026 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:24:31.532651 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:24:31.532678 systemd[1]: Reached target paths.target. Feb 12 20:24:31.533208 systemd[1]: Reached target timers.target. Feb 12 20:24:31.534043 systemd[1]: Listening on dbus.socket. Feb 12 20:24:31.535506 systemd[1]: Starting docker.socket... Feb 12 20:24:31.537858 systemd[1]: Listening on sshd.socket. Feb 12 20:24:31.538510 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:31.538818 systemd[1]: Listening on docker.socket. Feb 12 20:24:31.539435 systemd[1]: Reached target sockets.target. Feb 12 20:24:31.540009 systemd[1]: Reached target basic.target. Feb 12 20:24:31.540681 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:24:31.540703 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:24:31.541470 systemd[1]: Starting containerd.service... Feb 12 20:24:31.542818 systemd[1]: Starting dbus.service... Feb 12 20:24:31.544089 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:24:31.545601 systemd[1]: Starting extend-filesystems.service... Feb 12 20:24:31.546404 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:24:31.549851 jq[1094]: false Feb 12 20:24:31.547249 systemd[1]: Starting motdgen.service... Feb 12 20:24:31.548976 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:24:31.550719 systemd[1]: Starting prepare-critools.service... Feb 12 20:24:31.552263 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:24:31.553695 systemd[1]: Starting sshd-keygen.service... Feb 12 20:24:31.556740 systemd[1]: Starting systemd-logind.service... Feb 12 20:24:31.559183 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 20:24:31.559252 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:24:31.559593 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:24:31.560082 systemd[1]: Starting update-engine.service... Feb 12 20:24:31.560341 dbus-daemon[1093]: [system] SELinux support is enabled Feb 12 20:24:31.561899 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:24:31.565107 jq[1113]: true Feb 12 20:24:31.565861 systemd[1]: Started dbus.service. Feb 12 20:24:31.568458 extend-filesystems[1095]: Found sr0 Feb 12 20:24:31.569152 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:24:31.569319 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:24:31.569552 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:24:31.569675 systemd[1]: Finished motdgen.service. Feb 12 20:24:31.571428 extend-filesystems[1095]: Found vda Feb 12 20:24:31.571420 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:24:31.572073 extend-filesystems[1095]: Found vda1 Feb 12 20:24:31.572073 extend-filesystems[1095]: Found vda2 Feb 12 20:24:31.572073 extend-filesystems[1095]: Found vda3 Feb 12 20:24:31.572073 extend-filesystems[1095]: Found usr Feb 12 20:24:31.572015 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:24:31.575198 extend-filesystems[1095]: Found vda4 Feb 12 20:24:31.575198 extend-filesystems[1095]: Found vda6 Feb 12 20:24:31.575198 extend-filesystems[1095]: Found vda7 Feb 12 20:24:31.575198 extend-filesystems[1095]: Found vda9 Feb 12 20:24:31.575198 extend-filesystems[1095]: Checking size of /dev/vda9 Feb 12 20:24:31.578174 tar[1116]: ./ Feb 12 20:24:31.578174 tar[1116]: ./macvlan Feb 12 20:24:31.581457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:24:31.581491 systemd[1]: Reached target system-config.target. Feb 12 20:24:31.582181 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:24:31.582197 systemd[1]: Reached target user-config.target. Feb 12 20:24:31.583858 jq[1120]: true Feb 12 20:24:31.584175 tar[1117]: crictl Feb 12 20:24:31.599822 extend-filesystems[1095]: Resized partition /dev/vda9 Feb 12 20:24:31.609354 extend-filesystems[1145]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:24:31.614306 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:24:31.617702 update_engine[1110]: I0212 20:24:31.617532 1110 main.cc:92] Flatcar Update Engine starting Feb 12 20:24:31.619162 systemd[1]: Started update-engine.service. Feb 12 20:24:31.619297 update_engine[1110]: I0212 20:24:31.619199 1110 update_check_scheduler.cc:74] Next update check in 4m42s Feb 12 20:24:31.621492 systemd[1]: Started locksmithd.service. Feb 12 20:24:31.640949 env[1123]: time="2024-02-12T20:24:31.640329255Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:24:31.642304 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 20:24:31.660453 env[1123]: time="2024-02-12T20:24:31.660159160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 20:24:31.660453 env[1123]: time="2024-02-12T20:24:31.660319541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:31.660347 systemd-logind[1107]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:24:31.660368 systemd-logind[1107]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:24:31.662166 env[1123]: time="2024-02-12T20:24:31.661745606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:31.662166 env[1123]: time="2024-02-12T20:24:31.661997188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:31.662301 systemd-logind[1107]: New seat seat0. Feb 12 20:24:31.662911 env[1123]: time="2024-02-12T20:24:31.662457201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:31.662911 env[1123]: time="2024-02-12T20:24:31.662488279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:31.662911 env[1123]: time="2024-02-12T20:24:31.662507675Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:24:31.662911 env[1123]: time="2024-02-12T20:24:31.662520980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:31.662911 env[1123]: time="2024-02-12T20:24:31.662610198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:31.663062 env[1123]: time="2024-02-12T20:24:31.662986473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:31.663690 env[1123]: time="2024-02-12T20:24:31.663358692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:31.663690 env[1123]: time="2024-02-12T20:24:31.663386945Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:24:31.663690 env[1123]: time="2024-02-12T20:24:31.663444272Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:24:31.663690 env[1123]: time="2024-02-12T20:24:31.663459641Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:24:31.665745 extend-filesystems[1145]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:24:31.665745 extend-filesystems[1145]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:24:31.665745 extend-filesystems[1145]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Feb 12 20:24:31.674309 extend-filesystems[1095]: Resized filesystem in /dev/vda9 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672132118Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672163547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672177794Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672210184Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672227026Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672492163Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672508684Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672524644Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672539522Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672554120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.672573446Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.673260825Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.673397251Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:24:31.675012 env[1123]: time="2024-02-12T20:24:31.673504272Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:24:31.666190 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:24:31.675450 tar[1116]: ./static Feb 12 20:24:31.675475 bash[1147]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673770251Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673798484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673814013Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673875919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673889745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673902258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673914892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673929549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673945479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673960467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.673976518Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.674092174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.674109868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.675637 env[1123]: time="2024-02-12T20:24:31.674123684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.666381 systemd[1]: Finished extend-filesystems.service. Feb 12 20:24:31.676016 env[1123]: time="2024-02-12T20:24:31.674136488Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:24:31.676016 env[1123]: time="2024-02-12T20:24:31.674151686Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:24:31.676016 env[1123]: time="2024-02-12T20:24:31.674163488Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:24:31.676016 env[1123]: time="2024-02-12T20:24:31.674182654Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:24:31.676016 env[1123]: time="2024-02-12T20:24:31.674218782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:24:31.668565 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:24:31.672069 systemd[1]: Started systemd-logind.service. 
Feb 12 20:24:31.676220 env[1123]: time="2024-02-12T20:24:31.674463361Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:24:31.676220 env[1123]: time="2024-02-12T20:24:31.674540916Z" level=info msg="Connect containerd service" Feb 12 20:24:31.676220 env[1123]: time="2024-02-12T20:24:31.674596611Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:24:31.676220 env[1123]: time="2024-02-12T20:24:31.676090723Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676255392Z" level=info msg="Start subscribing containerd event" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676310195Z" level=info msg="Start recovering state" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676364557Z" level=info msg="Start event monitor" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676377441Z" level=info msg="Start snapshots syncer" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676387460Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676397248Z" level=info msg="Start streaming server" Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676602974Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 20:24:31.680031 env[1123]: time="2024-02-12T20:24:31.676637348Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:24:31.676779 systemd[1]: Started containerd.service. Feb 12 20:24:31.692213 env[1123]: time="2024-02-12T20:24:31.692153432Z" level=info msg="containerd successfully booted in 0.064146s" Feb 12 20:24:31.695694 tar[1116]: ./vlan Feb 12 20:24:31.698012 locksmithd[1148]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:24:31.726500 tar[1116]: ./portmap Feb 12 20:24:31.754781 tar[1116]: ./host-local Feb 12 20:24:31.780122 tar[1116]: ./vrf Feb 12 20:24:31.807027 tar[1116]: ./bridge Feb 12 20:24:31.839595 tar[1116]: ./tuning Feb 12 20:24:31.865769 tar[1116]: ./firewall Feb 12 20:24:31.899527 tar[1116]: ./host-device Feb 12 20:24:31.928651 tar[1116]: ./sbr Feb 12 20:24:31.950513 sshd_keygen[1114]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:24:31.956575 tar[1116]: ./loopback Feb 12 20:24:31.970634 systemd[1]: Finished sshd-keygen.service. Feb 12 20:24:31.972857 systemd[1]: Starting issuegen.service... Feb 12 20:24:31.977719 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:24:31.977825 systemd[1]: Finished issuegen.service. Feb 12 20:24:31.979406 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:24:31.984577 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:24:31.985882 tar[1116]: ./dhcp Feb 12 20:24:31.986274 systemd[1]: Started getty@tty1.service. Feb 12 20:24:31.987754 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:24:31.988670 systemd[1]: Reached target getty.target. Feb 12 20:24:32.058885 tar[1116]: ./ptp Feb 12 20:24:32.078425 systemd-networkd[1027]: eth0: Gained IPv6LL Feb 12 20:24:32.087150 systemd[1]: Finished prepare-critools.service. Feb 12 20:24:32.090216 tar[1116]: ./ipvlan Feb 12 20:24:32.118288 tar[1116]: ./bandwidth Feb 12 20:24:32.152888 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:24:32.154169 systemd[1]: Reached target multi-user.target. Feb 12 20:24:32.156380 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:24:32.163013 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:24:32.163167 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:24:32.164242 systemd[1]: Startup finished in 519ms (kernel) + 5.034s (initrd) + 4.451s (userspace) = 10.005s. Feb 12 20:24:41.161341 systemd[1]: Created slice system-sshd.slice. Feb 12 20:24:41.162225 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:35920.service. Feb 12 20:24:41.204082 sshd[1177]: Accepted publickey for core from 10.0.0.1 port 35920 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:41.205192 sshd[1177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:41.212664 systemd-logind[1107]: New session 1 of user core. Feb 12 20:24:41.213472 systemd[1]: Created slice user-500.slice. Feb 12 20:24:41.214324 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:24:41.220612 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:24:41.221813 systemd[1]: Starting user@500.service... Feb 12 20:24:41.224068 (systemd)[1180]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:41.291659 systemd[1180]: Queued start job for default target default.target. Feb 12 20:24:41.292082 systemd[1180]: Reached target paths.target. 
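The containerd startup logged above ends with the daemon serving on /run/containerd/containerd.sock (plus the .ttrpc variant) and systemd reporting "Started containerd.service" with a boot time of roughly 64 ms. A minimal, illustrative sanity check that the GRPC socket is actually accepting connections; the socket path is taken from the log lines above, everything else in the sketch is an assumption, not part of this boot:

```python
import socket
import sys

# Socket path as reported by containerd in the log above; the check itself is illustrative.
CONTAINERD_SOCK = "/run/containerd/containerd.sock"

def containerd_socket_ready(path: str = CONTAINERD_SOCK, timeout: float = 2.0) -> bool:
    """Return True if the containerd unix socket accepts a connection."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)  # succeeds only if containerd is listening on the socket
        return True
    except OSError as exc:
        print(f"containerd socket not ready: {exc}", file=sys.stderr)
        return False
    finally:
        s.close()

if __name__ == "__main__":
    sys.exit(0 if containerd_socket_ready() else 1)
```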
Feb 12 20:24:41.292098 systemd[1180]: Reached target sockets.target. Feb 12 20:24:41.292109 systemd[1180]: Reached target timers.target. Feb 12 20:24:41.292119 systemd[1180]: Reached target basic.target. Feb 12 20:24:41.292150 systemd[1180]: Reached target default.target. Feb 12 20:24:41.292170 systemd[1180]: Startup finished in 63ms. Feb 12 20:24:41.292228 systemd[1]: Started user@500.service. Feb 12 20:24:41.293203 systemd[1]: Started session-1.scope. Feb 12 20:24:41.344980 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:35922.service. Feb 12 20:24:41.390511 sshd[1189]: Accepted publickey for core from 10.0.0.1 port 35922 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:41.391692 sshd[1189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:41.395092 systemd-logind[1107]: New session 2 of user core. Feb 12 20:24:41.395778 systemd[1]: Started session-2.scope. Feb 12 20:24:41.450089 sshd[1189]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:41.452319 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:35922.service: Deactivated successfully. Feb 12 20:24:41.452824 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:24:41.453272 systemd-logind[1107]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:24:41.454437 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:35928.service. Feb 12 20:24:41.454956 systemd-logind[1107]: Removed session 2. Feb 12 20:24:41.494535 sshd[1195]: Accepted publickey for core from 10.0.0.1 port 35928 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:41.495551 sshd[1195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:41.498605 systemd-logind[1107]: New session 3 of user core. Feb 12 20:24:41.499350 systemd[1]: Started session-3.scope. Feb 12 20:24:41.548404 sshd[1195]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:41.550701 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:35928.service: Deactivated successfully. Feb 12 20:24:41.551156 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:24:41.551590 systemd-logind[1107]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:24:41.552387 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:35944.service. Feb 12 20:24:41.553060 systemd-logind[1107]: Removed session 3. Feb 12 20:24:41.592896 sshd[1201]: Accepted publickey for core from 10.0.0.1 port 35944 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:41.593850 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:41.596766 systemd-logind[1107]: New session 4 of user core. Feb 12 20:24:41.597466 systemd[1]: Started session-4.scope. Feb 12 20:24:41.649781 sshd[1201]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:41.653142 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:35944.service: Deactivated successfully. Feb 12 20:24:41.653709 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:24:41.654200 systemd-logind[1107]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:24:41.655353 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:35952.service. Feb 12 20:24:41.655929 systemd-logind[1107]: Removed session 4. 
Feb 12 20:24:41.694076 sshd[1207]: Accepted publickey for core from 10.0.0.1 port 35952 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:41.695028 sshd[1207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:41.698216 systemd-logind[1107]: New session 5 of user core. Feb 12 20:24:41.698988 systemd[1]: Started session-5.scope. Feb 12 20:24:41.753123 sudo[1211]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:24:41.753276 sudo[1211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:24:42.274722 systemd[1]: Reloading. Feb 12 20:24:42.327539 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2024-02-12T20:24:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:42.327575 /usr/lib/systemd/system-generators/torcx-generator[1241]: time="2024-02-12T20:24:42Z" level=info msg="torcx already run" Feb 12 20:24:42.385606 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:42.385621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:42.403999 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:42.469204 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:24:42.474392 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:24:42.474868 systemd[1]: Reached target network-online.target. Feb 12 20:24:42.476161 systemd[1]: Started kubelet.service. Feb 12 20:24:42.485335 systemd[1]: Starting coreos-metadata.service... Feb 12 20:24:42.491217 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 20:24:42.491374 systemd[1]: Finished coreos-metadata.service. Feb 12 20:24:42.534107 kubelet[1282]: E0212 20:24:42.533956 1282 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:24:42.536044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:24:42.536152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:24:42.657895 systemd[1]: Stopped kubelet.service. Feb 12 20:24:42.671077 systemd[1]: Reloading. Feb 12 20:24:42.729694 /usr/lib/systemd/system-generators/torcx-generator[1351]: time="2024-02-12T20:24:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:42.729719 /usr/lib/systemd/system-generators/torcx-generator[1351]: time="2024-02-12T20:24:42Z" level=info msg="torcx already run" Feb 12 20:24:42.783859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
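The first kubelet start above fails validation with "the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set", after which the unit is stopped, systemd reloads, and the second start (further down) succeeds against containerd. A small, assumed diagnostic sketch that inspects the running kubelet's command line via /proc for that flag; the flag name comes from the error message in the log, while the endpoint value shown is only the conventional containerd socket and is an assumption, not something read from this boot:

```python
from pathlib import Path

FLAG = "--container-runtime-endpoint"
# Conventional containerd CRI endpoint; an assumption, not taken from this log.
EXPECTED = "unix:///run/containerd/containerd.sock"

def kubelet_cmdlines():
    """Yield (pid, argv) for processes whose argv[0] mentions kubelet."""
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            raw = proc.joinpath("cmdline").read_bytes()
        except OSError:
            continue  # process vanished or not readable
        argv = [a.decode(errors="replace") for a in raw.split(b"\x00") if a]
        if argv and "kubelet" in argv[0]:
            yield int(proc.name), argv

for pid, argv in kubelet_cmdlines():
    present = any(a == FLAG or a.startswith(FLAG + "=") for a in argv)
    print(f"pid {pid}: {FLAG} {'set' if present else 'MISSING'}"
          f" (expected something like {EXPECTED})")
```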
Feb 12 20:24:42.783876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:42.802375 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:42.869664 systemd[1]: Started kubelet.service. Feb 12 20:24:42.903521 kubelet[1390]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:42.903521 kubelet[1390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:42.903852 kubelet[1390]: I0212 20:24:42.903540 1390 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:24:42.904630 kubelet[1390]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:42.904630 kubelet[1390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:43.186607 kubelet[1390]: I0212 20:24:43.186526 1390 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:24:43.186721 kubelet[1390]: I0212 20:24:43.186549 1390 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:24:43.187548 kubelet[1390]: I0212 20:24:43.187530 1390 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:24:43.189549 kubelet[1390]: I0212 20:24:43.189532 1390 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:24:43.192537 kubelet[1390]: I0212 20:24:43.192524 1390 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:24:43.192703 kubelet[1390]: I0212 20:24:43.192691 1390 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:24:43.192751 kubelet[1390]: I0212 20:24:43.192744 1390 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:24:43.192827 kubelet[1390]: I0212 20:24:43.192760 1390 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:24:43.192827 kubelet[1390]: I0212 20:24:43.192769 1390 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:24:43.192874 kubelet[1390]: I0212 20:24:43.192839 1390 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:43.195108 kubelet[1390]: I0212 20:24:43.195093 1390 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:24:43.195184 kubelet[1390]: I0212 20:24:43.195112 1390 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:24:43.195184 kubelet[1390]: I0212 20:24:43.195133 1390 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:24:43.195184 kubelet[1390]: I0212 20:24:43.195146 1390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:24:43.195268 kubelet[1390]: E0212 20:24:43.195224 1390 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:43.195383 kubelet[1390]: E0212 20:24:43.195367 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:43.195601 kubelet[1390]: I0212 20:24:43.195574 1390 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:24:43.195837 kubelet[1390]: W0212 20:24:43.195815 1390 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
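The container-manager NodeConfig dump above is dense; the parts most readers care about are the cgroup driver and the hard-eviction thresholds, and the kubelet's CgroupDriver:systemd agrees with the SystemdCgroup:true option in the containerd runc runtime config logged earlier. As a reading aid only, with values transcribed from the dump above rather than configuration to apply:

```python
# Transcribed from the kubelet NodeConfig dump above; a reading aid, not config.
cgroup_driver = "systemd"  # matches SystemdCgroup:true in containerd's runc options
hard_eviction_thresholds = {
    "memory.available":  "100Mi",  # absolute quantity
    "nodefs.available":  "10%",    # Percentage:0.1
    "nodefs.inodesFree": "5%",     # Percentage:0.05
    "imagefs.available": "15%",    # Percentage:0.15
}
```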
Feb 12 20:24:43.196228 kubelet[1390]: I0212 20:24:43.196210 1390 server.go:1186] "Started kubelet" Feb 12 20:24:43.196642 kubelet[1390]: I0212 20:24:43.196624 1390 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:24:43.196879 kubelet[1390]: E0212 20:24:43.196865 1390 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:24:43.196966 kubelet[1390]: E0212 20:24:43.196950 1390 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:24:43.198283 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 20:24:43.198379 kubelet[1390]: I0212 20:24:43.198362 1390 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:24:43.198436 kubelet[1390]: I0212 20:24:43.198390 1390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:24:43.198671 kubelet[1390]: I0212 20:24:43.198505 1390 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:24:43.200011 kubelet[1390]: I0212 20:24:43.199996 1390 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:24:43.200405 kubelet[1390]: E0212 20:24:43.200311 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfc295012", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 196190738, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 196190738, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:43.200532 kubelet[1390]: E0212 20:24:43.200521 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:43.200618 kubelet[1390]: W0212 20:24:43.200600 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:43.200702 kubelet[1390]: E0212 20:24:43.200631 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:43.200702 kubelet[1390]: W0212 20:24:43.200636 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:43.200702 kubelet[1390]: E0212 20:24:43.200649 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:43.200809 kubelet[1390]: W0212 20:24:43.200786 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:43.200844 kubelet[1390]: E0212 20:24:43.200811 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:43.203190 kubelet[1390]: E0212 20:24:43.202781 1390 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.79" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:43.203190 kubelet[1390]: E0212 20:24:43.202814 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfc34b8d2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 196938450, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 196938450, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.218795 kubelet[1390]: I0212 20:24:43.218771 1390 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:24:43.218978 kubelet[1390]: I0212 20:24:43.218963 1390 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:24:43.219066 kubelet[1390]: I0212 20:24:43.219052 1390 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:43.219206 kubelet[1390]: E0212 20:24:43.219056 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.219713 kubelet[1390]: E0212 20:24:43.219661 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:43.220540 kubelet[1390]: E0212 20:24:43.220494 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.221977 kubelet[1390]: I0212 20:24:43.221963 1390 policy_none.go:49] "None policy: Start" Feb 12 20:24:43.222436 kubelet[1390]: I0212 20:24:43.222415 1390 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:24:43.222494 kubelet[1390]: I0212 20:24:43.222436 1390 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:24:43.227943 systemd[1]: Created slice kubepods.slice. Feb 12 20:24:43.230898 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 20:24:43.243060 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 12 20:24:43.243992 kubelet[1390]: I0212 20:24:43.243977 1390 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:24:43.244153 kubelet[1390]: I0212 20:24:43.244132 1390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:24:43.244702 kubelet[1390]: E0212 20:24:43.244684 1390 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.79\" not found" Feb 12 20:24:43.251636 kubelet[1390]: E0212 20:24:43.251275 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bff688a89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 250666121, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 250666121, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:43.301296 kubelet[1390]: I0212 20:24:43.301265 1390 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:43.302486 kubelet[1390]: E0212 20:24:43.302472 1390 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.79" Feb 12 20:24:43.302605 kubelet[1390]: E0212 20:24:43.302466 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 301222054, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79c8ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.303257 kubelet[1390]: E0212 20:24:43.303213 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 301226703, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79e9a4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:43.303834 kubelet[1390]: E0212 20:24:43.303783 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 301229325, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79f29e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.309630 kubelet[1390]: I0212 20:24:43.309614 1390 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:24:43.323948 kubelet[1390]: I0212 20:24:43.323930 1390 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:24:43.323948 kubelet[1390]: I0212 20:24:43.323950 1390 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:24:43.324061 kubelet[1390]: I0212 20:24:43.323967 1390 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:24:43.324061 kubelet[1390]: E0212 20:24:43.324017 1390 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:24:43.324749 kubelet[1390]: W0212 20:24:43.324732 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:43.324749 kubelet[1390]: E0212 20:24:43.324750 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:43.404332 kubelet[1390]: E0212 20:24:43.404295 1390 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.79" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:43.503886 kubelet[1390]: I0212 20:24:43.503789 1390 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:43.504793 kubelet[1390]: E0212 20:24:43.504766 1390 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster 
scope" node="10.0.0.79" Feb 12 20:24:43.505052 kubelet[1390]: E0212 20:24:43.504902 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 503751474, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79c8ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.505646 kubelet[1390]: E0212 20:24:43.505601 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 503761478, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79e9a4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:43.597869 kubelet[1390]: E0212 20:24:43.597794 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 503765063, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79f29e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:43.805451 kubelet[1390]: E0212 20:24:43.805341 1390 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.79" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:43.905424 kubelet[1390]: I0212 20:24:43.905405 1390 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:43.906406 kubelet[1390]: E0212 20:24:43.906319 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 905371430, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79c8ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:43.906534 kubelet[1390]: E0212 20:24:43.906420 1390 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.79" Feb 12 20:24:43.997501 kubelet[1390]: E0212 20:24:43.997444 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 905378113, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79e9a4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:44.049751 kubelet[1390]: W0212 20:24:44.049727 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:44.049804 kubelet[1390]: E0212 20:24:44.049753 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:44.063727 kubelet[1390]: W0212 20:24:44.063664 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:44.063727 kubelet[1390]: E0212 20:24:44.063681 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:44.186446 kubelet[1390]: W0212 20:24:44.186422 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:44.186446 kubelet[1390]: E0212 20:24:44.186446 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group 
"" at the cluster scope Feb 12 20:24:44.195726 kubelet[1390]: E0212 20:24:44.195708 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:44.197539 kubelet[1390]: E0212 20:24:44.197480 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 905382366, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79f29e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:44.473742 kubelet[1390]: W0212 20:24:44.473624 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:44.473742 kubelet[1390]: E0212 20:24:44.473665 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:44.607350 kubelet[1390]: E0212 20:24:44.607304 1390 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.79" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:44.707774 kubelet[1390]: I0212 20:24:44.707737 1390 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:44.708991 kubelet[1390]: E0212 20:24:44.708968 1390 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.79" Feb 12 20:24:44.709062 kubelet[1390]: E0212 20:24:44.708970 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 44, 707693251, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79c8ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:44.709701 kubelet[1390]: E0212 20:24:44.709648 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 44, 707701574, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79e9a4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:44.798596 kubelet[1390]: E0212 20:24:44.798445 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 44, 707705589, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79f29e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:45.196657 kubelet[1390]: E0212 20:24:45.196545 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:45.958408 kubelet[1390]: W0212 20:24:45.958368 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:45.958408 kubelet[1390]: E0212 20:24:45.958399 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:46.180546 kubelet[1390]: W0212 20:24:46.180425 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:46.180546 kubelet[1390]: E0212 20:24:46.180481 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:46.196755 kubelet[1390]: E0212 20:24:46.196721 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:46.208377 kubelet[1390]: E0212 20:24:46.208358 1390 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.79" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:46.310225 kubelet[1390]: I0212 20:24:46.309906 1390 
kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:46.311052 kubelet[1390]: E0212 20:24:46.311029 1390 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.79" Feb 12 20:24:46.311133 kubelet[1390]: E0212 20:24:46.311036 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 46, 309853824, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79c8ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:46.311933 kubelet[1390]: E0212 20:24:46.311885 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 46, 309865217, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79e9a4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:46.312582 kubelet[1390]: E0212 20:24:46.312531 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 46, 309869516, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79f29e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:46.398346 kubelet[1390]: W0212 20:24:46.398272 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:46.398346 kubelet[1390]: E0212 20:24:46.398330 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:46.416985 kubelet[1390]: W0212 20:24:46.416906 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:46.416985 kubelet[1390]: E0212 20:24:46.416951 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:47.196858 kubelet[1390]: E0212 20:24:47.196803 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:48.197557 kubelet[1390]: E0212 20:24:48.197500 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:49.198458 kubelet[1390]: E0212 20:24:49.198410 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:49.331438 kubelet[1390]: W0212 20:24:49.331403 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:49.331438 kubelet[1390]: E0212 
20:24:49.331426 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:49.409876 kubelet[1390]: E0212 20:24:49.409844 1390 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.79" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:49.512113 kubelet[1390]: I0212 20:24:49.512011 1390 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:49.513029 kubelet[1390]: E0212 20:24:49.512994 1390 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.79" Feb 12 20:24:49.513177 kubelet[1390]: E0212 20:24:49.513116 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79c8ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.79 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218241773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 49, 511974772, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79c8ed" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:49.513867 kubelet[1390]: E0212 20:24:49.513803 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79e9a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.79 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218250148, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 49, 511982454, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79e9a4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:49.514505 kubelet[1390]: E0212 20:24:49.514468 1390 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79.17b3374bfd79f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.79", UID:"10.0.0.79", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.79 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.79"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 43, 218252446, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 49, 511984557, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.79.17b3374bfd79f29e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:50.011096 kubelet[1390]: W0212 20:24:50.010990 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:50.011096 kubelet[1390]: E0212 20:24:50.011025 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:50.199465 kubelet[1390]: E0212 20:24:50.199398 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:50.303731 kubelet[1390]: W0212 20:24:50.303624 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:50.303731 kubelet[1390]: E0212 20:24:50.303658 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.79" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:51.199690 kubelet[1390]: E0212 20:24:51.199623 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:51.502159 kubelet[1390]: W0212 20:24:51.502064 1390 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:51.502159 kubelet[1390]: E0212 20:24:51.502097 1390 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:52.200419 kubelet[1390]: E0212 20:24:52.200345 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:53.190305 kubelet[1390]: I0212 20:24:53.190203 1390 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 20:24:53.200544 kubelet[1390]: E0212 20:24:53.200505 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:53.245079 kubelet[1390]: E0212 20:24:53.245007 1390 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.79\" not found" Feb 12 20:24:53.542123 kubelet[1390]: E0212 20:24:53.542002 1390 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.79" not found Feb 12 20:24:54.200981 kubelet[1390]: E0212 20:24:54.200928 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:54.604151 kubelet[1390]: E0212 20:24:54.604043 1390 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes 
"10.0.0.79" not found Feb 12 20:24:55.201440 kubelet[1390]: E0212 20:24:55.201366 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:55.814540 kubelet[1390]: E0212 20:24:55.814491 1390 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.79\" not found" node="10.0.0.79" Feb 12 20:24:55.914653 kubelet[1390]: I0212 20:24:55.914616 1390 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.79" Feb 12 20:24:56.006896 kubelet[1390]: I0212 20:24:56.006854 1390 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.79" Feb 12 20:24:56.018559 kubelet[1390]: E0212 20:24:56.018511 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.119693 kubelet[1390]: E0212 20:24:56.119562 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.202328 kubelet[1390]: E0212 20:24:56.202253 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:56.220022 kubelet[1390]: E0212 20:24:56.219980 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.321137 kubelet[1390]: E0212 20:24:56.321047 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.384639 sudo[1211]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:56.386383 sshd[1207]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:56.388929 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:35952.service: Deactivated successfully. Feb 12 20:24:56.389655 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:24:56.390202 systemd-logind[1107]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:24:56.390894 systemd-logind[1107]: Removed session 5. Feb 12 20:24:56.421593 kubelet[1390]: E0212 20:24:56.421536 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.522440 kubelet[1390]: E0212 20:24:56.522358 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.623629 kubelet[1390]: E0212 20:24:56.623547 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.724090 kubelet[1390]: E0212 20:24:56.723895 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.824748 kubelet[1390]: E0212 20:24:56.824687 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:56.925574 kubelet[1390]: E0212 20:24:56.925530 1390 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.79\" not found" Feb 12 20:24:57.026938 kubelet[1390]: I0212 20:24:57.026825 1390 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 20:24:57.027156 env[1123]: time="2024-02-12T20:24:57.027114182Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 20:24:57.027520 kubelet[1390]: I0212 20:24:57.027338 1390 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 20:24:57.202486 kubelet[1390]: E0212 20:24:57.202415 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:57.202486 kubelet[1390]: I0212 20:24:57.202426 1390 apiserver.go:52] "Watching apiserver" Feb 12 20:24:57.205491 kubelet[1390]: I0212 20:24:57.205448 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:57.205561 kubelet[1390]: I0212 20:24:57.205540 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:57.210519 systemd[1]: Created slice kubepods-burstable-podf56760cc_fb61_46d9_b17e_54cdff3ecd3c.slice. Feb 12 20:24:57.232636 systemd[1]: Created slice kubepods-besteffort-podcb43f0c5_29f0_4b70_a391_3fa0791aad55.slice. Feb 12 20:24:57.300981 kubelet[1390]: I0212 20:24:57.300875 1390 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:24:57.387528 kubelet[1390]: I0212 20:24:57.387456 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-kernel\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387699 kubelet[1390]: I0212 20:24:57.387548 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-net\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387699 kubelet[1390]: I0212 20:24:57.387613 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cni-path\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387699 kubelet[1390]: I0212 20:24:57.387653 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-etc-cni-netd\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387837 kubelet[1390]: I0212 20:24:57.387714 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-lib-modules\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387837 kubelet[1390]: I0212 20:24:57.387747 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-clustermesh-secrets\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387837 kubelet[1390]: I0212 20:24:57.387773 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hubble-tls\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.387837 kubelet[1390]: I0212 20:24:57.387836 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v86t\" (UniqueName: \"kubernetes.io/projected/cb43f0c5-29f0-4b70-a391-3fa0791aad55-kube-api-access-4v86t\") pod \"kube-proxy-s7sdg\" (UID: \"cb43f0c5-29f0-4b70-a391-3fa0791aad55\") " pod="kube-system/kube-proxy-s7sdg" Feb 12 20:24:57.387989 kubelet[1390]: I0212 20:24:57.387969 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hostproc\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388025 kubelet[1390]: I0212 20:24:57.388010 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-xtables-lock\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388057 kubelet[1390]: I0212 20:24:57.388038 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-config-path\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388090 kubelet[1390]: I0212 20:24:57.388065 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb43f0c5-29f0-4b70-a391-3fa0791aad55-kube-proxy\") pod \"kube-proxy-s7sdg\" (UID: \"cb43f0c5-29f0-4b70-a391-3fa0791aad55\") " pod="kube-system/kube-proxy-s7sdg" Feb 12 20:24:57.388127 kubelet[1390]: I0212 20:24:57.388091 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb43f0c5-29f0-4b70-a391-3fa0791aad55-xtables-lock\") pod \"kube-proxy-s7sdg\" (UID: \"cb43f0c5-29f0-4b70-a391-3fa0791aad55\") " pod="kube-system/kube-proxy-s7sdg" Feb 12 20:24:57.388127 kubelet[1390]: I0212 20:24:57.388117 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-run\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388193 kubelet[1390]: I0212 20:24:57.388159 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-cgroup\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388245 kubelet[1390]: I0212 20:24:57.388224 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smv6s\" (UniqueName: \"kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-kube-api-access-smv6s\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " 
pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388307 kubelet[1390]: I0212 20:24:57.388292 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb43f0c5-29f0-4b70-a391-3fa0791aad55-lib-modules\") pod \"kube-proxy-s7sdg\" (UID: \"cb43f0c5-29f0-4b70-a391-3fa0791aad55\") " pod="kube-system/kube-proxy-s7sdg" Feb 12 20:24:57.388339 kubelet[1390]: I0212 20:24:57.388331 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-bpf-maps\") pod \"cilium-jrrbr\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " pod="kube-system/cilium-jrrbr" Feb 12 20:24:57.388369 kubelet[1390]: I0212 20:24:57.388357 1390 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:24:58.202806 kubelet[1390]: E0212 20:24:58.202752 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:58.402593 kubelet[1390]: I0212 20:24:58.402541 1390 request.go:690] Waited for 1.196530582s due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.70:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 12 20:24:58.745167 kubelet[1390]: E0212 20:24:58.745113 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:58.745879 env[1123]: time="2024-02-12T20:24:58.745823620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7sdg,Uid:cb43f0c5-29f0-4b70-a391-3fa0791aad55,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:59.031044 kubelet[1390]: E0212 20:24:59.030927 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:59.031497 env[1123]: time="2024-02-12T20:24:59.031447836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jrrbr,Uid:f56760cc-fb61-46d9-b17e-54cdff3ecd3c,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:59.203564 kubelet[1390]: E0212 20:24:59.203525 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:59.350501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947190227.mount: Deactivated successfully. 
Feb 12 20:24:59.358003 env[1123]: time="2024-02-12T20:24:59.357960758Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.361434 env[1123]: time="2024-02-12T20:24:59.361389102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.362689 env[1123]: time="2024-02-12T20:24:59.362649720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.364546 env[1123]: time="2024-02-12T20:24:59.364496980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.365749 env[1123]: time="2024-02-12T20:24:59.365729700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.366892 env[1123]: time="2024-02-12T20:24:59.366870182Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.369034 env[1123]: time="2024-02-12T20:24:59.369009177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.369916 env[1123]: time="2024-02-12T20:24:59.369890233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:59.385576 env[1123]: time="2024-02-12T20:24:59.385514510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:59.385576 env[1123]: time="2024-02-12T20:24:59.385558437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:59.385752 env[1123]: time="2024-02-12T20:24:59.385575551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:59.385885 env[1123]: time="2024-02-12T20:24:59.385848046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93bfa7957892df278785a4b011db60a8a63f3de908a8302e14486528f4d22dfc pid=1484 runtime=io.containerd.runc.v2 Feb 12 20:24:59.388957 env[1123]: time="2024-02-12T20:24:59.388805462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:59.388957 env[1123]: time="2024-02-12T20:24:59.388837703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:59.388957 env[1123]: time="2024-02-12T20:24:59.388848968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:59.389081 env[1123]: time="2024-02-12T20:24:59.388983049Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f pid=1497 runtime=io.containerd.runc.v2 Feb 12 20:24:59.396692 systemd[1]: Started cri-containerd-93bfa7957892df278785a4b011db60a8a63f3de908a8302e14486528f4d22dfc.scope. Feb 12 20:24:59.404919 systemd[1]: Started cri-containerd-145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f.scope. Feb 12 20:24:59.421676 env[1123]: time="2024-02-12T20:24:59.421619028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7sdg,Uid:cb43f0c5-29f0-4b70-a391-3fa0791aad55,Namespace:kube-system,Attempt:0,} returns sandbox id \"93bfa7957892df278785a4b011db60a8a63f3de908a8302e14486528f4d22dfc\"" Feb 12 20:24:59.422586 kubelet[1390]: E0212 20:24:59.422556 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:59.423793 env[1123]: time="2024-02-12T20:24:59.423752486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:24:59.425471 env[1123]: time="2024-02-12T20:24:59.425430795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jrrbr,Uid:f56760cc-fb61-46d9-b17e-54cdff3ecd3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\"" Feb 12 20:24:59.426244 kubelet[1390]: E0212 20:24:59.426228 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:00.204631 kubelet[1390]: E0212 20:25:00.204582 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:00.523199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2741433404.mount: Deactivated successfully. 
Feb 12 20:25:01.068078 env[1123]: time="2024-02-12T20:25:01.068016394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:01.069936 env[1123]: time="2024-02-12T20:25:01.069906989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:01.071299 env[1123]: time="2024-02-12T20:25:01.071256819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:01.072753 env[1123]: time="2024-02-12T20:25:01.072728022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:01.073217 env[1123]: time="2024-02-12T20:25:01.073186105Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:25:01.074087 env[1123]: time="2024-02-12T20:25:01.074065536Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:25:01.074872 env[1123]: time="2024-02-12T20:25:01.074839356Z" level=info msg="CreateContainer within sandbox \"93bfa7957892df278785a4b011db60a8a63f3de908a8302e14486528f4d22dfc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:25:01.090388 env[1123]: time="2024-02-12T20:25:01.090307825Z" level=info msg="CreateContainer within sandbox \"93bfa7957892df278785a4b011db60a8a63f3de908a8302e14486528f4d22dfc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e334b0ca21859b151bc0e8161b344184e0e6d966b40d83c76033467e8afab280\"" Feb 12 20:25:01.090790 env[1123]: time="2024-02-12T20:25:01.090765938Z" level=info msg="StartContainer for \"e334b0ca21859b151bc0e8161b344184e0e6d966b40d83c76033467e8afab280\"" Feb 12 20:25:01.105756 systemd[1]: run-containerd-runc-k8s.io-e334b0ca21859b151bc0e8161b344184e0e6d966b40d83c76033467e8afab280-runc.gRwUGh.mount: Deactivated successfully. Feb 12 20:25:01.108709 systemd[1]: Started cri-containerd-e334b0ca21859b151bc0e8161b344184e0e6d966b40d83c76033467e8afab280.scope. 
Feb 12 20:25:01.132952 env[1123]: time="2024-02-12T20:25:01.132903597Z" level=info msg="StartContainer for \"e334b0ca21859b151bc0e8161b344184e0e6d966b40d83c76033467e8afab280\" returns successfully" Feb 12 20:25:01.205764 kubelet[1390]: E0212 20:25:01.205732 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:01.350200 kubelet[1390]: E0212 20:25:01.350108 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:01.356755 kubelet[1390]: I0212 20:25:01.356725 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s7sdg" podStartSLOduration=-9.223372031498087e+09 pod.CreationTimestamp="2024-02-12 20:24:56 +0000 UTC" firstStartedPulling="2024-02-12 20:24:59.423341647 +0000 UTC m=+16.551315812" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:01.35666466 +0000 UTC m=+18.484638826" watchObservedRunningTime="2024-02-12 20:25:01.356688709 +0000 UTC m=+18.484662874" Feb 12 20:25:02.206479 kubelet[1390]: E0212 20:25:02.206421 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:02.351643 kubelet[1390]: E0212 20:25:02.351608 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:03.195702 kubelet[1390]: E0212 20:25:03.195648 1390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:03.206805 kubelet[1390]: E0212 20:25:03.206783 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:04.207845 kubelet[1390]: E0212 20:25:04.207801 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:05.208450 kubelet[1390]: E0212 20:25:05.208411 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:06.125569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556523138.mount: Deactivated successfully. 
Feb 12 20:25:06.208768 kubelet[1390]: E0212 20:25:06.208719 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:07.209466 kubelet[1390]: E0212 20:25:07.209428 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:08.209862 kubelet[1390]: E0212 20:25:08.209823 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:09.210123 kubelet[1390]: E0212 20:25:09.210068 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:10.179014 env[1123]: time="2024-02-12T20:25:10.178958768Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:10.180793 env[1123]: time="2024-02-12T20:25:10.180734977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:10.182461 env[1123]: time="2024-02-12T20:25:10.182433331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:10.182929 env[1123]: time="2024-02-12T20:25:10.182896934Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:25:10.186892 env[1123]: time="2024-02-12T20:25:10.186856535Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:25:10.200987 env[1123]: time="2024-02-12T20:25:10.200947538Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\"" Feb 12 20:25:10.201426 env[1123]: time="2024-02-12T20:25:10.201395320Z" level=info msg="StartContainer for \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\"" Feb 12 20:25:10.210789 kubelet[1390]: E0212 20:25:10.210740 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:10.215720 systemd[1]: Started cri-containerd-0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5.scope. Feb 12 20:25:10.238514 env[1123]: time="2024-02-12T20:25:10.238450777Z" level=info msg="StartContainer for \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\" returns successfully" Feb 12 20:25:10.246531 systemd[1]: cri-containerd-0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5.scope: Deactivated successfully. 
Feb 12 20:25:10.362078 kubelet[1390]: E0212 20:25:10.362052 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:10.585302 env[1123]: time="2024-02-12T20:25:10.585197964Z" level=info msg="shim disconnected" id=0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5 Feb 12 20:25:10.585302 env[1123]: time="2024-02-12T20:25:10.585236215Z" level=warning msg="cleaning up after shim disconnected" id=0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5 namespace=k8s.io Feb 12 20:25:10.585302 env[1123]: time="2024-02-12T20:25:10.585244737Z" level=info msg="cleaning up dead shim" Feb 12 20:25:10.590568 env[1123]: time="2024-02-12T20:25:10.590529276Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1751 runtime=io.containerd.runc.v2\n" Feb 12 20:25:11.195979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5-rootfs.mount: Deactivated successfully. Feb 12 20:25:11.211352 kubelet[1390]: E0212 20:25:11.211319 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:11.364489 kubelet[1390]: E0212 20:25:11.364455 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:11.366088 env[1123]: time="2024-02-12T20:25:11.366049403Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:25:11.386998 env[1123]: time="2024-02-12T20:25:11.386955884Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\"" Feb 12 20:25:11.387462 env[1123]: time="2024-02-12T20:25:11.387433745Z" level=info msg="StartContainer for \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\"" Feb 12 20:25:11.402596 systemd[1]: Started cri-containerd-d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac.scope. Feb 12 20:25:11.423189 env[1123]: time="2024-02-12T20:25:11.423139353Z" level=info msg="StartContainer for \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\" returns successfully" Feb 12 20:25:11.432771 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:25:11.433079 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:25:11.433339 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:25:11.435294 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:25:11.437208 systemd[1]: cri-containerd-d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac.scope: Deactivated successfully. Feb 12 20:25:11.443578 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 20:25:11.458922 env[1123]: time="2024-02-12T20:25:11.458794521Z" level=info msg="shim disconnected" id=d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac Feb 12 20:25:11.458922 env[1123]: time="2024-02-12T20:25:11.458858175Z" level=warning msg="cleaning up after shim disconnected" id=d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac namespace=k8s.io Feb 12 20:25:11.458922 env[1123]: time="2024-02-12T20:25:11.458871028Z" level=info msg="cleaning up dead shim" Feb 12 20:25:11.466070 env[1123]: time="2024-02-12T20:25:11.466016116Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1813 runtime=io.containerd.runc.v2\n" Feb 12 20:25:12.195494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac-rootfs.mount: Deactivated successfully. Feb 12 20:25:12.211777 kubelet[1390]: E0212 20:25:12.211748 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:12.367125 kubelet[1390]: E0212 20:25:12.367106 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:12.368444 env[1123]: time="2024-02-12T20:25:12.368402041Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:25:12.655404 env[1123]: time="2024-02-12T20:25:12.655354831Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\"" Feb 12 20:25:12.655727 env[1123]: time="2024-02-12T20:25:12.655700941Z" level=info msg="StartContainer for \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\"" Feb 12 20:25:12.670230 systemd[1]: Started cri-containerd-cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5.scope. Feb 12 20:25:12.692259 env[1123]: time="2024-02-12T20:25:12.692217731Z" level=info msg="StartContainer for \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\" returns successfully" Feb 12 20:25:12.693039 systemd[1]: cri-containerd-cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5.scope: Deactivated successfully. Feb 12 20:25:12.711321 env[1123]: time="2024-02-12T20:25:12.711241270Z" level=info msg="shim disconnected" id=cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5 Feb 12 20:25:12.711321 env[1123]: time="2024-02-12T20:25:12.711305273Z" level=warning msg="cleaning up after shim disconnected" id=cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5 namespace=k8s.io Feb 12 20:25:12.711321 env[1123]: time="2024-02-12T20:25:12.711317213Z" level=info msg="cleaning up dead shim" Feb 12 20:25:12.717970 env[1123]: time="2024-02-12T20:25:12.717912884Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1872 runtime=io.containerd.runc.v2\n" Feb 12 20:25:13.195869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5-rootfs.mount: Deactivated successfully. 
Feb 12 20:25:13.212296 kubelet[1390]: E0212 20:25:13.212259 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:13.370150 kubelet[1390]: E0212 20:25:13.370130 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:13.371653 env[1123]: time="2024-02-12T20:25:13.371612135Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:25:13.386365 env[1123]: time="2024-02-12T20:25:13.386330800Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\"" Feb 12 20:25:13.386692 env[1123]: time="2024-02-12T20:25:13.386655955Z" level=info msg="StartContainer for \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\"" Feb 12 20:25:13.400258 systemd[1]: Started cri-containerd-9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12.scope. Feb 12 20:25:13.418372 systemd[1]: cri-containerd-9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12.scope: Deactivated successfully. Feb 12 20:25:13.419471 env[1123]: time="2024-02-12T20:25:13.419435403Z" level=info msg="StartContainer for \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\" returns successfully" Feb 12 20:25:13.436294 env[1123]: time="2024-02-12T20:25:13.436227786Z" level=info msg="shim disconnected" id=9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12 Feb 12 20:25:13.436294 env[1123]: time="2024-02-12T20:25:13.436270824Z" level=warning msg="cleaning up after shim disconnected" id=9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12 namespace=k8s.io Feb 12 20:25:13.436294 env[1123]: time="2024-02-12T20:25:13.436291014Z" level=info msg="cleaning up dead shim" Feb 12 20:25:13.441656 env[1123]: time="2024-02-12T20:25:13.441608102Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1927 runtime=io.containerd.runc.v2\n" Feb 12 20:25:14.195728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12-rootfs.mount: Deactivated successfully. 
Feb 12 20:25:14.212703 kubelet[1390]: E0212 20:25:14.212679 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:14.373266 kubelet[1390]: E0212 20:25:14.373231 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:14.375066 env[1123]: time="2024-02-12T20:25:14.375029901Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:25:14.389519 env[1123]: time="2024-02-12T20:25:14.389477682Z" level=info msg="CreateContainer within sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\"" Feb 12 20:25:14.389889 env[1123]: time="2024-02-12T20:25:14.389853512Z" level=info msg="StartContainer for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\"" Feb 12 20:25:14.403950 systemd[1]: Started cri-containerd-631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54.scope. Feb 12 20:25:14.429521 env[1123]: time="2024-02-12T20:25:14.429465671Z" level=info msg="StartContainer for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" returns successfully" Feb 12 20:25:14.487242 kubelet[1390]: I0212 20:25:14.487048 1390 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:25:14.703305 kernel: Initializing XFRM netlink socket Feb 12 20:25:15.196131 systemd[1]: run-containerd-runc-k8s.io-631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54-runc.2q5b02.mount: Deactivated successfully. 
Feb 12 20:25:15.212961 kubelet[1390]: E0212 20:25:15.212927 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:15.377875 kubelet[1390]: E0212 20:25:15.377824 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:16.213068 kubelet[1390]: E0212 20:25:16.213016 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:16.334000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:25:16.325449 systemd-networkd[1027]: cilium_host: Link UP Feb 12 20:25:16.325617 systemd-networkd[1027]: cilium_net: Link UP Feb 12 20:25:16.325621 systemd-networkd[1027]: cilium_net: Gained carrier Feb 12 20:25:16.325801 systemd-networkd[1027]: cilium_host: Gained carrier Feb 12 20:25:16.326805 systemd-networkd[1027]: cilium_host: Gained IPv6LL Feb 12 20:25:16.379172 kubelet[1390]: E0212 20:25:16.379139 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:16.392990 systemd-networkd[1027]: cilium_vxlan: Link UP Feb 12 20:25:16.392998 systemd-networkd[1027]: cilium_vxlan: Gained carrier Feb 12 20:25:16.583317 kernel: NET: Registered PF_ALG protocol family Feb 12 20:25:16.662524 systemd-networkd[1027]: cilium_net: Gained IPv6LL Feb 12 20:25:17.083195 systemd-networkd[1027]: lxc_health: Link UP Feb 12 20:25:17.094428 systemd-networkd[1027]: lxc_health: Gained carrier Feb 12 20:25:17.095294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:25:17.161325 update_engine[1110]: I0212 20:25:17.161254 1110 update_attempter.cc:509] Updating boot flags... Feb 12 20:25:17.213872 kubelet[1390]: E0212 20:25:17.213836 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:17.380183 kubelet[1390]: E0212 20:25:17.380111 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:17.839415 systemd-networkd[1027]: cilium_vxlan: Gained IPv6LL Feb 12 20:25:17.974301 kubelet[1390]: I0212 20:25:17.974254 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jrrbr" podStartSLOduration=-9.22337201488056e+09 pod.CreationTimestamp="2024-02-12 20:24:56 +0000 UTC" firstStartedPulling="2024-02-12 20:24:59.426676687 +0000 UTC m=+16.554650852" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:15.402708312 +0000 UTC m=+32.530682487" watchObservedRunningTime="2024-02-12 20:25:17.974215935 +0000 UTC m=+35.102190100" Feb 12 20:25:17.974503 kubelet[1390]: I0212 20:25:17.974490 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:17.978941 systemd[1]: Created slice kubepods-besteffort-pod1b4d33b4_3364_42d0_8cef_e562c7b040d6.slice. 
Feb 12 20:25:18.106800 kubelet[1390]: I0212 20:25:18.106693 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx865\" (UniqueName: \"kubernetes.io/projected/1b4d33b4-3364-42d0-8cef-e562c7b040d6-kube-api-access-nx865\") pod \"nginx-deployment-8ffc5cf85-s7cc8\" (UID: \"1b4d33b4-3364-42d0-8cef-e562c7b040d6\") " pod="default/nginx-deployment-8ffc5cf85-s7cc8" Feb 12 20:25:18.214346 kubelet[1390]: E0212 20:25:18.214274 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:18.281951 env[1123]: time="2024-02-12T20:25:18.281902243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-s7cc8,Uid:1b4d33b4-3364-42d0-8cef-e562c7b040d6,Namespace:default,Attempt:0,}" Feb 12 20:25:18.315180 systemd-networkd[1027]: lxc7ff6cadfd9df: Link UP Feb 12 20:25:18.322316 kernel: eth0: renamed from tmp895db Feb 12 20:25:18.332305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:18.332378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7ff6cadfd9df: link becomes ready Feb 12 20:25:18.332394 systemd-networkd[1027]: lxc7ff6cadfd9df: Gained carrier Feb 12 20:25:18.382438 kubelet[1390]: E0212 20:25:18.382100 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:19.054474 systemd-networkd[1027]: lxc_health: Gained IPv6LL Feb 12 20:25:19.214982 kubelet[1390]: E0212 20:25:19.214934 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:19.383111 kubelet[1390]: E0212 20:25:19.383092 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:19.950621 systemd-networkd[1027]: lxc7ff6cadfd9df: Gained IPv6LL Feb 12 20:25:20.216137 kubelet[1390]: E0212 20:25:20.216019 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:20.383730 kubelet[1390]: E0212 20:25:20.383709 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:21.216921 kubelet[1390]: E0212 20:25:21.216885 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:21.289502 env[1123]: time="2024-02-12T20:25:21.289409823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:21.289502 env[1123]: time="2024-02-12T20:25:21.289454706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:21.289502 env[1123]: time="2024-02-12T20:25:21.289475705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:21.289849 env[1123]: time="2024-02-12T20:25:21.289641256Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/895db18282f584c521584c2ed771aa2747c5473151755af13ba25b366a95bcc7 pid=2471 runtime=io.containerd.runc.v2 Feb 12 20:25:21.303459 systemd[1]: Started cri-containerd-895db18282f584c521584c2ed771aa2747c5473151755af13ba25b366a95bcc7.scope. Feb 12 20:25:21.312427 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:25:21.333312 env[1123]: time="2024-02-12T20:25:21.332161841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-s7cc8,Uid:1b4d33b4-3364-42d0-8cef-e562c7b040d6,Namespace:default,Attempt:0,} returns sandbox id \"895db18282f584c521584c2ed771aa2747c5473151755af13ba25b366a95bcc7\"" Feb 12 20:25:21.333741 env[1123]: time="2024-02-12T20:25:21.333700528Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:25:22.217755 kubelet[1390]: E0212 20:25:22.217696 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:23.195731 kubelet[1390]: E0212 20:25:23.195674 1390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:23.218871 kubelet[1390]: E0212 20:25:23.218827 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:24.219588 kubelet[1390]: E0212 20:25:24.219549 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:24.347071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404932081.mount: Deactivated successfully. 
Feb 12 20:25:25.220029 kubelet[1390]: E0212 20:25:25.219982 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:25.444154 env[1123]: time="2024-02-12T20:25:25.444098963Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:25.445828 env[1123]: time="2024-02-12T20:25:25.445794984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:25.447235 env[1123]: time="2024-02-12T20:25:25.447215861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:25.448519 env[1123]: time="2024-02-12T20:25:25.448463631Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:25.449135 env[1123]: time="2024-02-12T20:25:25.449100641Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:25:25.450501 env[1123]: time="2024-02-12T20:25:25.450477009Z" level=info msg="CreateContainer within sandbox \"895db18282f584c521584c2ed771aa2747c5473151755af13ba25b366a95bcc7\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 20:25:25.462198 env[1123]: time="2024-02-12T20:25:25.462159777Z" level=info msg="CreateContainer within sandbox \"895db18282f584c521584c2ed771aa2747c5473151755af13ba25b366a95bcc7\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d4a0e70c6cf0e7b3862c7ba0663023da3e222d25a3017c42a547cb50e7dfb57b\"" Feb 12 20:25:25.462513 env[1123]: time="2024-02-12T20:25:25.462491146Z" level=info msg="StartContainer for \"d4a0e70c6cf0e7b3862c7ba0663023da3e222d25a3017c42a547cb50e7dfb57b\"" Feb 12 20:25:25.477422 systemd[1]: Started cri-containerd-d4a0e70c6cf0e7b3862c7ba0663023da3e222d25a3017c42a547cb50e7dfb57b.scope. 
Feb 12 20:25:25.495955 env[1123]: time="2024-02-12T20:25:25.495915331Z" level=info msg="StartContainer for \"d4a0e70c6cf0e7b3862c7ba0663023da3e222d25a3017c42a547cb50e7dfb57b\" returns successfully" Feb 12 20:25:26.221078 kubelet[1390]: E0212 20:25:26.221015 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:26.404459 kubelet[1390]: I0212 20:25:26.404212 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-s7cc8" podStartSLOduration=-9.223372027450663e+09 pod.CreationTimestamp="2024-02-12 20:25:17 +0000 UTC" firstStartedPulling="2024-02-12 20:25:21.333223221 +0000 UTC m=+38.461197386" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:26.403471325 +0000 UTC m=+43.531445490" watchObservedRunningTime="2024-02-12 20:25:26.404112626 +0000 UTC m=+43.532086791" Feb 12 20:25:27.221914 kubelet[1390]: E0212 20:25:27.221851 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:28.222019 kubelet[1390]: E0212 20:25:28.221983 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:29.222346 kubelet[1390]: E0212 20:25:29.222293 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:29.612734 kubelet[1390]: I0212 20:25:29.612689 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:29.620439 systemd[1]: Created slice kubepods-besteffort-pod9ce75532_eef5_4640_8801_981114f93f05.slice. Feb 12 20:25:29.766128 kubelet[1390]: I0212 20:25:29.766095 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkch\" (UniqueName: \"kubernetes.io/projected/9ce75532-eef5-4640-8801-981114f93f05-kube-api-access-lpkch\") pod \"nfs-server-provisioner-0\" (UID: \"9ce75532-eef5-4640-8801-981114f93f05\") " pod="default/nfs-server-provisioner-0" Feb 12 20:25:29.766128 kubelet[1390]: I0212 20:25:29.766140 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9ce75532-eef5-4640-8801-981114f93f05-data\") pod \"nfs-server-provisioner-0\" (UID: \"9ce75532-eef5-4640-8801-981114f93f05\") " pod="default/nfs-server-provisioner-0" Feb 12 20:25:29.924193 env[1123]: time="2024-02-12T20:25:29.924077492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9ce75532-eef5-4640-8801-981114f93f05,Namespace:default,Attempt:0,}" Feb 12 20:25:30.222883 kubelet[1390]: E0212 20:25:30.222741 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:30.378506 systemd-networkd[1027]: lxc9ddca7e6dfe6: Link UP Feb 12 20:25:30.385301 kernel: eth0: renamed from tmp80423 Feb 12 20:25:30.394984 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:30.395078 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9ddca7e6dfe6: link becomes ready Feb 12 20:25:30.395212 systemd-networkd[1027]: lxc9ddca7e6dfe6: Gained carrier Feb 12 20:25:30.620817 env[1123]: time="2024-02-12T20:25:30.620731846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:30.620817 env[1123]: time="2024-02-12T20:25:30.620769097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:30.620817 env[1123]: time="2024-02-12T20:25:30.620779429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:30.621077 env[1123]: time="2024-02-12T20:25:30.620945598Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8042318248352b47cd5efabe4906168ea82654b818437657f907a906576df08f pid=2651 runtime=io.containerd.runc.v2 Feb 12 20:25:30.633877 systemd[1]: Started cri-containerd-8042318248352b47cd5efabe4906168ea82654b818437657f907a906576df08f.scope. Feb 12 20:25:30.645717 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:25:30.665209 env[1123]: time="2024-02-12T20:25:30.665175687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9ce75532-eef5-4640-8801-981114f93f05,Namespace:default,Attempt:0,} returns sandbox id \"8042318248352b47cd5efabe4906168ea82654b818437657f907a906576df08f\"" Feb 12 20:25:30.666722 env[1123]: time="2024-02-12T20:25:30.666690464Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 20:25:31.223507 kubelet[1390]: E0212 20:25:31.223465 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:31.603946 systemd-networkd[1027]: lxc9ddca7e6dfe6: Gained IPv6LL Feb 12 20:25:32.224655 kubelet[1390]: E0212 20:25:32.224596 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:33.225520 kubelet[1390]: E0212 20:25:33.225459 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:33.515070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772912792.mount: Deactivated successfully. 
Feb 12 20:25:34.226300 kubelet[1390]: E0212 20:25:34.226221 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:35.227419 kubelet[1390]: E0212 20:25:35.227365 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:35.597246 env[1123]: time="2024-02-12T20:25:35.597180022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:35.598930 env[1123]: time="2024-02-12T20:25:35.598885971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:35.600379 env[1123]: time="2024-02-12T20:25:35.600339176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:35.601916 env[1123]: time="2024-02-12T20:25:35.601888956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:35.602571 env[1123]: time="2024-02-12T20:25:35.602544261Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 20:25:35.604209 env[1123]: time="2024-02-12T20:25:35.604176565Z" level=info msg="CreateContainer within sandbox \"8042318248352b47cd5efabe4906168ea82654b818437657f907a906576df08f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 20:25:35.613439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074988187.mount: Deactivated successfully. Feb 12 20:25:35.618233 env[1123]: time="2024-02-12T20:25:35.618179772Z" level=info msg="CreateContainer within sandbox \"8042318248352b47cd5efabe4906168ea82654b818437657f907a906576df08f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0d2a61b4aea7c9d2b13df1d6c8dd5b9345ed7e25969f40361d27ab677900178a\"" Feb 12 20:25:35.618800 env[1123]: time="2024-02-12T20:25:35.618759238Z" level=info msg="StartContainer for \"0d2a61b4aea7c9d2b13df1d6c8dd5b9345ed7e25969f40361d27ab677900178a\"" Feb 12 20:25:35.638621 systemd[1]: run-containerd-runc-k8s.io-0d2a61b4aea7c9d2b13df1d6c8dd5b9345ed7e25969f40361d27ab677900178a-runc.nS2Hqn.mount: Deactivated successfully. Feb 12 20:25:35.640379 systemd[1]: Started cri-containerd-0d2a61b4aea7c9d2b13df1d6c8dd5b9345ed7e25969f40361d27ab677900178a.scope. 
Feb 12 20:25:35.693954 env[1123]: time="2024-02-12T20:25:35.693869140Z" level=info msg="StartContainer for \"0d2a61b4aea7c9d2b13df1d6c8dd5b9345ed7e25969f40361d27ab677900178a\" returns successfully" Feb 12 20:25:36.228387 kubelet[1390]: E0212 20:25:36.228314 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:36.424774 kubelet[1390]: I0212 20:25:36.424735 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.22337202943008e+09 pod.CreationTimestamp="2024-02-12 20:25:29 +0000 UTC" firstStartedPulling="2024-02-12 20:25:30.666446827 +0000 UTC m=+47.794420992" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:36.424384626 +0000 UTC m=+53.552358801" watchObservedRunningTime="2024-02-12 20:25:36.424694739 +0000 UTC m=+53.552668914" Feb 12 20:25:37.229121 kubelet[1390]: E0212 20:25:37.229075 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:38.229372 kubelet[1390]: E0212 20:25:38.229314 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:39.230001 kubelet[1390]: E0212 20:25:39.229931 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:40.230874 kubelet[1390]: E0212 20:25:40.230826 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:41.231916 kubelet[1390]: E0212 20:25:41.231879 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:42.232901 kubelet[1390]: E0212 20:25:42.232832 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:43.196065 kubelet[1390]: E0212 20:25:43.196007 1390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:43.233937 kubelet[1390]: E0212 20:25:43.233885 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:44.234578 kubelet[1390]: E0212 20:25:44.234515 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:45.235561 kubelet[1390]: E0212 20:25:45.235503 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:45.760248 kubelet[1390]: I0212 20:25:45.760210 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:45.764141 systemd[1]: Created slice kubepods-besteffort-pod36128332_904e_4684_8625_5e20e4c1aa51.slice. 
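
The podStartSLOduration values around -9.22e+09 seconds in these "Observed pod startup duration" lines are a side effect of lastFinishedPulling being the zero time ("0001-01-01 00:00:00 +0000 UTC") when it enters the latency calculation: in Go, subtracting two times more than roughly 292 years apart saturates time.Duration at its minimum, about -9.223372e+09 seconds. The sketch below only demonstrates that saturation behaviour; it is not the kubelet's actual formula, and the timestamp is borrowed from the log for illustration.

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// lastFinishedPulling was never recorded, so it is the zero time.Time,
	// printed as "0001-01-01 00:00:00 +0000 UTC" in the log above.
	var lastFinishedPulling time.Time
	firstStartedPulling := time.Date(2024, 2, 12, 20, 25, 30, 666446827, time.UTC)

	// The true gap (over 2000 years) does not fit in an int64 of nanoseconds,
	// so time.Time.Sub saturates at the minimum time.Duration.
	d := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(d == time.Duration(math.MinInt64)) // true
	fmt.Printf("%.6e seconds\n", d.Seconds())      // about -9.223372e+09, as logged
}
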
Feb 12 20:25:45.854130 kubelet[1390]: I0212 20:25:45.854086 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kcmw\" (UniqueName: \"kubernetes.io/projected/36128332-904e-4684-8625-5e20e4c1aa51-kube-api-access-8kcmw\") pod \"test-pod-1\" (UID: \"36128332-904e-4684-8625-5e20e4c1aa51\") " pod="default/test-pod-1" Feb 12 20:25:45.854130 kubelet[1390]: I0212 20:25:45.854141 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65779fe9-5327-4eb1-9037-3909164812af\" (UniqueName: \"kubernetes.io/nfs/36128332-904e-4684-8625-5e20e4c1aa51-pvc-65779fe9-5327-4eb1-9037-3909164812af\") pod \"test-pod-1\" (UID: \"36128332-904e-4684-8625-5e20e4c1aa51\") " pod="default/test-pod-1" Feb 12 20:25:45.975408 kernel: FS-Cache: Loaded Feb 12 20:25:46.009763 kernel: RPC: Registered named UNIX socket transport module. Feb 12 20:25:46.009905 kernel: RPC: Registered udp transport module. Feb 12 20:25:46.009932 kernel: RPC: Registered tcp transport module. Feb 12 20:25:46.009951 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 20:25:46.047312 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 20:25:46.219719 kernel: NFS: Registering the id_resolver key type Feb 12 20:25:46.220208 kernel: Key type id_resolver registered Feb 12 20:25:46.220358 kernel: Key type id_legacy registered Feb 12 20:25:46.235765 kubelet[1390]: E0212 20:25:46.235721 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:46.240060 nfsidmap[2801]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 20:25:46.242587 nfsidmap[2804]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 20:25:46.366633 env[1123]: time="2024-02-12T20:25:46.366584651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:36128332-904e-4684-8625-5e20e4c1aa51,Namespace:default,Attempt:0,}" Feb 12 20:25:46.509563 systemd-networkd[1027]: lxc719e845fd447: Link UP Feb 12 20:25:46.511311 kernel: eth0: renamed from tmp33ed0 Feb 12 20:25:46.518164 systemd-networkd[1027]: lxc719e845fd447: Gained carrier Feb 12 20:25:46.518299 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:46.518338 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc719e845fd447: link becomes ready Feb 12 20:25:47.015232 env[1123]: time="2024-02-12T20:25:47.014884062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:47.015232 env[1123]: time="2024-02-12T20:25:47.014924525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:47.015232 env[1123]: time="2024-02-12T20:25:47.014934926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:47.015232 env[1123]: time="2024-02-12T20:25:47.015132090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33ed0dc74a262b97593ad7095e4513a2e2272b2798d6a2ccd32b64c756549f2d pid=2838 runtime=io.containerd.runc.v2 Feb 12 20:25:47.028396 systemd[1]: Started cri-containerd-33ed0dc74a262b97593ad7095e4513a2e2272b2798d6a2ccd32b64c756549f2d.scope. 
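
The nfsidmap messages just above mean the NFSv4 owner string carries the domain nfs-server-provisioner.default.svc.cluster.local while the node's id-mapping domain is localdomain, so the ids cannot be translated and typically fall back to an anonymous user. A tiny Go sketch of the comparison being reported; the constants are copied from the messages and the matching rule is simplified for illustration.

package main

import (
	"fmt"
	"strings"
)

// localDomain stands in for the node's NFSv4 id-mapping domain
// ("localdomain" in the messages above); principal is the owner
// string the NFS client received.
const (
	localDomain = "localdomain"
	principal   = "root@nfs-server-provisioner.default.svc.cluster.local"
)

func main() {
	name, domain, ok := strings.Cut(principal, "@")
	if !ok {
		fmt.Println("principal has no domain part:", principal)
		return
	}
	if domain != localDomain {
		// This is the situation nfsidmap reports: the id cannot be mapped
		// into the local domain and ownership falls back to an anonymous user.
		fmt.Printf("%q does not map into domain %q\n", name+"@"+domain, localDomain)
		return
	}
	fmt.Println("domain matches; map", name, "to a local account")
}
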
Feb 12 20:25:47.036976 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:25:47.055497 env[1123]: time="2024-02-12T20:25:47.055435992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:36128332-904e-4684-8625-5e20e4c1aa51,Namespace:default,Attempt:0,} returns sandbox id \"33ed0dc74a262b97593ad7095e4513a2e2272b2798d6a2ccd32b64c756549f2d\"" Feb 12 20:25:47.057061 env[1123]: time="2024-02-12T20:25:47.057024196Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:25:47.236056 kubelet[1390]: E0212 20:25:47.236009 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:47.726480 systemd-networkd[1027]: lxc719e845fd447: Gained IPv6LL Feb 12 20:25:47.738869 env[1123]: time="2024-02-12T20:25:47.738812160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:47.810934 env[1123]: time="2024-02-12T20:25:47.810833084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:47.854637 env[1123]: time="2024-02-12T20:25:47.854563647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:47.890950 env[1123]: time="2024-02-12T20:25:47.890894651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:47.891622 env[1123]: time="2024-02-12T20:25:47.891596369Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 20:25:47.893328 env[1123]: time="2024-02-12T20:25:47.893300870Z" level=info msg="CreateContainer within sandbox \"33ed0dc74a262b97593ad7095e4513a2e2272b2798d6a2ccd32b64c756549f2d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 20:25:48.236645 kubelet[1390]: E0212 20:25:48.236588 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:48.253629 env[1123]: time="2024-02-12T20:25:48.253515719Z" level=info msg="CreateContainer within sandbox \"33ed0dc74a262b97593ad7095e4513a2e2272b2798d6a2ccd32b64c756549f2d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e85dbb2168be87e575def28d9d9b44e381990ef620fb417f4f06d1f7312ae219\"" Feb 12 20:25:48.254163 env[1123]: time="2024-02-12T20:25:48.254126177Z" level=info msg="StartContainer for \"e85dbb2168be87e575def28d9d9b44e381990ef620fb417f4f06d1f7312ae219\"" Feb 12 20:25:48.270191 systemd[1]: Started cri-containerd-e85dbb2168be87e575def28d9d9b44e381990ef620fb417f4f06d1f7312ae219.scope. 
Feb 12 20:25:48.406711 env[1123]: time="2024-02-12T20:25:48.406638321Z" level=info msg="StartContainer for \"e85dbb2168be87e575def28d9d9b44e381990ef620fb417f4f06d1f7312ae219\" returns successfully" Feb 12 20:25:48.445571 kubelet[1390]: I0212 20:25:48.445532 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372017409285e+09 pod.CreationTimestamp="2024-02-12 20:25:29 +0000 UTC" firstStartedPulling="2024-02-12 20:25:47.056579805 +0000 UTC m=+64.184553970" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:48.445401558 +0000 UTC m=+65.573375733" watchObservedRunningTime="2024-02-12 20:25:48.445491101 +0000 UTC m=+65.573465266" Feb 12 20:25:49.237695 kubelet[1390]: E0212 20:25:49.237657 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:50.238431 kubelet[1390]: E0212 20:25:50.238356 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:51.239584 kubelet[1390]: E0212 20:25:51.239487 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:52.239678 kubelet[1390]: E0212 20:25:52.239610 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:52.307242 env[1123]: time="2024-02-12T20:25:52.307177606Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:25:52.312132 env[1123]: time="2024-02-12T20:25:52.312103338Z" level=info msg="StopContainer for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" with timeout 1 (s)" Feb 12 20:25:52.312339 env[1123]: time="2024-02-12T20:25:52.312311873Z" level=info msg="Stop container \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" with signal terminated" Feb 12 20:25:52.317087 systemd-networkd[1027]: lxc_health: Link DOWN Feb 12 20:25:52.317094 systemd-networkd[1027]: lxc_health: Lost carrier Feb 12 20:25:52.351643 systemd[1]: cri-containerd-631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54.scope: Deactivated successfully. Feb 12 20:25:52.351946 systemd[1]: cri-containerd-631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54.scope: Consumed 5.971s CPU time. Feb 12 20:25:52.365344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54-rootfs.mount: Deactivated successfully. 
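
The containerd error that follows the Cilium agent being stopped is a direct consequence of its CNI configuration file being removed: with /etc/cni/net.d left empty there is no network config to load, which is also why the kubelet later reports the container runtime network as not ready. A short Go sketch that inspects the directory the same way, assuming the conventional .conf/.conflist/.json extensions for CNI configuration files:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfDir is the directory named in the containerd error above.
const cniConfDir = "/etc/cni/net.d"

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			configs = append(configs, e.Name())
		}
	}

	if len(configs) == 0 {
		// Matches the state behind "no network config found in /etc/cni/net.d":
		// the runtime has nothing left to initialize the CNI plugin with.
		fmt.Println("no CNI network configuration present in", cniConfDir)
		return
	}
	fmt.Println("CNI configurations:", configs)
}
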
Feb 12 20:25:52.485462 env[1123]: time="2024-02-12T20:25:52.485412936Z" level=info msg="shim disconnected" id=631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54 Feb 12 20:25:52.485462 env[1123]: time="2024-02-12T20:25:52.485458107Z" level=warning msg="cleaning up after shim disconnected" id=631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54 namespace=k8s.io Feb 12 20:25:52.485462 env[1123]: time="2024-02-12T20:25:52.485466425Z" level=info msg="cleaning up dead shim" Feb 12 20:25:52.492004 env[1123]: time="2024-02-12T20:25:52.491896946Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2969 runtime=io.containerd.runc.v2\n" Feb 12 20:25:52.569905 env[1123]: time="2024-02-12T20:25:52.569857594Z" level=info msg="StopContainer for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" returns successfully" Feb 12 20:25:52.570495 env[1123]: time="2024-02-12T20:25:52.570463325Z" level=info msg="StopPodSandbox for \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\"" Feb 12 20:25:52.570568 env[1123]: time="2024-02-12T20:25:52.570538708Z" level=info msg="Container to stop \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.570568 env[1123]: time="2024-02-12T20:25:52.570553869Z" level=info msg="Container to stop \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.570568 env[1123]: time="2024-02-12T20:25:52.570564420Z" level=info msg="Container to stop \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.570697 env[1123]: time="2024-02-12T20:25:52.570575132Z" level=info msg="Container to stop \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.570697 env[1123]: time="2024-02-12T20:25:52.570583930Z" level=info msg="Container to stop \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.572039 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f-shm.mount: Deactivated successfully. Feb 12 20:25:52.575555 systemd[1]: cri-containerd-145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f.scope: Deactivated successfully. Feb 12 20:25:52.589145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f-rootfs.mount: Deactivated successfully. 
Feb 12 20:25:52.682361 env[1123]: time="2024-02-12T20:25:52.682304507Z" level=info msg="shim disconnected" id=145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f Feb 12 20:25:52.682581 env[1123]: time="2024-02-12T20:25:52.682555316Z" level=warning msg="cleaning up after shim disconnected" id=145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f namespace=k8s.io Feb 12 20:25:52.682655 env[1123]: time="2024-02-12T20:25:52.682580237Z" level=info msg="cleaning up dead shim" Feb 12 20:25:52.689365 env[1123]: time="2024-02-12T20:25:52.689311178Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2999 runtime=io.containerd.runc.v2\n" Feb 12 20:25:52.689724 env[1123]: time="2024-02-12T20:25:52.689693116Z" level=info msg="TearDown network for sandbox \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" successfully" Feb 12 20:25:52.689779 env[1123]: time="2024-02-12T20:25:52.689724910Z" level=info msg="StopPodSandbox for \"145eebf3bca1127b4e13f73390ddb56b187fec6a5d00f0595c3ba6d9df17aa4f\" returns successfully" Feb 12 20:25:52.793728 kubelet[1390]: I0212 20:25:52.793572 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hubble-tls\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.793728 kubelet[1390]: I0212 20:25:52.793624 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-cgroup\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.793728 kubelet[1390]: I0212 20:25:52.793645 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hostproc\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.793728 kubelet[1390]: I0212 20:25:52.793665 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-config-path\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.793728 kubelet[1390]: I0212 20:25:52.793683 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-kernel\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.793728 kubelet[1390]: I0212 20:25:52.793698 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cni-path\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794043 kubelet[1390]: I0212 20:25:52.793714 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-lib-modules\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: 
\"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794043 kubelet[1390]: I0212 20:25:52.793728 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-xtables-lock\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794043 kubelet[1390]: I0212 20:25:52.793744 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-net\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794043 kubelet[1390]: I0212 20:25:52.793762 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-etc-cni-netd\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794043 kubelet[1390]: I0212 20:25:52.793780 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-clustermesh-secrets\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794043 kubelet[1390]: I0212 20:25:52.793796 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-run\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794231 kubelet[1390]: I0212 20:25:52.793813 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smv6s\" (UniqueName: \"kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-kube-api-access-smv6s\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794231 kubelet[1390]: I0212 20:25:52.793827 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-bpf-maps\") pod \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\" (UID: \"f56760cc-fb61-46d9-b17e-54cdff3ecd3c\") " Feb 12 20:25:52.794231 kubelet[1390]: I0212 20:25:52.793891 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.794231 kubelet[1390]: I0212 20:25:52.793926 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.794231 kubelet[1390]: I0212 20:25:52.793941 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cni-path" (OuterVolumeSpecName: "cni-path") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.794443 kubelet[1390]: I0212 20:25:52.793955 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.794443 kubelet[1390]: I0212 20:25:52.793969 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.794443 kubelet[1390]: I0212 20:25:52.793983 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.794443 kubelet[1390]: I0212 20:25:52.793997 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.796432 kubelet[1390]: I0212 20:25:52.794653 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hostproc" (OuterVolumeSpecName: "hostproc") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.796432 kubelet[1390]: I0212 20:25:52.794717 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.796432 kubelet[1390]: I0212 20:25:52.794749 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.796432 kubelet[1390]: W0212 20:25:52.795066 1390 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f56760cc-fb61-46d9-b17e-54cdff3ecd3c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:25:52.797450 kubelet[1390]: I0212 20:25:52.797224 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:25:52.797172 systemd[1]: var-lib-kubelet-pods-f56760cc\x2dfb61\x2d46d9\x2db17e\x2d54cdff3ecd3c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:25:52.797926 kubelet[1390]: I0212 20:25:52.797897 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:25:52.798011 kubelet[1390]: I0212 20:25:52.797989 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:52.798451 kubelet[1390]: I0212 20:25:52.798408 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-kube-api-access-smv6s" (OuterVolumeSpecName: "kube-api-access-smv6s") pod "f56760cc-fb61-46d9-b17e-54cdff3ecd3c" (UID: "f56760cc-fb61-46d9-b17e-54cdff3ecd3c"). InnerVolumeSpecName "kube-api-access-smv6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:52.894886 kubelet[1390]: I0212 20:25:52.894841 1390 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-bpf-maps\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.894886 kubelet[1390]: I0212 20:25:52.894883 1390 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-net\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.894886 kubelet[1390]: I0212 20:25:52.894898 1390 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-etc-cni-netd\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894911 1390 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-clustermesh-secrets\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894924 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-run\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894937 1390 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-smv6s\" (UniqueName: \"kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-kube-api-access-smv6s\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894948 1390 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hubble-tls\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894959 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-cgroup\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894971 1390 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-hostproc\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894982 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cilium-config-path\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895094 kubelet[1390]: I0212 20:25:52.894994 1390 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-host-proc-sys-kernel\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895269 kubelet[1390]: I0212 20:25:52.895004 1390 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-cni-path\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:52.895269 kubelet[1390]: I0212 20:25:52.895015 1390 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-lib-modules\") on node \"10.0.0.79\" 
DevicePath \"\"" Feb 12 20:25:52.895269 kubelet[1390]: I0212 20:25:52.895026 1390 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f56760cc-fb61-46d9-b17e-54cdff3ecd3c-xtables-lock\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:53.240729 kubelet[1390]: E0212 20:25:53.240676 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:53.258377 kubelet[1390]: E0212 20:25:53.258342 1390 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:25:53.293465 systemd[1]: var-lib-kubelet-pods-f56760cc\x2dfb61\x2d46d9\x2db17e\x2d54cdff3ecd3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsmv6s.mount: Deactivated successfully. Feb 12 20:25:53.293579 systemd[1]: var-lib-kubelet-pods-f56760cc\x2dfb61\x2d46d9\x2db17e\x2d54cdff3ecd3c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:25:53.329141 systemd[1]: Removed slice kubepods-burstable-podf56760cc_fb61_46d9_b17e_54cdff3ecd3c.slice. Feb 12 20:25:53.329221 systemd[1]: kubepods-burstable-podf56760cc_fb61_46d9_b17e_54cdff3ecd3c.slice: Consumed 6.053s CPU time. Feb 12 20:25:53.447225 kubelet[1390]: I0212 20:25:53.447194 1390 scope.go:115] "RemoveContainer" containerID="631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54" Feb 12 20:25:53.448535 env[1123]: time="2024-02-12T20:25:53.448480590Z" level=info msg="RemoveContainer for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\"" Feb 12 20:25:53.507194 env[1123]: time="2024-02-12T20:25:53.506845002Z" level=info msg="RemoveContainer for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" returns successfully" Feb 12 20:25:53.507309 kubelet[1390]: I0212 20:25:53.507132 1390 scope.go:115] "RemoveContainer" containerID="9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12" Feb 12 20:25:53.508214 env[1123]: time="2024-02-12T20:25:53.508165152Z" level=info msg="RemoveContainer for \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\"" Feb 12 20:25:53.577951 env[1123]: time="2024-02-12T20:25:53.577900378Z" level=info msg="RemoveContainer for \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\" returns successfully" Feb 12 20:25:53.578143 kubelet[1390]: I0212 20:25:53.578114 1390 scope.go:115] "RemoveContainer" containerID="cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5" Feb 12 20:25:53.579173 env[1123]: time="2024-02-12T20:25:53.579146889Z" level=info msg="RemoveContainer for \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\"" Feb 12 20:25:53.647576 env[1123]: time="2024-02-12T20:25:53.647513516Z" level=info msg="RemoveContainer for \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\" returns successfully" Feb 12 20:25:53.647810 kubelet[1390]: I0212 20:25:53.647785 1390 scope.go:115] "RemoveContainer" containerID="d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac" Feb 12 20:25:53.648735 env[1123]: time="2024-02-12T20:25:53.648716088Z" level=info msg="RemoveContainer for \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\"" Feb 12 20:25:53.732445 env[1123]: time="2024-02-12T20:25:53.732386562Z" level=info msg="RemoveContainer for \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\" returns 
successfully" Feb 12 20:25:53.732680 kubelet[1390]: I0212 20:25:53.732630 1390 scope.go:115] "RemoveContainer" containerID="0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5" Feb 12 20:25:53.733629 env[1123]: time="2024-02-12T20:25:53.733602010Z" level=info msg="RemoveContainer for \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\"" Feb 12 20:25:53.827114 env[1123]: time="2024-02-12T20:25:53.826972255Z" level=info msg="RemoveContainer for \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\" returns successfully" Feb 12 20:25:53.827471 kubelet[1390]: I0212 20:25:53.827428 1390 scope.go:115] "RemoveContainer" containerID="631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54" Feb 12 20:25:53.827870 env[1123]: time="2024-02-12T20:25:53.827769103Z" level=error msg="ContainerStatus for \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\": not found" Feb 12 20:25:53.827996 kubelet[1390]: E0212 20:25:53.827978 1390 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\": not found" containerID="631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54" Feb 12 20:25:53.828050 kubelet[1390]: I0212 20:25:53.828013 1390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54} err="failed to get container status \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\": rpc error: code = NotFound desc = an error occurred when try to find container \"631d23756bcba153e9bbaa288c2818150751f20988ca7532661877c56223df54\": not found" Feb 12 20:25:53.828050 kubelet[1390]: I0212 20:25:53.828025 1390 scope.go:115] "RemoveContainer" containerID="9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12" Feb 12 20:25:53.828192 env[1123]: time="2024-02-12T20:25:53.828146629Z" level=error msg="ContainerStatus for \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\": not found" Feb 12 20:25:53.828387 kubelet[1390]: E0212 20:25:53.828315 1390 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\": not found" containerID="9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12" Feb 12 20:25:53.828387 kubelet[1390]: I0212 20:25:53.828353 1390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12} err="failed to get container status \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bfada9d35ffc9c6089e114e52d268fb878d0575257103b6fd908b372e66af12\": not found" Feb 12 20:25:53.828387 kubelet[1390]: I0212 20:25:53.828371 1390 scope.go:115] "RemoveContainer" containerID="cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5" Feb 12 
20:25:53.828688 env[1123]: time="2024-02-12T20:25:53.828625211Z" level=error msg="ContainerStatus for \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\": not found" Feb 12 20:25:53.828860 kubelet[1390]: E0212 20:25:53.828837 1390 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\": not found" containerID="cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5" Feb 12 20:25:53.828921 kubelet[1390]: I0212 20:25:53.828884 1390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5} err="failed to get container status \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd0ea44e14e6ced49ff450cccfff8f6a1bdcb73e53c381649ad9a47d8e0d0ed5\": not found" Feb 12 20:25:53.828921 kubelet[1390]: I0212 20:25:53.828906 1390 scope.go:115] "RemoveContainer" containerID="d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac" Feb 12 20:25:53.829122 env[1123]: time="2024-02-12T20:25:53.829088120Z" level=error msg="ContainerStatus for \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\": not found" Feb 12 20:25:53.829249 kubelet[1390]: E0212 20:25:53.829228 1390 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\": not found" containerID="d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac" Feb 12 20:25:53.829313 kubelet[1390]: I0212 20:25:53.829266 1390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac} err="failed to get container status \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\": rpc error: code = NotFound desc = an error occurred when try to find container \"d60ba116632d20162257ca977aefc40235665f9d005bcf13f87356df6782abac\": not found" Feb 12 20:25:53.829313 kubelet[1390]: I0212 20:25:53.829291 1390 scope.go:115] "RemoveContainer" containerID="0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5" Feb 12 20:25:53.829504 env[1123]: time="2024-02-12T20:25:53.829458182Z" level=error msg="ContainerStatus for \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\": not found" Feb 12 20:25:53.829620 kubelet[1390]: E0212 20:25:53.829604 1390 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\": not found" containerID="0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5" Feb 12 20:25:53.829686 
kubelet[1390]: I0212 20:25:53.829629 1390 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5} err="failed to get container status \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c95c6ca734f5b8e550e6854fbb9274ec12e42d401f43c548870a0d1020383a5\": not found" Feb 12 20:25:54.241831 kubelet[1390]: E0212 20:25:54.241774 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:55.242342 kubelet[1390]: E0212 20:25:55.242236 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:55.326918 kubelet[1390]: I0212 20:25:55.326879 1390 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f56760cc-fb61-46d9-b17e-54cdff3ecd3c path="/var/lib/kubelet/pods/f56760cc-fb61-46d9-b17e-54cdff3ecd3c/volumes" Feb 12 20:25:55.636722 kubelet[1390]: I0212 20:25:55.636679 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:55.636897 kubelet[1390]: E0212 20:25:55.636748 1390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f56760cc-fb61-46d9-b17e-54cdff3ecd3c" containerName="apply-sysctl-overwrites" Feb 12 20:25:55.636897 kubelet[1390]: E0212 20:25:55.636762 1390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f56760cc-fb61-46d9-b17e-54cdff3ecd3c" containerName="clean-cilium-state" Feb 12 20:25:55.636897 kubelet[1390]: E0212 20:25:55.636770 1390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f56760cc-fb61-46d9-b17e-54cdff3ecd3c" containerName="cilium-agent" Feb 12 20:25:55.636897 kubelet[1390]: E0212 20:25:55.636778 1390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f56760cc-fb61-46d9-b17e-54cdff3ecd3c" containerName="mount-cgroup" Feb 12 20:25:55.636897 kubelet[1390]: E0212 20:25:55.636787 1390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f56760cc-fb61-46d9-b17e-54cdff3ecd3c" containerName="mount-bpf-fs" Feb 12 20:25:55.636897 kubelet[1390]: I0212 20:25:55.636817 1390 memory_manager.go:346] "RemoveStaleState removing state" podUID="f56760cc-fb61-46d9-b17e-54cdff3ecd3c" containerName="cilium-agent" Feb 12 20:25:55.641458 systemd[1]: Created slice kubepods-besteffort-pod65de1bb6_1d50_4e59_b6a7_42b517802005.slice. Feb 12 20:25:55.696127 kubelet[1390]: I0212 20:25:55.696093 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:55.700233 systemd[1]: Created slice kubepods-burstable-pod724a287c_ca17_4d82_b919_2cba34c960b1.slice. 
Feb 12 20:25:55.713097 kubelet[1390]: I0212 20:25:55.713072 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65de1bb6-1d50-4e59-b6a7-42b517802005-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-6wb8z\" (UID: \"65de1bb6-1d50-4e59-b6a7-42b517802005\") " pod="kube-system/cilium-operator-f59cbd8c6-6wb8z" Feb 12 20:25:55.713157 kubelet[1390]: I0212 20:25:55.713103 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnpws\" (UniqueName: \"kubernetes.io/projected/65de1bb6-1d50-4e59-b6a7-42b517802005-kube-api-access-rnpws\") pod \"cilium-operator-f59cbd8c6-6wb8z\" (UID: \"65de1bb6-1d50-4e59-b6a7-42b517802005\") " pod="kube-system/cilium-operator-f59cbd8c6-6wb8z" Feb 12 20:25:55.813484 kubelet[1390]: I0212 20:25:55.813429 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cni-path\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813484 kubelet[1390]: I0212 20:25:55.813478 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-config-path\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813484 kubelet[1390]: I0212 20:25:55.813502 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-ipsec-secrets\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813736 kubelet[1390]: I0212 20:25:55.813657 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-bpf-maps\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813736 kubelet[1390]: I0212 20:25:55.813706 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-hostproc\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813736 kubelet[1390]: I0212 20:25:55.813726 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-cgroup\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813812 kubelet[1390]: I0212 20:25:55.813782 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-clustermesh-secrets\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813812 kubelet[1390]: I0212 20:25:55.813802 1390 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-net\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813876 kubelet[1390]: I0212 20:25:55.813865 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-kernel\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813941 kubelet[1390]: I0212 20:25:55.813903 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-hubble-tls\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.813973 kubelet[1390]: I0212 20:25:55.813963 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-run\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.814002 kubelet[1390]: I0212 20:25:55.813987 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-etc-cni-netd\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.814026 kubelet[1390]: I0212 20:25:55.814013 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8z5\" (UniqueName: \"kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-kube-api-access-2r8z5\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.814122 kubelet[1390]: I0212 20:25:55.814034 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-lib-modules\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.814122 kubelet[1390]: I0212 20:25:55.814051 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-xtables-lock\") pod \"cilium-h74x4\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " pod="kube-system/cilium-h74x4" Feb 12 20:25:55.943560 kubelet[1390]: E0212 20:25:55.943472 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:55.943936 env[1123]: time="2024-02-12T20:25:55.943905857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6wb8z,Uid:65de1bb6-1d50-4e59-b6a7-42b517802005,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:56.013324 kubelet[1390]: E0212 20:25:56.013229 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:56.013666 env[1123]: time="2024-02-12T20:25:56.013632839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h74x4,Uid:724a287c-ca17-4d82-b919-2cba34c960b1,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:56.016297 env[1123]: time="2024-02-12T20:25:56.016219724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:56.016297 env[1123]: time="2024-02-12T20:25:56.016262160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:56.016297 env[1123]: time="2024-02-12T20:25:56.016276689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:56.016538 env[1123]: time="2024-02-12T20:25:56.016493688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51d1d95ac965b37a4bfd3c7bd4c747c150ee9f045a0637c3d49c7c15d4ab9990 pid=3027 runtime=io.containerd.runc.v2 Feb 12 20:25:56.025967 systemd[1]: Started cri-containerd-51d1d95ac965b37a4bfd3c7bd4c747c150ee9f045a0637c3d49c7c15d4ab9990.scope. Feb 12 20:25:56.058163 env[1123]: time="2024-02-12T20:25:56.058108512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6wb8z,Uid:65de1bb6-1d50-4e59-b6a7-42b517802005,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d1d95ac965b37a4bfd3c7bd4c747c150ee9f045a0637c3d49c7c15d4ab9990\"" Feb 12 20:25:56.058857 kubelet[1390]: E0212 20:25:56.058702 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:56.059512 env[1123]: time="2024-02-12T20:25:56.059481570Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:25:56.150010 env[1123]: time="2024-02-12T20:25:56.149915393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:56.150010 env[1123]: time="2024-02-12T20:25:56.149962208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:56.150010 env[1123]: time="2024-02-12T20:25:56.149983410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:56.150259 env[1123]: time="2024-02-12T20:25:56.150204478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9 pid=3066 runtime=io.containerd.runc.v2 Feb 12 20:25:56.160856 systemd[1]: Started cri-containerd-0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9.scope. 
Feb 12 20:25:56.179530 env[1123]: time="2024-02-12T20:25:56.179481230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h74x4,Uid:724a287c-ca17-4d82-b919-2cba34c960b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9\"" Feb 12 20:25:56.180090 kubelet[1390]: E0212 20:25:56.180072 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:56.181555 env[1123]: time="2024-02-12T20:25:56.181528562Z" level=info msg="CreateContainer within sandbox \"0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:25:56.242741 kubelet[1390]: E0212 20:25:56.242614 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:56.373006 env[1123]: time="2024-02-12T20:25:56.372918667Z" level=info msg="CreateContainer within sandbox \"0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\"" Feb 12 20:25:56.373692 env[1123]: time="2024-02-12T20:25:56.373646961Z" level=info msg="StartContainer for \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\"" Feb 12 20:25:56.386838 systemd[1]: Started cri-containerd-54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0.scope. Feb 12 20:25:56.395189 systemd[1]: cri-containerd-54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0.scope: Deactivated successfully. Feb 12 20:25:56.395519 systemd[1]: Stopped cri-containerd-54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0.scope. 
Feb 12 20:25:56.538608 env[1123]: time="2024-02-12T20:25:56.538470708Z" level=info msg="shim disconnected" id=54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0 Feb 12 20:25:56.538608 env[1123]: time="2024-02-12T20:25:56.538523946Z" level=warning msg="cleaning up after shim disconnected" id=54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0 namespace=k8s.io Feb 12 20:25:56.538608 env[1123]: time="2024-02-12T20:25:56.538533305Z" level=info msg="cleaning up dead shim" Feb 12 20:25:56.544803 env[1123]: time="2024-02-12T20:25:56.544748308Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3124 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:25:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:25:56.545178 env[1123]: time="2024-02-12T20:25:56.544979316Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Feb 12 20:25:56.546392 env[1123]: time="2024-02-12T20:25:56.546340300Z" level=error msg="Failed to pipe stderr of container \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\"" error="reading from a closed fifo" Feb 12 20:25:56.547424 env[1123]: time="2024-02-12T20:25:56.547363029Z" level=error msg="Failed to pipe stdout of container \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\"" error="reading from a closed fifo" Feb 12 20:25:56.688086 env[1123]: time="2024-02-12T20:25:56.687977336Z" level=error msg="StartContainer for \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:25:56.688274 kubelet[1390]: E0212 20:25:56.688251 1390 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0" Feb 12 20:25:56.688540 kubelet[1390]: E0212 20:25:56.688392 1390 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:25:56.688540 kubelet[1390]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:25:56.688540 kubelet[1390]: rm /hostbin/cilium-mount Feb 12 20:25:56.688540 kubelet[1390]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2r8z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-h74x4_kube-system(724a287c-ca17-4d82-b919-2cba34c960b1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:25:56.688693 kubelet[1390]: E0212 20:25:56.688428 1390 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h74x4" podUID=724a287c-ca17-4d82-b919-2cba34c960b1 Feb 12 20:25:57.243574 kubelet[1390]: E0212 20:25:57.243529 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:57.394331 kubelet[1390]: I0212 20:25:57.394307 1390 setters.go:548] "Node became not ready" node="10.0.0.79" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:25:57.394247241 +0000 UTC m=+74.522221406 LastTransitionTime:2024-02-12 20:25:57.394247241 +0000 UTC m=+74.522221406 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:25:57.461477 env[1123]: time="2024-02-12T20:25:57.461438455Z" level=info msg="StopPodSandbox for \"0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9\"" Feb 12 20:25:57.461836 env[1123]: time="2024-02-12T20:25:57.461483116Z" level=info msg="Container to stop \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:57.462936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9-shm.mount: Deactivated successfully. 
Feb 12 20:25:57.468591 systemd[1]: cri-containerd-0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9.scope: Deactivated successfully. Feb 12 20:25:57.481813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9-rootfs.mount: Deactivated successfully. Feb 12 20:25:57.595062 env[1123]: time="2024-02-12T20:25:57.594979514Z" level=info msg="shim disconnected" id=0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9 Feb 12 20:25:57.595062 env[1123]: time="2024-02-12T20:25:57.595030036Z" level=warning msg="cleaning up after shim disconnected" id=0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9 namespace=k8s.io Feb 12 20:25:57.595062 env[1123]: time="2024-02-12T20:25:57.595042741Z" level=info msg="cleaning up dead shim" Feb 12 20:25:57.600876 env[1123]: time="2024-02-12T20:25:57.600835369Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3154 runtime=io.containerd.runc.v2\n" Feb 12 20:25:57.601182 env[1123]: time="2024-02-12T20:25:57.601150326Z" level=info msg="TearDown network for sandbox \"0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9\" successfully" Feb 12 20:25:57.601256 env[1123]: time="2024-02-12T20:25:57.601180077Z" level=info msg="StopPodSandbox for \"0e97962f9bb4771c246c644d3f3c83dc1f22fb7bb97c0a1365c8298a1c47c2e9\" returns successfully" Feb 12 20:25:57.724880 kubelet[1390]: I0212 20:25:57.724840 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-net\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.724880 kubelet[1390]: I0212 20:25:57.724883 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-xtables-lock\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725127 kubelet[1390]: I0212 20:25:57.724909 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r8z5\" (UniqueName: \"kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-kube-api-access-2r8z5\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725127 kubelet[1390]: I0212 20:25:57.724926 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-run\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725127 kubelet[1390]: I0212 20:25:57.724922 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725127 kubelet[1390]: I0212 20:25:57.724943 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-hostproc\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725127 kubelet[1390]: I0212 20:25:57.724963 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725369 kubelet[1390]: I0212 20:25:57.724978 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-hostproc" (OuterVolumeSpecName: "hostproc") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725369 kubelet[1390]: I0212 20:25:57.724981 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725369 kubelet[1390]: I0212 20:25:57.724999 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-cgroup\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725369 kubelet[1390]: I0212 20:25:57.725018 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-clustermesh-secrets\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725369 kubelet[1390]: I0212 20:25:57.725036 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-hubble-tls\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725369 kubelet[1390]: I0212 20:25:57.725057 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-etc-cni-netd\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725579 kubelet[1390]: I0212 20:25:57.725072 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-bpf-maps\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725579 kubelet[1390]: I0212 20:25:57.725086 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cni-path\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725579 kubelet[1390]: I0212 20:25:57.725105 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-ipsec-secrets\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725579 kubelet[1390]: I0212 20:25:57.725122 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-config-path\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725579 kubelet[1390]: I0212 20:25:57.725137 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-kernel\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725579 kubelet[1390]: I0212 20:25:57.725153 1390 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-lib-modules\") pod \"724a287c-ca17-4d82-b919-2cba34c960b1\" (UID: \"724a287c-ca17-4d82-b919-2cba34c960b1\") " Feb 12 20:25:57.725774 kubelet[1390]: I0212 20:25:57.725188 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-run\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.725774 kubelet[1390]: I0212 20:25:57.725198 1390 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-hostproc\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.725774 kubelet[1390]: I0212 20:25:57.725208 1390 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-net\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.725774 kubelet[1390]: I0212 20:25:57.725217 1390 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-xtables-lock\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.725774 kubelet[1390]: I0212 20:25:57.725230 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725774 kubelet[1390]: I0212 20:25:57.725265 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725963 kubelet[1390]: I0212 20:25:57.725310 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cni-path" (OuterVolumeSpecName: "cni-path") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725963 kubelet[1390]: I0212 20:25:57.725332 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725963 kubelet[1390]: I0212 20:25:57.725352 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.725963 kubelet[1390]: W0212 20:25:57.725463 1390 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/724a287c-ca17-4d82-b919-2cba34c960b1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:25:57.726097 kubelet[1390]: I0212 20:25:57.725955 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:57.727130 kubelet[1390]: I0212 20:25:57.727111 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-kube-api-access-2r8z5" (OuterVolumeSpecName: "kube-api-access-2r8z5") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "kube-api-access-2r8z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:57.727222 kubelet[1390]: I0212 20:25:57.727128 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:57.727674 kubelet[1390]: I0212 20:25:57.727646 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:25:57.728518 kubelet[1390]: I0212 20:25:57.728488 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:25:57.728846 kubelet[1390]: I0212 20:25:57.728827 1390 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "724a287c-ca17-4d82-b919-2cba34c960b1" (UID: "724a287c-ca17-4d82-b919-2cba34c960b1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:25:57.729184 systemd[1]: var-lib-kubelet-pods-724a287c\x2dca17\x2d4d82\x2db919\x2d2cba34c960b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2r8z5.mount: Deactivated successfully. Feb 12 20:25:57.729306 systemd[1]: var-lib-kubelet-pods-724a287c\x2dca17\x2d4d82\x2db919\x2d2cba34c960b1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:25:57.731209 systemd[1]: var-lib-kubelet-pods-724a287c\x2dca17\x2d4d82\x2db919\x2d2cba34c960b1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:25:57.731276 systemd[1]: var-lib-kubelet-pods-724a287c\x2dca17\x2d4d82\x2db919\x2d2cba34c960b1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:25:57.826305 kubelet[1390]: I0212 20:25:57.826227 1390 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-bpf-maps\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826305 kubelet[1390]: I0212 20:25:57.826266 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-cgroup\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826305 kubelet[1390]: I0212 20:25:57.826307 1390 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-clustermesh-secrets\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826326 1390 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-hubble-tls\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826337 1390 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-etc-cni-netd\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826346 1390 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-host-proc-sys-kernel\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826354 1390 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-cni-path\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826364 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-ipsec-secrets\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826386 1390 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/724a287c-ca17-4d82-b919-2cba34c960b1-cilium-config-path\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826396 1390 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/724a287c-ca17-4d82-b919-2cba34c960b1-lib-modules\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:57.826528 kubelet[1390]: I0212 20:25:57.826404 1390 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2r8z5\" (UniqueName: \"kubernetes.io/projected/724a287c-ca17-4d82-b919-2cba34c960b1-kube-api-access-2r8z5\") on node \"10.0.0.79\" DevicePath \"\"" Feb 12 20:25:58.243955 kubelet[1390]: E0212 20:25:58.243915 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:58.259514 kubelet[1390]: E0212 20:25:58.259494 1390 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:25:58.464086 kubelet[1390]: I0212 20:25:58.464053 1390 scope.go:115] "RemoveContainer" containerID="54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0" Feb 12 20:25:58.465212 env[1123]: time="2024-02-12T20:25:58.465177089Z" level=info msg="RemoveContainer for \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\"" Feb 12 20:25:58.467799 systemd[1]: Removed slice kubepods-burstable-pod724a287c_ca17_4d82_b919_2cba34c960b1.slice. Feb 12 20:25:58.627141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265510753.mount: Deactivated successfully. Feb 12 20:25:58.655856 kubelet[1390]: I0212 20:25:58.655817 1390 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:58.655856 kubelet[1390]: E0212 20:25:58.655862 1390 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="724a287c-ca17-4d82-b919-2cba34c960b1" containerName="mount-cgroup" Feb 12 20:25:58.656060 kubelet[1390]: I0212 20:25:58.655882 1390 memory_manager.go:346] "RemoveStaleState removing state" podUID="724a287c-ca17-4d82-b919-2cba34c960b1" containerName="mount-cgroup" Feb 12 20:25:58.660137 systemd[1]: Created slice kubepods-burstable-podf7fda090_634b_478a_9ccc_5ff254cdb010.slice. 
Feb 12 20:25:58.731216 kubelet[1390]: I0212 20:25:58.731178 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-cilium-run\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731416 kubelet[1390]: I0212 20:25:58.731334 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7fda090-634b-478a-9ccc-5ff254cdb010-cilium-config-path\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731416 kubelet[1390]: I0212 20:25:58.731404 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-etc-cni-netd\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731477 kubelet[1390]: I0212 20:25:58.731432 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7fda090-634b-478a-9ccc-5ff254cdb010-clustermesh-secrets\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731503 kubelet[1390]: I0212 20:25:58.731483 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-hostproc\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731529 kubelet[1390]: I0212 20:25:58.731510 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-cilium-cgroup\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731566 kubelet[1390]: I0212 20:25:58.731552 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-cni-path\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731595 kubelet[1390]: I0212 20:25:58.731583 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zdnl\" (UniqueName: \"kubernetes.io/projected/f7fda090-634b-478a-9ccc-5ff254cdb010-kube-api-access-5zdnl\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731624 kubelet[1390]: I0212 20:25:58.731606 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-bpf-maps\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731648 kubelet[1390]: I0212 20:25:58.731633 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-host-proc-sys-net\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731730 kubelet[1390]: I0212 20:25:58.731692 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-host-proc-sys-kernel\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731730 kubelet[1390]: I0212 20:25:58.731747 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7fda090-634b-478a-9ccc-5ff254cdb010-hubble-tls\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731960 kubelet[1390]: I0212 20:25:58.731790 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-lib-modules\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731960 kubelet[1390]: I0212 20:25:58.731816 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7fda090-634b-478a-9ccc-5ff254cdb010-xtables-lock\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.731960 kubelet[1390]: I0212 20:25:58.731838 1390 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7fda090-634b-478a-9ccc-5ff254cdb010-cilium-ipsec-secrets\") pod \"cilium-dpzh8\" (UID: \"f7fda090-634b-478a-9ccc-5ff254cdb010\") " pod="kube-system/cilium-dpzh8" Feb 12 20:25:58.738355 env[1123]: time="2024-02-12T20:25:58.738300080Z" level=info msg="RemoveContainer for \"54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0\" returns successfully" Feb 12 20:25:58.971998 kubelet[1390]: E0212 20:25:58.971895 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:58.972425 env[1123]: time="2024-02-12T20:25:58.972348518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dpzh8,Uid:f7fda090-634b-478a-9ccc-5ff254cdb010,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:59.244523 kubelet[1390]: E0212 20:25:59.244397 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:59.327688 kubelet[1390]: I0212 20:25:59.327655 1390 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=724a287c-ca17-4d82-b919-2cba34c960b1 path="/var/lib/kubelet/pods/724a287c-ca17-4d82-b919-2cba34c960b1/volumes" Feb 12 20:25:59.381651 env[1123]: time="2024-02-12T20:25:59.381574218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:59.381651 env[1123]: time="2024-02-12T20:25:59.381607495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:59.381651 env[1123]: time="2024-02-12T20:25:59.381616814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:59.381893 env[1123]: time="2024-02-12T20:25:59.381727056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08 pid=3181 runtime=io.containerd.runc.v2 Feb 12 20:25:59.391335 systemd[1]: Started cri-containerd-efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08.scope. Feb 12 20:25:59.407873 env[1123]: time="2024-02-12T20:25:59.407829809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dpzh8,Uid:f7fda090-634b-478a-9ccc-5ff254cdb010,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\"" Feb 12 20:25:59.408891 kubelet[1390]: E0212 20:25:59.408875 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:59.410646 env[1123]: time="2024-02-12T20:25:59.410620630Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:25:59.558165 env[1123]: time="2024-02-12T20:25:59.557489108Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63\"" Feb 12 20:25:59.558579 env[1123]: time="2024-02-12T20:25:59.558219562Z" level=info msg="StartContainer for \"e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63\"" Feb 12 20:25:59.571588 systemd[1]: Started cri-containerd-e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63.scope. Feb 12 20:25:59.601930 systemd[1]: cri-containerd-e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63.scope: Deactivated successfully. Feb 12 20:25:59.611533 env[1123]: time="2024-02-12T20:25:59.611482010Z" level=info msg="StartContainer for \"e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63\" returns successfully" Feb 12 20:25:59.634365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63-rootfs.mount: Deactivated successfully. 
Feb 12 20:25:59.653869 kubelet[1390]: W0212 20:25:59.653823 1390 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod724a287c_ca17_4d82_b919_2cba34c960b1.slice/cri-containerd-54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0.scope WatchSource:0}: container "54533980ee10387bd6c88666208ed510cda18fd85071a40cad13ca8e3b9e0ea0" in namespace "k8s.io": not found Feb 12 20:25:59.991390 env[1123]: time="2024-02-12T20:25:59.991339166Z" level=info msg="shim disconnected" id=e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63 Feb 12 20:25:59.991621 env[1123]: time="2024-02-12T20:25:59.991601194Z" level=warning msg="cleaning up after shim disconnected" id=e0aefb5e89b22081d2d78d31789bfe1895b433cdbd9b786968b1c491a7d61b63 namespace=k8s.io Feb 12 20:25:59.991714 env[1123]: time="2024-02-12T20:25:59.991696477Z" level=info msg="cleaning up dead shim" Feb 12 20:25:59.997380 env[1123]: time="2024-02-12T20:25:59.997359752Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3264 runtime=io.containerd.runc.v2\n" Feb 12 20:26:00.245465 kubelet[1390]: E0212 20:26:00.245257 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:00.460629 env[1123]: time="2024-02-12T20:26:00.460561667Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:00.470296 kubelet[1390]: E0212 20:26:00.470259 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:00.471925 env[1123]: time="2024-02-12T20:26:00.471867097Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:26:00.560426 env[1123]: time="2024-02-12T20:26:00.560081635Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:00.657205 env[1123]: time="2024-02-12T20:26:00.657128126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:00.657726 env[1123]: time="2024-02-12T20:26:00.657697073Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:26:00.659313 env[1123]: time="2024-02-12T20:26:00.659270615Z" level=info msg="CreateContainer within sandbox \"51d1d95ac965b37a4bfd3c7bd4c747c150ee9f045a0637c3d49c7c15d4ab9990\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:26:00.673912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890658746.mount: Deactivated successfully. 
Feb 12 20:26:00.851339 env[1123]: time="2024-02-12T20:26:00.851251693Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e\"" Feb 12 20:26:00.865412 env[1123]: time="2024-02-12T20:26:00.851741850Z" level=info msg="StartContainer for \"ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e\"" Feb 12 20:26:00.878351 systemd[1]: Started cri-containerd-ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e.scope. Feb 12 20:26:01.014864 systemd[1]: cri-containerd-ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e.scope: Deactivated successfully. Feb 12 20:26:01.114452 env[1123]: time="2024-02-12T20:26:01.114316380Z" level=info msg="StartContainer for \"ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e\" returns successfully" Feb 12 20:26:01.245584 kubelet[1390]: E0212 20:26:01.245540 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:01.253049 env[1123]: time="2024-02-12T20:26:01.253010011Z" level=info msg="shim disconnected" id=ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e Feb 12 20:26:01.253123 env[1123]: time="2024-02-12T20:26:01.253056234Z" level=warning msg="cleaning up after shim disconnected" id=ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e namespace=k8s.io Feb 12 20:26:01.253123 env[1123]: time="2024-02-12T20:26:01.253067247Z" level=info msg="cleaning up dead shim" Feb 12 20:26:01.258965 env[1123]: time="2024-02-12T20:26:01.258930958Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3331 runtime=io.containerd.runc.v2\n" Feb 12 20:26:01.347880 env[1123]: time="2024-02-12T20:26:01.347835859Z" level=info msg="CreateContainer within sandbox \"51d1d95ac965b37a4bfd3c7bd4c747c150ee9f045a0637c3d49c7c15d4ab9990\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"63f0a0d8a5b2029f670e69952d0ae00410d3ca549f1f50375de52b7f6b6a0c4c\"" Feb 12 20:26:01.348228 env[1123]: time="2024-02-12T20:26:01.348208860Z" level=info msg="StartContainer for \"63f0a0d8a5b2029f670e69952d0ae00410d3ca549f1f50375de52b7f6b6a0c4c\"" Feb 12 20:26:01.360457 systemd[1]: Started cri-containerd-63f0a0d8a5b2029f670e69952d0ae00410d3ca549f1f50375de52b7f6b6a0c4c.scope. 
Feb 12 20:26:01.425869 env[1123]: time="2024-02-12T20:26:01.425756786Z" level=info msg="StartContainer for \"63f0a0d8a5b2029f670e69952d0ae00410d3ca549f1f50375de52b7f6b6a0c4c\" returns successfully"
Feb 12 20:26:01.472511 kubelet[1390]: E0212 20:26:01.472486 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:01.473971 kubelet[1390]: E0212 20:26:01.473958 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:01.475574 env[1123]: time="2024-02-12T20:26:01.475534162Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:26:01.497423 kubelet[1390]: I0212 20:26:01.497401 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-6wb8z" podStartSLOduration=-9.223372030357405e+09 pod.CreationTimestamp="2024-02-12 20:25:55 +0000 UTC" firstStartedPulling="2024-02-12 20:25:56.059227896 +0000 UTC m=+73.187202061" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:01.497119592 +0000 UTC m=+78.625093747" watchObservedRunningTime="2024-02-12 20:26:01.497371901 +0000 UTC m=+78.625346066"
Feb 12 20:26:01.672820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebf848847547a48d2c9d6404585b13c92d5865e42d30a3cd3f946faa189bd74e-rootfs.mount: Deactivated successfully.
Feb 12 20:26:01.735908 env[1123]: time="2024-02-12T20:26:01.735733121Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe\""
Feb 12 20:26:01.736674 env[1123]: time="2024-02-12T20:26:01.736602603Z" level=info msg="StartContainer for \"9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe\""
Feb 12 20:26:01.754169 systemd[1]: Started cri-containerd-9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe.scope.
Feb 12 20:26:01.796057 systemd[1]: cri-containerd-9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe.scope: Deactivated successfully.
Feb 12 20:26:01.944245 env[1123]: time="2024-02-12T20:26:01.944163533Z" level=info msg="StartContainer for \"9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe\" returns successfully"
Feb 12 20:26:01.958595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe-rootfs.mount: Deactivated successfully.
Feb 12 20:26:02.074815 env[1123]: time="2024-02-12T20:26:02.074683143Z" level=info msg="shim disconnected" id=9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe
Feb 12 20:26:02.074815 env[1123]: time="2024-02-12T20:26:02.074748044Z" level=warning msg="cleaning up after shim disconnected" id=9dadca3ce7afa9688585321685e7da533777d1e7e819c666747797c09e595abe namespace=k8s.io
Feb 12 20:26:02.074815 env[1123]: time="2024-02-12T20:26:02.074757082Z" level=info msg="cleaning up dead shim"
Feb 12 20:26:02.080415 env[1123]: time="2024-02-12T20:26:02.080364359Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3426 runtime=io.containerd.runc.v2\n"
Feb 12 20:26:02.245998 kubelet[1390]: E0212 20:26:02.245946 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:02.477707 kubelet[1390]: E0212 20:26:02.477673 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:02.477882 kubelet[1390]: E0212 20:26:02.477725 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:02.479151 env[1123]: time="2024-02-12T20:26:02.479116797Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:26:02.958925 env[1123]: time="2024-02-12T20:26:02.958865197Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38\""
Feb 12 20:26:02.959426 env[1123]: time="2024-02-12T20:26:02.959396938Z" level=info msg="StartContainer for \"650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38\""
Feb 12 20:26:02.973543 systemd[1]: Started cri-containerd-650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38.scope.
Feb 12 20:26:02.991842 systemd[1]: cri-containerd-650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38.scope: Deactivated successfully.
Feb 12 20:26:03.114661 env[1123]: time="2024-02-12T20:26:03.114583059Z" level=info msg="StartContainer for \"650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38\" returns successfully"
Feb 12 20:26:03.126222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38-rootfs.mount: Deactivated successfully.
Feb 12 20:26:03.195931 kubelet[1390]: E0212 20:26:03.195853 1390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:03.246414 kubelet[1390]: E0212 20:26:03.246262 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:03.259846 kubelet[1390]: E0212 20:26:03.259822 1390 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:26:03.274151 env[1123]: time="2024-02-12T20:26:03.274104413Z" level=info msg="shim disconnected" id=650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38
Feb 12 20:26:03.274151 env[1123]: time="2024-02-12T20:26:03.274145566Z" level=warning msg="cleaning up after shim disconnected" id=650e77334b0239ccb36fe60a67f0f821bcb4b7294261e64f97121c6153452c38 namespace=k8s.io
Feb 12 20:26:03.274151 env[1123]: time="2024-02-12T20:26:03.274153742Z" level=info msg="cleaning up dead shim"
Feb 12 20:26:03.280222 env[1123]: time="2024-02-12T20:26:03.280175566Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3480 runtime=io.containerd.runc.v2\n"
Feb 12 20:26:03.481338 kubelet[1390]: E0212 20:26:03.481306 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:03.483458 env[1123]: time="2024-02-12T20:26:03.483410589Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:26:03.724561 env[1123]: time="2024-02-12T20:26:03.724458543Z" level=info msg="CreateContainer within sandbox \"efd085380d7c947a9da1a9967067ae3ff818a965da5a8c96eeb380083556ad08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95\""
Feb 12 20:26:03.725269 env[1123]: time="2024-02-12T20:26:03.725228472Z" level=info msg="StartContainer for \"6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95\""
Feb 12 20:26:03.738526 systemd[1]: Started cri-containerd-6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95.scope.
Feb 12 20:26:03.808118 env[1123]: time="2024-02-12T20:26:03.808043082Z" level=info msg="StartContainer for \"6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95\" returns successfully"
Feb 12 20:26:03.822651 systemd[1]: run-containerd-runc-k8s.io-6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95-runc.6AL02t.mount: Deactivated successfully.
Feb 12 20:26:04.022315 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:26:04.246487 kubelet[1390]: E0212 20:26:04.246433 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:04.485888 kubelet[1390]: E0212 20:26:04.485842 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:04.506160 kubelet[1390]: I0212 20:26:04.503888 1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dpzh8" podStartSLOduration=6.503806303 pod.CreationTimestamp="2024-02-12 20:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:04.503090644 +0000 UTC m=+81.631064809" watchObservedRunningTime="2024-02-12 20:26:04.503806303 +0000 UTC m=+81.631780468"
Feb 12 20:26:05.247149 kubelet[1390]: E0212 20:26:05.247092 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:05.487916 kubelet[1390]: E0212 20:26:05.487879 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:06.248179 kubelet[1390]: E0212 20:26:06.248142 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:06.488924 kubelet[1390]: E0212 20:26:06.488897 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:06.503681 systemd-networkd[1027]: lxc_health: Link UP
Feb 12 20:26:06.516924 systemd-networkd[1027]: lxc_health: Gained carrier
Feb 12 20:26:06.517405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:26:07.248328 kubelet[1390]: E0212 20:26:07.248262 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:07.490216 kubelet[1390]: E0212 20:26:07.490193 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:08.248455 kubelet[1390]: E0212 20:26:08.248399 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:08.343046 systemd-networkd[1027]: lxc_health: Gained IPv6LL
Feb 12 20:26:08.492015 kubelet[1390]: E0212 20:26:08.491986 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:08.510126 systemd[1]: run-containerd-runc-k8s.io-6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95-runc.Da2YW7.mount: Deactivated successfully.
Feb 12 20:26:09.248760 kubelet[1390]: E0212 20:26:09.248718 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:09.493174 kubelet[1390]: E0212 20:26:09.493144 1390 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:10.249393 kubelet[1390]: E0212 20:26:10.249321 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:11.249495 kubelet[1390]: E0212 20:26:11.249451 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:12.250457 kubelet[1390]: E0212 20:26:12.250396 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:12.675916 systemd[1]: run-containerd-runc-k8s.io-6a15e044a68bedf2a02a0ec37d4af579be2837502e7231292e8a41799ebc3a95-runc.4ZQfXM.mount: Deactivated successfully.
Feb 12 20:26:13.250942 kubelet[1390]: E0212 20:26:13.250893 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:14.251596 kubelet[1390]: E0212 20:26:14.251548 1390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"