Feb 8 23:23:25.787198 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:23:25.787216 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:23:25.787224 kernel: BIOS-provided physical RAM map:
Feb 8 23:23:25.787230 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 8 23:23:25.787235 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 8 23:23:25.787240 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 8 23:23:25.787254 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 8 23:23:25.787259 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 8 23:23:25.787266 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 8 23:23:25.787271 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 8 23:23:25.787277 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 8 23:23:25.787282 kernel: NX (Execute Disable) protection: active
Feb 8 23:23:25.787288 kernel: SMBIOS 2.8 present.
Feb 8 23:23:25.787293 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 8 23:23:25.787301 kernel: Hypervisor detected: KVM
Feb 8 23:23:25.787307 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 8 23:23:25.787313 kernel: kvm-clock: cpu 0, msr 6bfaa001, primary cpu clock
Feb 8 23:23:25.787319 kernel: kvm-clock: using sched offset of 2178817905 cycles
Feb 8 23:23:25.787325 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 8 23:23:25.787332 kernel: tsc: Detected 2794.750 MHz processor
Feb 8 23:23:25.787338 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:23:25.787344 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:23:25.787350 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 8 23:23:25.787357 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:23:25.787364 kernel: Using GB pages for direct mapping
Feb 8 23:23:25.787370 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:23:25.787376 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 8 23:23:25.787382 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:23:25.787388 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:23:25.787394 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:23:25.787400 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 8 23:23:25.787405 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:23:25.787412 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:23:25.787418 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:23:25.787424 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 8 23:23:25.787430 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 8 23:23:25.787436 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 8 23:23:25.787442 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 8 23:23:25.787448 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 8 23:23:25.787454 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 8 23:23:25.787463 kernel: No NUMA configuration found
Feb 8 23:23:25.787470 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 8 23:23:25.787476 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 8 23:23:25.787483 kernel: Zone ranges:
Feb 8 23:23:25.787489 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:23:25.787496 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 8 23:23:25.787503 kernel: Normal empty
Feb 8 23:23:25.787510 kernel: Movable zone start for each node
Feb 8 23:23:25.787516 kernel: Early memory node ranges
Feb 8 23:23:25.787522 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 8 23:23:25.787529 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 8 23:23:25.787535 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 8 23:23:25.787541 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:23:25.787548 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 8 23:23:25.787561 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 8 23:23:25.787569 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 8 23:23:25.787576 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 8 23:23:25.787582 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:23:25.787589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 8 23:23:25.787595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 8 23:23:25.787602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:23:25.787608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 8 23:23:25.787614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 8 23:23:25.787621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:23:25.787628 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 8 23:23:25.787635 kernel: TSC deadline timer available
Feb 8 23:23:25.787641 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 8 23:23:25.787647 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 8 23:23:25.787654 kernel: kvm-guest: setup PV sched yield
Feb 8 23:23:25.787660 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 8 23:23:25.787666 kernel: Booting paravirtualized kernel on KVM
Feb 8 23:23:25.787673 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:23:25.787679 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 8 23:23:25.787687 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 8 23:23:25.787693 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 8 23:23:25.787700 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 8 23:23:25.787706 kernel: kvm-guest: setup async PF for cpu 0
Feb 8 23:23:25.787712 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 8 23:23:25.787719 kernel: kvm-guest: PV spinlocks enabled
Feb 8 23:23:25.787725 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:23:25.787732 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 8 23:23:25.787738 kernel: Policy zone: DMA32
Feb 8 23:23:25.787745 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:23:25.787754 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:23:25.787760 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:23:25.787767 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 8 23:23:25.787773 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:23:25.787780 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 8 23:23:25.787787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 8 23:23:25.787793 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:23:25.787800 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:23:25.787807 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:23:25.787814 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:23:25.787820 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 8 23:23:25.787827 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:23:25.787833 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:23:25.787840 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:23:25.787846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 8 23:23:25.787853 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 8 23:23:25.787859 kernel: random: crng init done
Feb 8 23:23:25.787867 kernel: Console: colour VGA+ 80x25
Feb 8 23:23:25.787873 kernel: printk: console [ttyS0] enabled
Feb 8 23:23:25.787880 kernel: ACPI: Core revision 20210730
Feb 8 23:23:25.787886 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 8 23:23:25.787893 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:23:25.787899 kernel: x2apic enabled
Feb 8 23:23:25.787905 kernel: Switched APIC routing to physical x2apic.
Feb 8 23:23:25.787912 kernel: kvm-guest: setup PV IPIs
Feb 8 23:23:25.787918 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 8 23:23:25.787926 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 8 23:23:25.787932 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 8 23:23:25.787938 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 8 23:23:25.787945 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 8 23:23:25.787951 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 8 23:23:25.787958 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:23:25.787964 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:23:25.787971 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:23:25.787977 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:23:25.787989 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 8 23:23:25.787996 kernel: RETBleed: Mitigation: untrained return thunk
Feb 8 23:23:25.788003 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 8 23:23:25.788011 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 8 23:23:25.788018 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:23:25.788024 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:23:25.788031 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:23:25.788038 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:23:25.788045 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 8 23:23:25.788053 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:23:25.788060 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:23:25.788066 kernel: LSM: Security Framework initializing
Feb 8 23:23:25.788073 kernel: SELinux: Initializing.
Feb 8 23:23:25.788080 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 8 23:23:25.788087 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 8 23:23:25.788094 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 8 23:23:25.788101 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 8 23:23:25.788108 kernel: ... version: 0
Feb 8 23:23:25.788115 kernel: ... bit width: 48
Feb 8 23:23:25.788121 kernel: ... generic registers: 6
Feb 8 23:23:25.788128 kernel: ... value mask: 0000ffffffffffff
Feb 8 23:23:25.788135 kernel: ... max period: 00007fffffffffff
Feb 8 23:23:25.788141 kernel: ... fixed-purpose events: 0
Feb 8 23:23:25.788148 kernel: ... event mask: 000000000000003f
Feb 8 23:23:25.788155 kernel: signal: max sigframe size: 1776
Feb 8 23:23:25.788163 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:23:25.788170 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:23:25.788176 kernel: x86: Booting SMP configuration:
Feb 8 23:23:25.788183 kernel: .... node #0, CPUs: #1
Feb 8 23:23:25.788190 kernel: kvm-clock: cpu 1, msr 6bfaa041, secondary cpu clock
Feb 8 23:23:25.788197 kernel: kvm-guest: setup async PF for cpu 1
Feb 8 23:23:25.788203 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 8 23:23:25.788210 kernel: #2
Feb 8 23:23:25.788217 kernel: kvm-clock: cpu 2, msr 6bfaa081, secondary cpu clock
Feb 8 23:23:25.788224 kernel: kvm-guest: setup async PF for cpu 2
Feb 8 23:23:25.788232 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 8 23:23:25.788238 kernel: #3
Feb 8 23:23:25.788249 kernel: kvm-clock: cpu 3, msr 6bfaa0c1, secondary cpu clock
Feb 8 23:23:25.788256 kernel: kvm-guest: setup async PF for cpu 3
Feb 8 23:23:25.788262 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 8 23:23:25.788269 kernel: smp: Brought up 1 node, 4 CPUs
Feb 8 23:23:25.788276 kernel: smpboot: Max logical packages: 1
Feb 8 23:23:25.788283 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 8 23:23:25.788289 kernel: devtmpfs: initialized
Feb 8 23:23:25.788298 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:23:25.788305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:23:25.788312 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 8 23:23:25.788318 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:23:25.788325 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:23:25.788332 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:23:25.788339 kernel: audit: type=2000 audit(1707434606.113:1): state=initialized audit_enabled=0 res=1
Feb 8 23:23:25.788345 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:23:25.788352 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:23:25.788360 kernel: cpuidle: using governor menu
Feb 8 23:23:25.788367 kernel: ACPI: bus type PCI registered
Feb 8 23:23:25.788374 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:23:25.788380 kernel: dca service started, version 1.12.1
Feb 8 23:23:25.788387 kernel: PCI: Using configuration type 1 for base access
Feb 8 23:23:25.788394 kernel: PCI: Using configuration type 1 for extended access
Feb 8 23:23:25.788401 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:23:25.788407 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:23:25.788414 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:23:25.788422 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:23:25.788429 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:23:25.788436 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:23:25.788442 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:23:25.788449 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:23:25.788456 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:23:25.788463 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:23:25.788469 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:23:25.788476 kernel: ACPI: Interpreter enabled
Feb 8 23:23:25.788484 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 8 23:23:25.788491 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:23:25.788498 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:23:25.788504 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 8 23:23:25.788511 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 8 23:23:25.789186 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 8 23:23:25.789202 kernel: acpiphp: Slot [3] registered
Feb 8 23:23:25.789209 kernel: acpiphp: Slot [4] registered
Feb 8 23:23:25.789219 kernel: acpiphp: Slot [5] registered
Feb 8 23:23:25.789226 kernel: acpiphp: Slot [6] registered
Feb 8 23:23:25.789232 kernel: acpiphp: Slot [7] registered
Feb 8 23:23:25.789239 kernel: acpiphp: Slot [8] registered
Feb 8 23:23:25.789252 kernel: acpiphp: Slot [9] registered
Feb 8 23:23:25.789260 kernel: acpiphp: Slot [10] registered
Feb 8 23:23:25.789266 kernel: acpiphp: Slot [11] registered
Feb 8 23:23:25.789273 kernel: acpiphp: Slot [12] registered
Feb 8 23:23:25.789280 kernel: acpiphp: Slot [13] registered
Feb 8 23:23:25.789286 kernel: acpiphp: Slot [14] registered
Feb 8 23:23:25.789294 kernel: acpiphp: Slot [15] registered
Feb 8 23:23:25.789301 kernel: acpiphp: Slot [16] registered
Feb 8 23:23:25.789308 kernel: acpiphp: Slot [17] registered
Feb 8 23:23:25.789314 kernel: acpiphp: Slot [18] registered
Feb 8 23:23:25.789321 kernel: acpiphp: Slot [19] registered
Feb 8 23:23:25.789328 kernel: acpiphp: Slot [20] registered
Feb 8 23:23:25.789334 kernel: acpiphp: Slot [21] registered
Feb 8 23:23:25.789341 kernel: acpiphp: Slot [22] registered
Feb 8 23:23:25.789347 kernel: acpiphp: Slot [23] registered
Feb 8 23:23:25.789355 kernel: acpiphp: Slot [24] registered
Feb 8 23:23:25.789362 kernel: acpiphp: Slot [25] registered
Feb 8 23:23:25.789368 kernel: acpiphp: Slot [26] registered
Feb 8 23:23:25.789375 kernel: acpiphp: Slot [27] registered
Feb 8 23:23:25.789381 kernel: acpiphp: Slot [28] registered
Feb 8 23:23:25.789388 kernel: acpiphp: Slot [29] registered
Feb 8 23:23:25.789394 kernel: acpiphp: Slot [30] registered
Feb 8 23:23:25.789401 kernel: acpiphp: Slot [31] registered
Feb 8 23:23:25.789408 kernel: PCI host bridge to bus 0000:00
Feb 8 23:23:25.789489 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 8 23:23:25.789565 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 8 23:23:25.789628 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 8 23:23:25.789686 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 8 23:23:25.789745 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 8 23:23:25.789803 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 8 23:23:25.789882 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 8 23:23:25.789975 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 8 23:23:25.790067 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 8 23:23:25.790140 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 8 23:23:25.790281 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 8 23:23:25.790353 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 8 23:23:25.790420 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 8 23:23:25.790502 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 8 23:23:25.790604 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 8 23:23:25.790674 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 8 23:23:25.790740 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 8 23:23:25.790811 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 8 23:23:25.790878 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 8 23:23:25.790944 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 8 23:23:25.791036 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 8 23:23:25.791111 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 8 23:23:25.791193 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 8 23:23:25.791271 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 8 23:23:25.791343 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 8 23:23:25.791411 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 8 23:23:25.791485 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 8 23:23:25.791565 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 8 23:23:25.791636 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 8 23:23:25.791702 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 8 23:23:25.791776 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 8 23:23:25.791843 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 8 23:23:25.791910 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 8 23:23:25.791978 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 8 23:23:25.792052 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 8 23:23:25.792061 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 8 23:23:25.792068 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 8 23:23:25.792075 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 8 23:23:25.792082 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 8 23:23:25.792089 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 8 23:23:25.792095 kernel: iommu: Default domain type: Translated
Feb 8 23:23:25.792102 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:23:25.792168 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 8 23:23:25.792239 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 8 23:23:25.792313 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 8 23:23:25.792322 kernel: vgaarb: loaded
Feb 8 23:23:25.792329 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:23:25.792336 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:23:25.792343 kernel: PTP clock support registered
Feb 8 23:23:25.792350 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:23:25.792357 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 8 23:23:25.792365 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 8 23:23:25.792372 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 8 23:23:25.792379 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 8 23:23:25.792386 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 8 23:23:25.792392 kernel: clocksource: Switched to clocksource kvm-clock
Feb 8 23:23:25.792399 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:23:25.792406 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:23:25.792413 kernel: pnp: PnP ACPI init
Feb 8 23:23:25.792496 kernel: pnp 00:02: [dma 2]
Feb 8 23:23:25.792508 kernel: pnp: PnP ACPI: found 6 devices
Feb 8 23:23:25.792515 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:23:25.792522 kernel: NET: Registered PF_INET protocol family
Feb 8 23:23:25.792529 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:23:25.792537 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 8 23:23:25.792543 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:23:25.792550 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 8 23:23:25.792566 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 8 23:23:25.792574 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 8 23:23:25.792581 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 8 23:23:25.792588 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 8 23:23:25.792595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:23:25.792601 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:23:25.792664 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 8 23:23:25.792725 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 8 23:23:25.792784 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 8 23:23:25.792844 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 8 23:23:25.792907 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 8 23:23:25.792976 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 8 23:23:25.793043 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 8 23:23:25.793126 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 8 23:23:25.793136 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:23:25.793143 kernel: Initialise system trusted keyrings
Feb 8 23:23:25.793150 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 8 23:23:25.793157 kernel: Key type asymmetric registered
Feb 8 23:23:25.793166 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:23:25.793173 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:23:25.793179 kernel: io scheduler mq-deadline registered
Feb 8 23:23:25.793186 kernel: io scheduler kyber registered
Feb 8 23:23:25.793193 kernel: io scheduler bfq registered
Feb 8 23:23:25.793200 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:23:25.793207 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 8 23:23:25.793214 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 8 23:23:25.793221 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 8 23:23:25.793228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:23:25.793235 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:23:25.793260 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 8 23:23:25.793267 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 8 23:23:25.793274 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 8 23:23:25.793281 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 8 23:23:25.793369 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 8 23:23:25.793457 kernel: rtc_cmos 00:05: registered as rtc0
Feb 8 23:23:25.793535 kernel: rtc_cmos 00:05: setting system clock to 2024-02-08T23:23:25 UTC (1707434605)
Feb 8 23:23:25.793631 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 8 23:23:25.793642 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:23:25.793649 kernel: Segment Routing with IPv6
Feb 8 23:23:25.793656 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:23:25.793662 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:23:25.793669 kernel: Key type dns_resolver registered
Feb 8 23:23:25.793676 kernel: IPI shorthand broadcast: enabled
Feb 8 23:23:25.793683 kernel: sched_clock: Marking stable (363281558, 71897778)->(440776496, -5597160)
Feb 8 23:23:25.793692 kernel: registered taskstats version 1
Feb 8 23:23:25.793699 kernel: Loading compiled-in X.509 certificates
Feb 8 23:23:25.793706 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:23:25.793712 kernel: Key type .fscrypt registered
Feb 8 23:23:25.793719 kernel: Key type fscrypt-provisioning registered
Feb 8 23:23:25.793726 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:23:25.793743 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:23:25.793750 kernel: ima: No architecture policies found
Feb 8 23:23:25.793757 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:23:25.793765 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:23:25.793772 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:23:25.793779 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:23:25.793785 kernel: Run /init as init process
Feb 8 23:23:25.793792 kernel: with arguments:
Feb 8 23:23:25.793798 kernel: /init
Feb 8 23:23:25.793805 kernel: with environment:
Feb 8 23:23:25.793822 kernel: HOME=/
Feb 8 23:23:25.793830 kernel: TERM=linux
Feb 8 23:23:25.793849 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:23:25.793859 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:23:25.793868 systemd[1]: Detected virtualization kvm.
Feb 8 23:23:25.793876 systemd[1]: Detected architecture x86-64.
Feb 8 23:23:25.793883 systemd[1]: Running in initrd.
Feb 8 23:23:25.793890 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:23:25.793898 systemd[1]: Hostname set to .
Feb 8 23:23:25.793907 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:23:25.793914 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:23:25.793922 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:23:25.793929 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:23:25.793947 systemd[1]: Reached target paths.target.
Feb 8 23:23:25.793954 systemd[1]: Reached target slices.target.
Feb 8 23:23:25.793961 systemd[1]: Reached target swap.target.
Feb 8 23:23:25.793969 systemd[1]: Reached target timers.target.
Feb 8 23:23:25.793978 systemd[1]: Listening on iscsid.socket.
Feb 8 23:23:25.793986 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:23:25.793993 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:23:25.794001 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:23:25.794008 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:23:25.794016 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:23:25.794023 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:23:25.794031 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:23:25.794049 systemd[1]: Reached target sockets.target.
Feb 8 23:23:25.794057 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:23:25.794065 systemd[1]: Finished network-cleanup.service.
Feb 8 23:23:25.794072 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:23:25.794080 systemd[1]: Starting systemd-journald.service...
Feb 8 23:23:25.794087 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:23:25.794096 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:23:25.794104 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:23:25.794111 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:23:25.794127 systemd-journald[198]: Journal started
Feb 8 23:23:25.794170 systemd-journald[198]: Runtime Journal (/run/log/journal/fdb016f795af4dc2bc207e28261d249f) is 6.0M, max 48.5M, 42.5M free.
Feb 8 23:23:25.789180 systemd-modules-load[199]: Inserted module 'overlay'
Feb 8 23:23:25.814049 kernel: audit: type=1130 audit(1707434605.808:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.814074 systemd[1]: Started systemd-journald.service.
Feb 8 23:23:25.814086 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:23:25.814102 kernel: Bridge firewalling registered
Feb 8 23:23:25.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.802502 systemd-resolved[200]: Positive Trust Anchors:
Feb 8 23:23:25.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.802510 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:23:25.818726 kernel: audit: type=1130 audit(1707434605.814:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.818741 kernel: audit: type=1130 audit(1707434605.818:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.802536 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:23:25.804654 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb 8 23:23:25.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.813986 systemd-modules-load[199]: Inserted module 'br_netfilter'
Feb 8 23:23:25.827853 kernel: audit: type=1130 audit(1707434605.824:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.815199 systemd[1]: Started systemd-resolved.service.
Feb 8 23:23:25.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.818842 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:23:25.825465 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:23:25.829064 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:23:25.831569 kernel: audit: type=1130 audit(1707434605.828:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.833464 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:23:25.834376 kernel: SCSI subsystem initialized
Feb 8 23:23:25.835132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:23:25.840583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:23:25.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.844579 kernel: audit: type=1130 audit(1707434605.841:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.844620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:23:25.847275 kernel: device-mapper: uevent: version 1.0.3
Feb 8 23:23:25.847329 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 8 23:23:25.850149 systemd-modules-load[199]: Inserted module 'dm_multipath'
Feb 8 23:23:25.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.850420 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:23:25.855378 kernel: audit: type=1130 audit(1707434605.851:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.855410 kernel: audit: type=1130 audit(1707434605.854:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.852035 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:23:25.856075 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:23:25.859280 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:23:25.865484 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:23:25.868795 kernel: audit: type=1130 audit(1707434605.865:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.871997 dracut-cmdline[218]: dracut-dracut-053
Feb 8 23:23:25.873740 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:23:25.926579 kernel: Loading iSCSI transport class v2.0-870.
Feb 8 23:23:25.936581 kernel: iscsi: registered transport (tcp)
Feb 8 23:23:25.955579 kernel: iscsi: registered transport (qla4xxx)
Feb 8 23:23:25.955600 kernel: QLogic iSCSI HBA Driver
Feb 8 23:23:25.977957 systemd[1]: Finished dracut-cmdline.service.
Feb 8 23:23:25.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:25.979099 systemd[1]: Starting dracut-pre-udev.service...
Feb 8 23:23:26.025587 kernel: raid6: avx2x4 gen() 29838 MB/s
Feb 8 23:23:26.042583 kernel: raid6: avx2x4 xor() 7094 MB/s
Feb 8 23:23:26.059582 kernel: raid6: avx2x2 gen() 31806 MB/s
Feb 8 23:23:26.076597 kernel: raid6: avx2x2 xor() 19260 MB/s
Feb 8 23:23:26.093584 kernel: raid6: avx2x1 gen() 26295 MB/s
Feb 8 23:23:26.110584 kernel: raid6: avx2x1 xor() 15309 MB/s
Feb 8 23:23:26.127582 kernel: raid6: sse2x4 gen() 14332 MB/s
Feb 8 23:23:26.144581 kernel: raid6: sse2x4 xor() 6932 MB/s
Feb 8 23:23:26.161586 kernel: raid6: sse2x2 gen() 16316 MB/s
Feb 8 23:23:26.178580 kernel: raid6: sse2x2 xor() 9840 MB/s
Feb 8 23:23:26.195597 kernel: raid6: sse2x1 gen() 11780 MB/s
Feb 8 23:23:26.212621 kernel: raid6: sse2x1 xor() 7819 MB/s
Feb 8 23:23:26.212660 kernel: raid6: using algorithm avx2x2 gen() 31806 MB/s
Feb 8 23:23:26.212670 kernel: raid6: .... xor() 19260 MB/s, rmw enabled
Feb 8 23:23:26.213577 kernel: raid6: using avx2x2 recovery algorithm
Feb 8 23:23:26.224578 kernel: xor: automatically using best checksumming function avx
Feb 8 23:23:26.312599 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 8 23:23:26.320878 systemd[1]: Finished dracut-pre-udev.service.
Feb 8 23:23:26.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:26.321000 audit: BPF prog-id=7 op=LOAD
Feb 8 23:23:26.321000 audit: BPF prog-id=8 op=LOAD
Feb 8 23:23:26.322492 systemd[1]: Starting systemd-udevd.service...
Feb 8 23:23:26.334610 systemd-udevd[401]: Using default interface naming scheme 'v252'.
Feb 8 23:23:26.338597 systemd[1]: Started systemd-udevd.service.
Feb 8 23:23:26.339539 systemd[1]: Starting dracut-pre-trigger.service...
Feb 8 23:23:26.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:26.349052 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Feb 8 23:23:26.370060 systemd[1]: Finished dracut-pre-trigger.service.
Feb 8 23:23:26.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:26.371838 systemd[1]: Starting systemd-udev-trigger.service...
Feb 8 23:23:26.403941 systemd[1]: Finished systemd-udev-trigger.service.
Feb 8 23:23:26.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:26.440579 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 8 23:23:26.443866 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 8 23:23:26.443916 kernel: GPT:9289727 != 19775487
Feb 8 23:23:26.443925 kernel: cryptd: max_cpu_qlen set to 1000
Feb 8 23:23:26.443934 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 8 23:23:26.444906 kernel: GPT:9289727 != 19775487
Feb 8 23:23:26.445757 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 8 23:23:26.445778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:23:26.447571 kernel: libata version 3.00 loaded.
Feb 8 23:23:26.450788 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 8 23:23:26.460573 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 8 23:23:26.460597 kernel: AES CTR mode by8 optimization enabled
Feb 8 23:23:26.463575 kernel: scsi host0: ata_piix
Feb 8 23:23:26.463718 kernel: scsi host1: ata_piix
Feb 8 23:23:26.463802 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 8 23:23:26.463812 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 8 23:23:26.623589 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 8 23:23:26.623651 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 8 23:23:26.638581 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463)
Feb 8 23:23:26.640872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 8 23:23:26.641639 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 8 23:23:26.645886 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 8 23:23:26.654218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 8 23:23:26.657572 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 8 23:23:26.657702 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 8 23:23:26.657783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:23:26.659079 systemd[1]: Starting disk-uuid.service...
Feb 8 23:23:26.666844 disk-uuid[534]: Primary Header is updated.
Feb 8 23:23:26.666844 disk-uuid[534]: Secondary Entries is updated.
Feb 8 23:23:26.666844 disk-uuid[534]: Secondary Header is updated.
Feb 8 23:23:26.669571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:23:26.671575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:23:26.673568 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 8 23:23:27.673344 disk-uuid[535]: The operation has completed successfully.
Feb 8 23:23:27.674613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:23:27.693121 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 8 23:23:27.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.693209 systemd[1]: Finished disk-uuid.service.
Feb 8 23:23:27.700884 systemd[1]: Starting verity-setup.service...
Feb 8 23:23:27.712573 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 8 23:23:27.729337 systemd[1]: Found device dev-mapper-usr.device.
Feb 8 23:23:27.731683 systemd[1]: Mounting sysusr-usr.mount...
Feb 8 23:23:27.733260 systemd[1]: Finished verity-setup.service.
Feb 8 23:23:27.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.787575 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 8 23:23:27.787708 systemd[1]: Mounted sysusr-usr.mount.
Feb 8 23:23:27.788780 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 8 23:23:27.789849 systemd[1]: Starting ignition-setup.service...
Feb 8 23:23:27.790896 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 8 23:23:27.799744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:23:27.799770 kernel: BTRFS info (device vda6): using free space tree
Feb 8 23:23:27.799780 kernel: BTRFS info (device vda6): has skinny extents
Feb 8 23:23:27.806003 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 8 23:23:27.813892 systemd[1]: Finished ignition-setup.service.
Feb 8 23:23:27.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.815011 systemd[1]: Starting ignition-fetch-offline.service...
Feb 8 23:23:27.845997 ignition[640]: Ignition 2.14.0
Feb 8 23:23:27.846008 ignition[640]: Stage: fetch-offline
Feb 8 23:23:27.846048 ignition[640]: no configs at "/usr/lib/ignition/base.d"
Feb 8 23:23:27.846055 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:23:27.846136 ignition[640]: parsed url from cmdline: ""
Feb 8 23:23:27.846139 ignition[640]: no config URL provided
Feb 8 23:23:27.846143 ignition[640]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:23:27.846149 ignition[640]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:23:27.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.852000 audit: BPF prog-id=9 op=LOAD
Feb 8 23:23:27.851464 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 8 23:23:27.846163 ignition[640]: op(1): [started] loading QEMU firmware config module
Feb 8 23:23:27.853661 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:23:27.846168 ignition[640]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 8 23:23:27.849997 ignition[640]: op(1): [finished] loading QEMU firmware config module
Feb 8 23:23:27.863595 ignition[640]: parsing config with SHA512: 684e3c747b148f4762730c40bb8bce028b1ae1d4cc3e4a342088598e6484b41cdd97ad7fd3296f85810cb5d71d41000a425e5579b22425bcf837e5de50194dc2
Feb 8 23:23:27.880453 unknown[640]: fetched base config from "system"
Feb 8 23:23:27.880464 unknown[640]: fetched user config from "qemu"
Feb 8 23:23:27.880986 systemd-networkd[714]: lo: Link UP
Feb 8 23:23:27.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.881781 ignition[640]: fetch-offline: fetch-offline passed
Feb 8 23:23:27.880988 systemd-networkd[714]: lo: Gained carrier
Feb 8 23:23:27.881868 ignition[640]: Ignition finished successfully
Feb 8 23:23:27.881359 systemd-networkd[714]: Enumeration completed
Feb 8 23:23:27.881425 systemd[1]: Started systemd-networkd.service.
Feb 8 23:23:27.882440 systemd[1]: Reached target network.target.
Feb 8 23:23:27.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.883611 systemd[1]: Starting iscsiuio.service...
Feb 8 23:23:27.884642 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:23:27.885400 systemd-networkd[714]: eth0: Link UP
Feb 8 23:23:27.885403 systemd-networkd[714]: eth0: Gained carrier
Feb 8 23:23:27.886913 systemd[1]: Finished ignition-fetch-offline.service.
Feb 8 23:23:27.888103 systemd[1]: Started iscsiuio.service.
Feb 8 23:23:27.889139 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 8 23:23:27.889912 systemd[1]: Starting ignition-kargs.service...
Feb 8 23:23:27.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.896266 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:23:27.896266 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 8 23:23:27.896266 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 8 23:23:27.896266 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 8 23:23:27.896266 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:23:27.896266 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 8 23:23:27.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.891649 systemd[1]: Starting iscsid.service...
Feb 8 23:23:27.900132 ignition[720]: Ignition 2.14.0
Feb 8 23:23:27.895121 systemd[1]: Started iscsid.service.
Feb 8 23:23:27.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.900138 ignition[720]: Stage: kargs
Feb 8 23:23:27.896441 systemd[1]: Starting dracut-initqueue.service...
Feb 8 23:23:27.900241 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Feb 8 23:23:27.902481 systemd[1]: Finished ignition-kargs.service.
Feb 8 23:23:27.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.900249 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:23:27.904424 systemd[1]: Starting ignition-disks.service...
Feb 8 23:23:27.901086 ignition[720]: kargs: kargs passed
Feb 8 23:23:27.906806 systemd[1]: Finished dracut-initqueue.service.
Feb 8 23:23:27.901122 ignition[720]: Ignition finished successfully
Feb 8 23:23:27.907645 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 8 23:23:27.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.909866 ignition[735]: Ignition 2.14.0
Feb 8 23:23:27.907833 systemd[1]: Reached target remote-fs-pre.target.
Feb 8 23:23:27.909871 ignition[735]: Stage: disks
Feb 8 23:23:27.908845 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:23:27.909948 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Feb 8 23:23:27.909523 systemd[1]: Reached target remote-fs.target.
Feb 8 23:23:27.909955 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:23:27.910766 systemd[1]: Starting dracut-pre-mount.service...
Feb 8 23:23:27.910719 ignition[735]: disks: disks passed
Feb 8 23:23:27.911936 systemd[1]: Finished ignition-disks.service.
Feb 8 23:23:27.910747 ignition[735]: Ignition finished successfully
Feb 8 23:23:27.913103 systemd[1]: Reached target initrd-root-device.target.
Feb 8 23:23:27.914243 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:23:27.915014 systemd[1]: Reached target local-fs.target.
Feb 8 23:23:27.916385 systemd[1]: Reached target sysinit.target.
Feb 8 23:23:27.931457 systemd-fsck[748]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 8 23:23:27.918077 systemd[1]: Reached target basic.target.
Feb 8 23:23:27.919456 systemd[1]: Finished dracut-pre-mount.service.
Feb 8 23:23:27.920991 systemd[1]: Starting systemd-fsck-root.service...
Feb 8 23:23:27.935013 systemd[1]: Finished systemd-fsck-root.service.
Feb 8 23:23:27.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.936627 systemd[1]: Mounting sysroot.mount...
Feb 8 23:23:27.942156 systemd[1]: Mounted sysroot.mount.
Feb 8 23:23:27.944140 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 8 23:23:27.942503 systemd[1]: Reached target initrd-root-fs.target.
Feb 8 23:23:27.943549 systemd[1]: Mounting sysroot-usr.mount...
Feb 8 23:23:27.944395 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 8 23:23:27.944422 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 8 23:23:27.944440 systemd[1]: Reached target ignition-diskful.target.
Feb 8 23:23:27.945942 systemd[1]: Mounted sysroot-usr.mount.
Feb 8 23:23:27.947977 systemd[1]: Starting initrd-setup-root.service...
Feb 8 23:23:27.951439 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory
Feb 8 23:23:27.954214 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory
Feb 8 23:23:27.957216 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory
Feb 8 23:23:27.959669 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 8 23:23:27.980457 systemd[1]: Finished initrd-setup-root.service.
Feb 8 23:23:27.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.981642 systemd[1]: Starting ignition-mount.service...
Feb 8 23:23:27.982435 systemd[1]: Starting sysroot-boot.service...
Feb 8 23:23:27.987847 bash[800]: umount: /sysroot/usr/share/oem: not mounted.
Feb 8 23:23:27.994520 ignition[801]: INFO : Ignition 2.14.0
Feb 8 23:23:27.995325 ignition[801]: INFO : Stage: mount
Feb 8 23:23:27.995325 ignition[801]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 8 23:23:27.995325 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:23:27.998125 ignition[801]: INFO : mount: mount passed
Feb 8 23:23:27.998125 ignition[801]: INFO : Ignition finished successfully
Feb 8 23:23:27.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:27.996801 systemd[1]: Finished ignition-mount.service.
Feb 8 23:23:27.998302 systemd[1]: Finished sysroot-boot.service.
Feb 8 23:23:28.227222 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.86
Feb 8 23:23:28.227237 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Feb 8 23:23:28.739015 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 8 23:23:28.745277 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Feb 8 23:23:28.745307 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:23:28.745318 kernel: BTRFS info (device vda6): using free space tree
Feb 8 23:23:28.745836 kernel: BTRFS info (device vda6): has skinny extents
Feb 8 23:23:28.749127 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 8 23:23:28.750572 systemd[1]: Starting ignition-files.service...
Feb 8 23:23:28.762991 ignition[829]: INFO : Ignition 2.14.0
Feb 8 23:23:28.762991 ignition[829]: INFO : Stage: files
Feb 8 23:23:28.764392 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 8 23:23:28.764392 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:23:28.764392 ignition[829]: DEBUG : files: compiled without relabeling support, skipping
Feb 8 23:23:28.767057 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 8 23:23:28.767057 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 8 23:23:28.767057 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 8 23:23:28.767057 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 8 23:23:28.767057 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 8 23:23:28.766765 unknown[829]: wrote ssh authorized keys file for user: core
Feb 8 23:23:28.773427 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:23:28.773427 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 8 23:23:29.142062 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 8 23:23:29.358146 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 8 23:23:29.358146 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:23:29.361582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:23:29.361582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:23:29.667089 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:23:29.738811 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 8 23:23:29.740953 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:23:29.740953 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:23:29.740953 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:23:29.749692 systemd-networkd[714]: eth0: Gained IPv6LL
Feb 8 23:23:29.807726 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 8 23:23:30.095716 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 8 23:23:30.095716 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:23:30.099023 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:23:30.099023 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:23:30.151887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:23:31.186828 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 8 23:23:31.189027 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:23:31.189027 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:23:31.189027 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:23:31.189027 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:23:31.189027 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:23:31.264585 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 8 23:23:31.266739 ignition[829]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 8 23:23:31.296669 ignition[829]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 8 23:23:31.298083 ignition[829]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 8 23:23:31.299167 ignition[829]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:23:31.300453 ignition[829]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:23:31.300453 ignition[829]: INFO : files: files passed
Feb 8 23:23:31.302581 ignition[829]: INFO : Ignition finished successfully
Feb 8 23:23:31.304381 systemd[1]: Finished ignition-files.service.
Feb 8 23:23:31.308428 kernel: kauditd_printk_skb: 25 callbacks suppressed
Feb 8 23:23:31.308456 kernel: audit: type=1130 audit(1707434611.303:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.308594 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:23:31.310234 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:23:31.311214 systemd[1]: Starting ignition-quench.service...
Feb 8 23:23:31.313718 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:23:31.313804 systemd[1]: Finished ignition-quench.service.
Feb 8 23:23:31.319779 kernel: audit: type=1130 audit(1707434611.315:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.319793 kernel: audit: type=1131 audit(1707434611.315:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 8 23:23:31.323167 initrd-setup-root-after-ignition[855]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 8 23:23:31.325847 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:23:31.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.326587 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:23:31.332164 kernel: audit: type=1130 audit(1707434611.327:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.328145 systemd[1]: Reached target ignition-complete.target. Feb 8 23:23:31.331385 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:23:31.341689 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:23:31.341806 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:23:31.348421 kernel: audit: type=1130 audit(1707434611.343:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.348459 kernel: audit: type=1131 audit(1707434611.343:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:31.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.343229 systemd[1]: Reached target initrd-fs.target. Feb 8 23:23:31.348435 systemd[1]: Reached target initrd.target. Feb 8 23:23:31.349111 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:23:31.350239 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:23:31.359403 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:23:31.363375 kernel: audit: type=1130 audit(1707434611.359:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.361002 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:23:31.369274 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:23:31.370070 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:23:31.371290 systemd[1]: Stopped target timers.target. Feb 8 23:23:31.372575 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:23:31.377134 kernel: audit: type=1131 audit(1707434611.373:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:31.372708 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 8 23:23:31.373941 systemd[1]: Stopped target initrd.target.
Feb 8 23:23:31.377231 systemd[1]: Stopped target basic.target.
Feb 8 23:23:31.378506 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:23:31.379780 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:23:31.381045 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:23:31.382469 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:23:31.383748 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:23:31.385110 systemd[1]: Stopped target sysinit.target.
Feb 8 23:23:31.386406 systemd[1]: Stopped target local-fs.target.
Feb 8 23:23:31.387691 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:23:31.388968 systemd[1]: Stopped target swap.target.
Feb 8 23:23:31.394700 kernel: audit: type=1131 audit(1707434611.390:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.390129 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:23:31.390279 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:23:31.399578 kernel: audit: type=1131 audit(1707434611.396:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.391690 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:23:31.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.394733 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:23:31.394864 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:23:31.396317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:23:31.396435 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:23:31.399755 systemd[1]: Stopped target paths.target.
Feb 8 23:23:31.400875 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:23:31.404600 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:23:31.405473 systemd[1]: Stopped target slices.target.
Feb 8 23:23:31.406788 systemd[1]: Stopped target sockets.target.
Feb 8 23:23:31.407980 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:23:31.408079 systemd[1]: Closed iscsid.socket.
Feb 8 23:23:31.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.409203 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:23:31.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.409331 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:23:31.410748 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:23:31.410865 systemd[1]: Stopped ignition-files.service.
Feb 8 23:23:31.412986 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:23:31.414480 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:23:31.416456 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:23:31.417605 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:23:31.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.417774 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:23:31.419025 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:23:31.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.424465 ignition[870]: INFO : Ignition 2.14.0
Feb 8 23:23:31.424465 ignition[870]: INFO : Stage: umount
Feb 8 23:23:31.424465 ignition[870]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 8 23:23:31.424465 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 8 23:23:31.424465 ignition[870]: INFO : umount: umount passed
Feb 8 23:23:31.424465 ignition[870]: INFO : Ignition finished successfully
Feb 8 23:23:31.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.419107 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:23:31.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.421864 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:23:31.421962 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:23:31.423184 systemd[1]: Stopped target network.target.
Feb 8 23:23:31.424750 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:23:31.424779 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:23:31.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.425986 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:23:31.426486 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:23:31.427107 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:23:31.427181 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:23:31.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.429605 systemd-networkd[714]: eth0: DHCPv6 lease lost
Feb 8 23:23:31.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.431259 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:23:31.441000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:23:31.431327 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:23:31.435356 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:23:31.443000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:23:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.435670 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:23:31.435743 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:23:31.438074 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:23:31.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.438183 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:23:31.440062 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:23:31.440157 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:23:31.441873 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:23:31.441907 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:23:31.442801 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:23:31.442845 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:23:31.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.443898 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:23:31.443943 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:23:31.444161 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:23:31.444198 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:23:31.444300 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:23:31.444336 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:23:31.445246 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:23:31.445537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:23:31.445601 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:23:31.445836 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:23:31.445877 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:23:31.447999 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:23:31.448037 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:23:31.448491 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:23:31.450164 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:23:31.455454 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:23:31.456147 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:23:31.459323 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:23:31.460144 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:23:31.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.471742 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:23:31.471793 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:23:31.474153 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:23:31.475060 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:23:31.476506 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:23:31.477332 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:23:31.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.478799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:23:31.479615 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:23:31.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.480905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:23:31.480949 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:23:31.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.483802 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:23:31.485146 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 8 23:23:31.485191 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 8 23:23:31.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.487504 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:23:31.488318 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:23:31.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.489761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:23:31.489807 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:23:31.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.493253 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 8 23:23:31.495027 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:23:31.496020 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:23:31.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:31.497888 systemd[1]: Reached target initrd-switch-root.target.
Feb 8 23:23:31.499913 systemd[1]: Starting initrd-switch-root.service...
Feb 8 23:23:31.516089 systemd[1]: Switching root.
Feb 8 23:23:31.534655 iscsid[721]: iscsid shutting down.
Feb 8 23:23:31.535349 systemd-journald[198]: Journal stopped
Feb 8 23:23:34.685122 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Feb 8 23:23:34.685170 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 8 23:23:34.685182 kernel: SELinux: Class anon_inode not defined in policy.
Feb 8 23:23:34.685192 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 8 23:23:34.685201 kernel: SELinux: policy capability network_peer_controls=1
Feb 8 23:23:34.685211 kernel: SELinux: policy capability open_perms=1
Feb 8 23:23:34.685222 kernel: SELinux: policy capability extended_socket_class=1
Feb 8 23:23:34.685234 kernel: SELinux: policy capability always_check_network=0
Feb 8 23:23:34.685243 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 8 23:23:34.685252 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 8 23:23:34.685263 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 8 23:23:34.685275 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 8 23:23:34.685286 systemd[1]: Successfully loaded SELinux policy in 37.044ms.
Feb 8 23:23:34.685305 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.277ms.
Feb 8 23:23:34.685317 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:23:34.685328 systemd[1]: Detected virtualization kvm.
Feb 8 23:23:34.685338 systemd[1]: Detected architecture x86-64.
Feb 8 23:23:34.685347 systemd[1]: Detected first boot.
Feb 8 23:23:34.685357 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:23:34.685368 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 8 23:23:34.685378 systemd[1]: Populated /etc with preset unit settings.
Feb 8 23:23:34.685388 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:23:34.685400 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:23:34.685411 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:23:34.685422 systemd[1]: iscsid.service: Deactivated successfully.
Feb 8 23:23:34.685432 systemd[1]: Stopped iscsid.service.
Feb 8 23:23:34.685442 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 8 23:23:34.685463 systemd[1]: Stopped initrd-switch-root.service.
Feb 8 23:23:34.685473 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 8 23:23:34.685483 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 8 23:23:34.685496 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 8 23:23:34.685506 systemd[1]: Created slice system-getty.slice.
Feb 8 23:23:34.685515 systemd[1]: Created slice system-modprobe.slice.
Feb 8 23:23:34.685527 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 8 23:23:34.685537 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 8 23:23:34.685547 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 8 23:23:34.685568 systemd[1]: Created slice user.slice.
Feb 8 23:23:34.685578 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:23:34.685588 systemd[1]: Started systemd-ask-password-wall.path.
Feb 8 23:23:34.685600 systemd[1]: Set up automount boot.automount.
Feb 8 23:23:34.685610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 8 23:23:34.685620 systemd[1]: Stopped target initrd-switch-root.target.
Feb 8 23:23:34.685630 systemd[1]: Stopped target initrd-fs.target.
Feb 8 23:23:34.685640 systemd[1]: Stopped target initrd-root-fs.target.
Feb 8 23:23:34.685650 systemd[1]: Reached target integritysetup.target.
Feb 8 23:23:34.685660 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:23:34.685670 systemd[1]: Reached target remote-fs.target.
Feb 8 23:23:34.685681 systemd[1]: Reached target slices.target.
Feb 8 23:23:34.685693 systemd[1]: Reached target swap.target.
Feb 8 23:23:34.685705 systemd[1]: Reached target torcx.target.
Feb 8 23:23:34.685720 systemd[1]: Reached target veritysetup.target.
Feb 8 23:23:34.685740 systemd[1]: Listening on systemd-coredump.socket.
Feb 8 23:23:34.685753 systemd[1]: Listening on systemd-initctl.socket.
Feb 8 23:23:34.685763 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:23:34.685775 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:23:34.685785 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:23:34.685797 systemd[1]: Listening on systemd-userdbd.socket.
Feb 8 23:23:34.685807 systemd[1]: Mounting dev-hugepages.mount...
Feb 8 23:23:34.685817 systemd[1]: Mounting dev-mqueue.mount...
Feb 8 23:23:34.685827 systemd[1]: Mounting media.mount...
Feb 8 23:23:34.685837 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 8 23:23:34.685847 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 8 23:23:34.685857 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 8 23:23:34.685867 systemd[1]: Mounting tmp.mount...
Feb 8 23:23:34.685877 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 8 23:23:34.685888 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 8 23:23:34.685899 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:23:34.685908 systemd[1]: Starting modprobe@configfs.service...
Feb 8 23:23:34.685918 systemd[1]: Starting modprobe@dm_mod.service...
Feb 8 23:23:34.685928 systemd[1]: Starting modprobe@drm.service...
Feb 8 23:23:34.685938 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 8 23:23:34.685948 systemd[1]: Starting modprobe@fuse.service...
Feb 8 23:23:34.685959 systemd[1]: Starting modprobe@loop.service...
Feb 8 23:23:34.685972 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 8 23:23:34.685986 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 8 23:23:34.685998 systemd[1]: Stopped systemd-fsck-root.service.
Feb 8 23:23:34.686010 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 8 23:23:34.686023 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 8 23:23:34.686035 kernel: loop: module loaded
Feb 8 23:23:34.686059 systemd[1]: Stopped systemd-journald.service.
Feb 8 23:23:34.686070 kernel: fuse: init (API version 7.34)
Feb 8 23:23:34.686081 systemd[1]: Starting systemd-journald.service...
Feb 8 23:23:34.686093 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:23:34.686105 systemd[1]: Starting systemd-network-generator.service...
Feb 8 23:23:34.686117 systemd[1]: Starting systemd-remount-fs.service...
Feb 8 23:23:34.686128 systemd[1]: Starting systemd-udev-trigger.service...
Feb 8 23:23:34.686139 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 8 23:23:34.686151 systemd[1]: Stopped verity-setup.service.
Feb 8 23:23:34.686163 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 8 23:23:34.686176 systemd-journald[992]: Journal started
Feb 8 23:23:34.686219 systemd-journald[992]: Runtime Journal (/run/log/journal/fdb016f795af4dc2bc207e28261d249f) is 6.0M, max 48.5M, 42.5M free.
Feb 8 23:23:31.591000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 8 23:23:31.722000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 8 23:23:31.722000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 8 23:23:31.722000 audit: BPF prog-id=10 op=LOAD
Feb 8 23:23:31.722000 audit: BPF prog-id=10 op=UNLOAD
Feb 8 23:23:31.722000 audit: BPF prog-id=11 op=LOAD
Feb 8 23:23:31.722000 audit: BPF prog-id=11 op=UNLOAD
Feb 8 23:23:31.754000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 8 23:23:31.754000 audit[903]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:23:31.754000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 8 23:23:31.755000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 8 23:23:31.755000 audit[903]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155989 a2=1ed a3=0 items=2 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:23:31.755000 audit: CWD cwd="/"
Feb 8 23:23:31.755000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:23:31.755000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:23:31.755000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 8 23:23:34.561000 audit: BPF prog-id=12 op=LOAD
Feb 8 23:23:34.561000 audit: BPF prog-id=3 op=UNLOAD
Feb 8 23:23:34.561000 audit: BPF prog-id=13 op=LOAD
Feb 8 23:23:34.561000 audit: BPF prog-id=14 op=LOAD
Feb 8 23:23:34.561000 audit: BPF prog-id=4 op=UNLOAD
Feb 8 23:23:34.562000 audit: BPF prog-id=5 op=UNLOAD
Feb 8 23:23:34.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.575000 audit: BPF prog-id=12 op=UNLOAD
Feb 8 23:23:34.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:23:34.671000 audit: BPF prog-id=15 op=LOAD
Feb 8 23:23:34.671000 audit: BPF prog-id=16 op=LOAD
Feb 8 23:23:34.671000 audit: BPF prog-id=17 op=LOAD
Feb 8 23:23:34.671000 audit: BPF prog-id=13 op=UNLOAD
Feb 8 23:23:34.671000 audit: BPF prog-id=14 op=UNLOAD
Feb 8 23:23:34.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 8 23:23:34.683000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:23:34.683000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe3f0c6ce0 a2=4000 a3=7ffe3f0c6d7c items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:34.683000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:23:31.753440 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:23:34.560526 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:23:31.753732 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:23:34.560537 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:23:31.753751 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:23:34.563404 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 8 23:23:31.753781 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:23:31.753791 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:23:31.753828 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:23:31.753843 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:23:31.754021 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:23:31.754058 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:23:31.754069 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:23:31.754368 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:23:31.754397 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:23:31.754413 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" 
level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:23:31.754425 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:23:31.754439 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:23:31.754451 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:23:34.307737 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:34.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.307983 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:34.688577 systemd[1]: Started systemd-journald.service. 
Feb 8 23:23:34.308075 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:34.308212 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:23:34.308255 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:23:34.308305 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-02-08T23:23:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:23:34.688856 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:23:34.689463 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:23:34.690061 systemd[1]: Mounted media.mount. Feb 8 23:23:34.690608 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:23:34.691235 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:23:34.691920 systemd[1]: Mounted tmp.mount. Feb 8 23:23:34.692675 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:23:34.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:34.693525 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:23:34.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.694366 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:23:34.694596 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:23:34.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.695389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:23:34.695579 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:23:34.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.696374 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:23:34.696549 systemd[1]: Finished modprobe@drm.service. Feb 8 23:23:34.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:34.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.697301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:23:34.697450 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:23:34.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.698247 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:23:34.698391 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:23:34.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.699278 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:23:34.699465 systemd[1]: Finished modprobe@loop.service. Feb 8 23:23:34.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:34.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.700372 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:23:34.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.701231 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:23:34.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.702168 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:23:34.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.703164 systemd[1]: Reached target network-pre.target. Feb 8 23:23:34.704734 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:23:34.706247 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:23:34.706868 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:23:34.708024 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:23:34.709950 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:23:34.712504 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 8 23:23:34.714738 systemd-journald[992]: Time spent on flushing to /var/log/journal/fdb016f795af4dc2bc207e28261d249f is 13.049ms for 1101 entries. Feb 8 23:23:34.714738 systemd-journald[992]: System Journal (/var/log/journal/fdb016f795af4dc2bc207e28261d249f) is 8.0M, max 195.6M, 187.6M free. Feb 8 23:23:34.911848 systemd-journald[992]: Received client request to flush runtime journal. Feb 8 23:23:34.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.718351 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:23:34.731125 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:23:34.733145 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:23:34.746189 systemd[1]: Starting systemd-sysusers.service... 
Feb 8 23:23:34.749208 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:23:34.912994 udevadm[1007]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:23:34.749932 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:23:34.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:34.750672 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:23:34.752980 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:23:34.759706 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:23:34.762578 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:23:34.764138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:23:34.779829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:23:34.900759 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:23:34.901649 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:23:34.912630 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:23:35.333007 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:23:35.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.333000 audit: BPF prog-id=18 op=LOAD Feb 8 23:23:35.333000 audit: BPF prog-id=19 op=LOAD Feb 8 23:23:35.333000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:23:35.333000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:23:35.335114 systemd[1]: Starting systemd-udevd.service... Feb 8 23:23:35.350205 systemd-udevd[1011]: Using default interface naming scheme 'v252'. Feb 8 23:23:35.362262 systemd[1]: Started systemd-udevd.service. 
Feb 8 23:23:35.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.363000 audit: BPF prog-id=20 op=LOAD Feb 8 23:23:35.367509 systemd[1]: Starting systemd-networkd.service... Feb 8 23:23:35.370000 audit: BPF prog-id=21 op=LOAD Feb 8 23:23:35.370000 audit: BPF prog-id=22 op=LOAD Feb 8 23:23:35.370000 audit: BPF prog-id=23 op=LOAD Feb 8 23:23:35.371431 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:23:35.395596 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:23:35.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.397584 systemd[1]: Started systemd-userdbd.service. Feb 8 23:23:35.416298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:23:35.432572 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:23:35.451580 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:23:35.454000 audit[1020]: AVC avc: denied { confidentiality } for pid=1020 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:23:35.464189 systemd-networkd[1025]: lo: Link UP Feb 8 23:23:35.464543 systemd-networkd[1025]: lo: Gained carrier Feb 8 23:23:35.465177 systemd-networkd[1025]: Enumeration completed Feb 8 23:23:35.465336 systemd[1]: Started systemd-networkd.service. Feb 8 23:23:35.465477 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:23:35.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.466847 systemd-networkd[1025]: eth0: Link UP Feb 8 23:23:35.466933 systemd-networkd[1025]: eth0: Gained carrier Feb 8 23:23:35.454000 audit[1020]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dc9152e990 a1=32194 a2=7f4cbe5bbbc5 a3=5 items=108 ppid=1011 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:35.454000 audit: CWD cwd="/" Feb 8 23:23:35.454000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=1 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=2 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=3 name=(null) inode=15548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=4 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=5 name=(null) inode=15549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=6 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=7 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=8 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=9 name=(null) inode=15551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=10 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=11 name=(null) inode=15552 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=12 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=13 name=(null) inode=15553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=14 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
8 23:23:35.454000 audit: PATH item=15 name=(null) inode=15554 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=16 name=(null) inode=15550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=17 name=(null) inode=15555 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=18 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=19 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=20 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=21 name=(null) inode=15557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=22 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=23 name=(null) inode=15558 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=24 name=(null) 
inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=25 name=(null) inode=15559 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=26 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=27 name=(null) inode=15560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=28 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=29 name=(null) inode=15561 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=30 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=31 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=32 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=33 name=(null) inode=15563 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=34 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=35 name=(null) inode=15564 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=36 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=37 name=(null) inode=15565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=38 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=39 name=(null) inode=15566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=40 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=41 name=(null) inode=15567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=42 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=43 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=44 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=45 name=(null) inode=15569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=46 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=47 name=(null) inode=15570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=48 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=49 name=(null) inode=15571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=50 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=51 name=(null) inode=15572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=52 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=53 name=(null) inode=15573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=55 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=56 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=57 name=(null) inode=15575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=58 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=59 name=(null) inode=15576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=60 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: 
PATH item=61 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=62 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=63 name=(null) inode=15578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=64 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=65 name=(null) inode=15579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=66 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=67 name=(null) inode=15580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=68 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=69 name=(null) inode=15581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=70 name=(null) inode=15577 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=71 name=(null) inode=15582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=72 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=73 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=74 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=75 name=(null) inode=15584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=76 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=77 name=(null) inode=15585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=78 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=79 name=(null) inode=15586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=80 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=81 name=(null) inode=15587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=82 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=83 name=(null) inode=15588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=84 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=85 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=86 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=87 name=(null) inode=15590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=88 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=89 name=(null) inode=15591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=90 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=91 name=(null) inode=15592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=92 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=93 name=(null) inode=15593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=94 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=95 name=(null) inode=15594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=96 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=97 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 8 23:23:35.454000 audit: PATH item=98 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=99 name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=100 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=101 name=(null) inode=15597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=102 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=103 name=(null) inode=15598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=104 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=105 name=(null) inode=15599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=106 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PATH item=107 
name=(null) inode=15600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:23:35.454000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:23:35.484678 systemd-networkd[1025]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:23:35.489582 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:23:35.493578 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:23:35.498584 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:23:35.537590 kernel: kvm: Nested Virtualization enabled Feb 8 23:23:35.537767 kernel: SVM: kvm: Nested Paging enabled Feb 8 23:23:35.537825 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 8 23:23:35.537851 kernel: SVM: Virtual GIF supported Feb 8 23:23:35.557576 kernel: EDAC MC: Ver: 3.0.0 Feb 8 23:23:35.575981 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:23:35.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.578030 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:23:35.584982 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:23:35.607138 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:23:35.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.607913 systemd[1]: Reached target cryptsetup.target. Feb 8 23:23:35.609355 systemd[1]: Starting lvm2-activation.service... Feb 8 23:23:35.612357 lvm[1047]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 8 23:23:35.638635 systemd[1]: Finished lvm2-activation.service. Feb 8 23:23:35.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.639715 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:23:35.640616 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:23:35.640649 systemd[1]: Reached target local-fs.target. Feb 8 23:23:35.641465 systemd[1]: Reached target machines.target. Feb 8 23:23:35.643478 systemd[1]: Starting ldconfig.service... Feb 8 23:23:35.644309 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:23:35.644360 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:23:35.645275 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:23:35.646764 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:23:35.649102 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:23:35.650932 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:23:35.650997 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:23:35.652182 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:23:35.653121 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1049 (bootctl) Feb 8 23:23:35.654400 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:23:35.659522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 8 23:23:35.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.667206 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:23:35.668228 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:23:35.669682 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:23:35.691165 systemd-fsck[1058]: fsck.fat 4.2 (2021-01-31) Feb 8 23:23:35.691165 systemd-fsck[1058]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:23:35.692811 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:23:35.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:35.695796 systemd[1]: Mounting boot.mount... Feb 8 23:23:35.724405 systemd[1]: Mounted boot.mount. Feb 8 23:23:36.078109 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:23:36.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.108205 ldconfig[1048]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:23:36.375709 systemd[1]: Finished ldconfig.service. Feb 8 23:23:36.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 8 23:23:36.377255 kernel: kauditd_printk_skb: 225 callbacks suppressed Feb 8 23:23:36.377303 kernel: audit: type=1130 audit(1707434616.375:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.382295 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:23:36.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.384534 systemd[1]: Starting audit-rules.service... Feb 8 23:23:36.387586 kernel: audit: type=1130 audit(1707434616.382:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.388680 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:23:36.391000 audit: BPF prog-id=24 op=LOAD Feb 8 23:23:36.390289 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:23:36.393621 kernel: audit: type=1334 audit(1707434616.391:153): prog-id=24 op=LOAD Feb 8 23:23:36.394825 systemd[1]: Starting systemd-resolved.service... Feb 8 23:23:36.396000 audit: BPF prog-id=25 op=LOAD Feb 8 23:23:36.397721 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:23:36.398577 kernel: audit: type=1334 audit(1707434616.396:154): prog-id=25 op=LOAD Feb 8 23:23:36.400109 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:23:36.402203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:23:36.403031 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 8 23:23:36.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.404356 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:23:36.407575 kernel: audit: type=1130 audit(1707434616.403:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.408034 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:23:36.410624 kernel: audit: type=1130 audit(1707434616.407:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.410670 kernel: audit: type=1127 audit(1707434616.409:157): pid=1073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:23:36.409000 audit[1073]: SYSTEM_BOOT pid=1073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Feb 8 23:23:36.413025 augenrules[1082]: No rules Feb 8 23:23:36.412000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:23:36.414597 kernel: audit: type=1305 audit(1707434616.412:158): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:23:36.414696 kernel: audit: type=1300 audit(1707434616.412:158): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd38488150 a2=420 a3=0 items=0 ppid=1062 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:36.412000 audit[1082]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd38488150 a2=420 a3=0 items=0 ppid=1062 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:23:36.412000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:23:36.419479 kernel: audit: type=1327 audit(1707434616.412:158): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:23:36.420243 systemd[1]: Finished audit-rules.service. Feb 8 23:23:36.423668 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:23:36.426198 systemd[1]: Starting systemd-update-done.service... Feb 8 23:23:36.427149 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:23:36.432207 systemd[1]: Finished systemd-update-done.service. Feb 8 23:23:36.463316 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:23:36.464244 systemd[1]: Reached target time-set.target. 
Feb 8 23:23:37.440218 systemd-timesyncd[1070]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 8 23:23:37.440280 systemd-timesyncd[1070]: Initial clock synchronization to Thu 2024-02-08 23:23:37.440131 UTC. Feb 8 23:23:37.440876 systemd-resolved[1069]: Positive Trust Anchors: Feb 8 23:23:37.440889 systemd-resolved[1069]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:23:37.440917 systemd-resolved[1069]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:23:37.447881 systemd-resolved[1069]: Defaulting to hostname 'linux'. Feb 8 23:23:37.449234 systemd[1]: Started systemd-resolved.service. Feb 8 23:23:37.449968 systemd[1]: Reached target network.target. Feb 8 23:23:37.450576 systemd[1]: Reached target nss-lookup.target. Feb 8 23:23:37.451217 systemd[1]: Reached target sysinit.target. Feb 8 23:23:37.451901 systemd[1]: Started motdgen.path. Feb 8 23:23:37.452479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:23:37.453458 systemd[1]: Started logrotate.timer. Feb 8 23:23:37.454093 systemd[1]: Started mdadm.timer. Feb 8 23:23:37.454666 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:23:37.455339 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:23:37.455359 systemd[1]: Reached target paths.target. Feb 8 23:23:37.455959 systemd[1]: Reached target timers.target. Feb 8 23:23:37.456837 systemd[1]: Listening on dbus.socket. Feb 8 23:23:37.458257 systemd[1]: Starting docker.socket... 
Feb 8 23:23:37.460585 systemd[1]: Listening on sshd.socket. Feb 8 23:23:37.461233 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:23:37.461541 systemd[1]: Listening on docker.socket. Feb 8 23:23:37.462174 systemd[1]: Reached target sockets.target. Feb 8 23:23:37.462812 systemd[1]: Reached target basic.target. Feb 8 23:23:37.463438 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:23:37.463458 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:23:37.464118 systemd[1]: Starting containerd.service... Feb 8 23:23:37.465520 systemd[1]: Starting dbus.service... Feb 8 23:23:37.466705 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:23:37.468037 systemd[1]: Starting extend-filesystems.service... Feb 8 23:23:37.468791 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:23:37.469494 systemd[1]: Starting motdgen.service... Feb 8 23:23:37.470690 jq[1093]: false Feb 8 23:23:37.471008 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:23:37.472901 systemd[1]: Starting prepare-critools.service... Feb 8 23:23:37.474413 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:23:37.475849 systemd[1]: Starting sshd-keygen.service... Feb 8 23:23:37.478547 systemd[1]: Starting systemd-logind.service... Feb 8 23:23:37.479268 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:23:37.479304 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 8 23:23:37.479656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:23:37.481565 systemd[1]: Starting update-engine.service... Feb 8 23:23:37.483070 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:23:37.487525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:23:37.487671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:23:37.492718 jq[1109]: true Feb 8 23:23:37.488743 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:23:37.488882 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:23:37.495191 dbus-daemon[1092]: [system] SELinux support is enabled Feb 8 23:23:37.495292 systemd[1]: Started dbus.service. Feb 8 23:23:37.496886 tar[1115]: ./ Feb 8 23:23:37.496886 tar[1115]: ./loopback Feb 8 23:23:37.497531 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:23:37.497553 systemd[1]: Reached target system-config.target. Feb 8 23:23:37.498243 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:23:37.498261 systemd[1]: Reached target user-config.target. Feb 8 23:23:37.502399 jq[1117]: true Feb 8 23:23:37.503257 tar[1116]: crictl Feb 8 23:23:37.506129 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:23:37.506260 systemd[1]: Finished motdgen.service. Feb 8 23:23:37.512703 update_engine[1107]: I0208 23:23:37.512600 1107 main.cc:92] Flatcar Update Engine starting Feb 8 23:23:37.515076 systemd[1]: Started update-engine.service. 
Feb 8 23:23:37.515195 update_engine[1107]: I0208 23:23:37.515180 1107 update_check_scheduler.cc:74] Next update check in 6m12s Feb 8 23:23:37.515935 extend-filesystems[1094]: Found sr0 Feb 8 23:23:37.515935 extend-filesystems[1094]: Found vda Feb 8 23:23:37.515935 extend-filesystems[1094]: Found vda1 Feb 8 23:23:37.515935 extend-filesystems[1094]: Found vda2 Feb 8 23:23:37.524826 extend-filesystems[1094]: Found vda3 Feb 8 23:23:37.524826 extend-filesystems[1094]: Found usr Feb 8 23:23:37.524826 extend-filesystems[1094]: Found vda4 Feb 8 23:23:37.524826 extend-filesystems[1094]: Found vda6 Feb 8 23:23:37.524826 extend-filesystems[1094]: Found vda7 Feb 8 23:23:37.524826 extend-filesystems[1094]: Found vda9 Feb 8 23:23:37.524826 extend-filesystems[1094]: Checking size of /dev/vda9 Feb 8 23:23:37.524826 extend-filesystems[1094]: Resized partition /dev/vda9 Feb 8 23:23:37.536840 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 8 23:23:37.517452 systemd[1]: Started locksmithd.service. Feb 8 23:23:37.542639 env[1119]: time="2024-02-08T23:23:37.522320794Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:23:37.542821 extend-filesystems[1142]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:23:37.529748 systemd-logind[1105]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:23:37.529762 systemd-logind[1105]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:23:37.530051 systemd-logind[1105]: New seat seat0. Feb 8 23:23:37.534858 systemd[1]: Started systemd-logind.service. Feb 8 23:23:37.555348 env[1119]: time="2024-02-08T23:23:37.555305541Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 8 23:23:37.560390 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 8 23:23:37.582985 extend-filesystems[1142]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:23:37.582985 extend-filesystems[1142]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 8 23:23:37.582985 extend-filesystems[1142]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 8 23:23:37.586174 extend-filesystems[1094]: Resized filesystem in /dev/vda9 Feb 8 23:23:37.585094 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:23:37.587636 env[1119]: time="2024-02-08T23:23:37.584788662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:37.586135 systemd[1]: Finished extend-filesystems.service. Feb 8 23:23:37.589129 bash[1149]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:23:37.589882 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596405744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596443194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596691500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596711487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596727818Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596741834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.596832865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.597135632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.597272078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:23:37.597544 env[1119]: time="2024-02-08T23:23:37.597298037Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 8 23:23:37.597800 env[1119]: time="2024-02-08T23:23:37.597344624Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:23:37.597800 env[1119]: time="2024-02-08T23:23:37.597376824Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:23:37.598289 tar[1115]: ./bandwidth Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602453613Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602477208Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602488619Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602520499Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602534355Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602550295Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602562668Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602574260Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602600789Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602612692Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602624754Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602637999Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602722027Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:23:37.604016 env[1119]: time="2024-02-08T23:23:37.602793621Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.602992644Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603012592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603025125Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603069769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603081250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603091900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603101659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603111617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603123820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603135833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603146553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603159798Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603247893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603261027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604308 env[1119]: time="2024-02-08T23:23:37.603274563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604604 env[1119]: time="2024-02-08T23:23:37.603287998Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:23:37.604604 env[1119]: time="2024-02-08T23:23:37.603300692Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:23:37.604604 env[1119]: time="2024-02-08T23:23:37.603310590Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:23:37.604604 env[1119]: time="2024-02-08T23:23:37.603325979Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:23:37.604604 env[1119]: time="2024-02-08T23:23:37.603357388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:23:37.604708 env[1119]: time="2024-02-08T23:23:37.603557493Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:23:37.604708 env[1119]: time="2024-02-08T23:23:37.603603960Z" level=info msg="Connect containerd service" Feb 8 23:23:37.604708 env[1119]: time="2024-02-08T23:23:37.603637563Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:23:37.605541 env[1119]: time="2024-02-08T23:23:37.605521997Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:23:37.605796 env[1119]: time="2024-02-08T23:23:37.605780682Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:23:37.605951 env[1119]: time="2024-02-08T23:23:37.605936063Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:23:37.606097 env[1119]: time="2024-02-08T23:23:37.606081686Z" level=info msg="containerd successfully booted in 0.084374s" Feb 8 23:23:37.606192 systemd[1]: Started containerd.service. 
Feb 8 23:23:37.609584 env[1119]: time="2024-02-08T23:23:37.605878635Z" level=info msg="Start subscribing containerd event" Feb 8 23:23:37.609870 env[1119]: time="2024-02-08T23:23:37.609854230Z" level=info msg="Start recovering state" Feb 8 23:23:37.610590 env[1119]: time="2024-02-08T23:23:37.610428356Z" level=info msg="Start event monitor" Feb 8 23:23:37.610915 env[1119]: time="2024-02-08T23:23:37.610865616Z" level=info msg="Start snapshots syncer" Feb 8 23:23:37.610973 env[1119]: time="2024-02-08T23:23:37.610916792Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:23:37.610973 env[1119]: time="2024-02-08T23:23:37.610936970Z" level=info msg="Start streaming server" Feb 8 23:23:37.624675 locksmithd[1135]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:23:37.633633 tar[1115]: ./ptp Feb 8 23:23:37.667952 tar[1115]: ./vlan Feb 8 23:23:37.701011 tar[1115]: ./host-device Feb 8 23:23:37.732815 tar[1115]: ./tuning Feb 8 23:23:37.761101 tar[1115]: ./vrf Feb 8 23:23:37.790781 tar[1115]: ./sbr Feb 8 23:23:37.819962 tar[1115]: ./tap Feb 8 23:23:37.853839 tar[1115]: ./dhcp Feb 8 23:23:37.932865 systemd[1]: Finished prepare-critools.service. Feb 8 23:23:37.938631 tar[1115]: ./static Feb 8 23:23:37.959749 tar[1115]: ./firewall Feb 8 23:23:37.991948 tar[1115]: ./macvlan Feb 8 23:23:38.021215 tar[1115]: ./dummy Feb 8 23:23:38.050091 tar[1115]: ./bridge Feb 8 23:23:38.081669 tar[1115]: ./ipvlan Feb 8 23:23:38.085531 systemd-networkd[1025]: eth0: Gained IPv6LL Feb 8 23:23:38.111799 tar[1115]: ./portmap Feb 8 23:23:38.139596 tar[1115]: ./host-local Feb 8 23:23:38.171017 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:23:38.276660 sshd_keygen[1113]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:23:38.293422 systemd[1]: Finished sshd-keygen.service. Feb 8 23:23:38.295527 systemd[1]: Starting issuegen.service... Feb 8 23:23:38.299977 systemd[1]: issuegen.service: Deactivated successfully. 
Feb 8 23:23:38.300088 systemd[1]: Finished issuegen.service. Feb 8 23:23:38.301696 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:23:38.305853 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:23:38.308138 systemd[1]: Started getty@tty1.service. Feb 8 23:23:38.309718 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:23:38.310566 systemd[1]: Reached target getty.target. Feb 8 23:23:38.311241 systemd[1]: Reached target multi-user.target. Feb 8 23:23:38.312825 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:23:38.318830 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:23:38.318964 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:23:38.319850 systemd[1]: Startup finished in 530ms (kernel) + 5.880s (initrd) + 5.790s (userspace) = 12.202s. Feb 8 23:23:46.940940 systemd[1]: Created slice system-sshd.slice. Feb 8 23:23:46.941748 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:38974.service. Feb 8 23:23:46.974233 sshd[1179]: Accepted publickey for core from 10.0.0.1 port 38974 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:46.975659 sshd[1179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:46.984321 systemd-logind[1105]: New session 1 of user core. Feb 8 23:23:46.985333 systemd[1]: Created slice user-500.slice. Feb 8 23:23:46.986536 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:23:46.994775 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:23:46.996350 systemd[1]: Starting user@500.service... Feb 8 23:23:46.998731 (systemd)[1182]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:47.064078 systemd[1182]: Queued start job for default target default.target. Feb 8 23:23:47.064498 systemd[1182]: Reached target paths.target. Feb 8 23:23:47.064516 systemd[1182]: Reached target sockets.target. 
Feb 8 23:23:47.064527 systemd[1182]: Reached target timers.target. Feb 8 23:23:47.064538 systemd[1182]: Reached target basic.target. Feb 8 23:23:47.064569 systemd[1182]: Reached target default.target. Feb 8 23:23:47.064599 systemd[1182]: Startup finished in 60ms. Feb 8 23:23:47.064674 systemd[1]: Started user@500.service. Feb 8 23:23:47.065849 systemd[1]: Started session-1.scope. Feb 8 23:23:47.116270 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:38988.service. Feb 8 23:23:47.144975 sshd[1191]: Accepted publickey for core from 10.0.0.1 port 38988 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:47.146215 sshd[1191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:47.149810 systemd-logind[1105]: New session 2 of user core. Feb 8 23:23:47.150504 systemd[1]: Started session-2.scope. Feb 8 23:23:47.204198 sshd[1191]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:47.206906 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:38988.service: Deactivated successfully. Feb 8 23:23:47.207524 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:23:47.208076 systemd-logind[1105]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:23:47.209124 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:39004.service. Feb 8 23:23:47.209898 systemd-logind[1105]: Removed session 2. Feb 8 23:23:47.238272 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 39004 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:47.239253 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:47.242790 systemd-logind[1105]: New session 3 of user core. Feb 8 23:23:47.243906 systemd[1]: Started session-3.scope. Feb 8 23:23:47.293008 sshd[1197]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:47.296202 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:39004.service: Deactivated successfully. 
Feb 8 23:23:47.296852 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:23:47.297433 systemd-logind[1105]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:23:47.298646 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:39006.service. Feb 8 23:23:47.299362 systemd-logind[1105]: Removed session 3. Feb 8 23:23:47.328093 sshd[1203]: Accepted publickey for core from 10.0.0.1 port 39006 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:47.329167 sshd[1203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:47.332377 systemd-logind[1105]: New session 4 of user core. Feb 8 23:23:47.333386 systemd[1]: Started session-4.scope. Feb 8 23:23:47.386451 sshd[1203]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:47.388606 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:39006.service: Deactivated successfully. Feb 8 23:23:47.389036 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:23:47.389491 systemd-logind[1105]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:23:47.390191 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:39008.service. Feb 8 23:23:47.390886 systemd-logind[1105]: Removed session 4. Feb 8 23:23:47.422008 sshd[1209]: Accepted publickey for core from 10.0.0.1 port 39008 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:23:47.422981 sshd[1209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:23:47.425845 systemd-logind[1105]: New session 5 of user core. Feb 8 23:23:47.426700 systemd[1]: Started session-5.scope. Feb 8 23:23:47.478757 sudo[1213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:23:47.478909 sudo[1213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:23:47.989992 systemd[1]: Reloading. 
Feb 8 23:23:48.048235 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-02-08T23:23:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:23:48.048770 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2024-02-08T23:23:48Z" level=info msg="torcx already run" Feb 8 23:23:48.109692 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:23:48.109710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:23:48.128237 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:23:48.195825 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:23:48.200793 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:23:48.201161 systemd[1]: Reached target network-online.target. Feb 8 23:23:48.202258 systemd[1]: Started kubelet.service. Feb 8 23:23:48.212422 systemd[1]: Starting coreos-metadata.service... Feb 8 23:23:48.217955 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 8 23:23:48.218071 systemd[1]: Finished coreos-metadata.service. 
Feb 8 23:23:48.253739 kubelet[1284]: E0208 23:23:48.253617 1284 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:23:48.256013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:23:48.256155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:23:48.390349 systemd[1]: Stopped kubelet.service. Feb 8 23:23:48.406945 systemd[1]: Reloading. Feb 8 23:23:48.468602 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2024-02-08T23:23:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:23:48.468629 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2024-02-08T23:23:48Z" level=info msg="torcx already run" Feb 8 23:23:48.524712 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:23:48.524728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:23:48.542914 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:23:48.616056 systemd[1]: Started kubelet.service. 
Feb 8 23:23:48.657934 kubelet[1392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:23:48.657934 kubelet[1392]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:23:48.657934 kubelet[1392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:23:48.658273 kubelet[1392]: I0208 23:23:48.657968 1392 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:23:48.925024 kubelet[1392]: I0208 23:23:48.924908 1392 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:23:48.925024 kubelet[1392]: I0208 23:23:48.924939 1392 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:23:48.925200 kubelet[1392]: I0208 23:23:48.925164 1392 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:23:48.926887 kubelet[1392]: I0208 23:23:48.926840 1392 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:23:48.930269 kubelet[1392]: I0208 23:23:48.930243 1392 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:23:48.930460 kubelet[1392]: I0208 23:23:48.930443 1392 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:23:48.930525 kubelet[1392]: I0208 23:23:48.930512 1392 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:23:48.930615 kubelet[1392]: I0208 23:23:48.930544 1392 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:23:48.930615 kubelet[1392]: I0208 23:23:48.930554 1392 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:23:48.930660 kubelet[1392]: I0208 23:23:48.930625 1392 state_mem.go:36] "Initialized new in-memory state store" Feb 8 
23:23:48.934815 kubelet[1392]: I0208 23:23:48.934791 1392 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:23:48.934815 kubelet[1392]: I0208 23:23:48.934816 1392 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:23:48.934890 kubelet[1392]: I0208 23:23:48.934834 1392 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:23:48.934890 kubelet[1392]: I0208 23:23:48.934850 1392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:23:48.935093 kubelet[1392]: E0208 23:23:48.935074 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:48.935093 kubelet[1392]: E0208 23:23:48.935096 1392 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:48.935420 kubelet[1392]: I0208 23:23:48.935388 1392 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:23:48.935783 kubelet[1392]: W0208 23:23:48.935765 1392 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:23:48.936321 kubelet[1392]: I0208 23:23:48.936297 1392 server.go:1168] "Started kubelet" Feb 8 23:23:48.936526 kubelet[1392]: I0208 23:23:48.936495 1392 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:23:48.936725 kubelet[1392]: I0208 23:23:48.936687 1392 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:23:48.937044 kubelet[1392]: E0208 23:23:48.937031 1392 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:23:48.937126 kubelet[1392]: E0208 23:23:48.937108 1392 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:23:48.937562 kubelet[1392]: I0208 23:23:48.937546 1392 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:23:48.938716 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 8 23:23:48.938876 kubelet[1392]: I0208 23:23:48.938847 1392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:23:48.939116 kubelet[1392]: I0208 23:23:48.939086 1392 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 8 23:23:48.939697 kubelet[1392]: E0208 23:23:48.939675 1392 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.86\" not found" Feb 8 23:23:48.940061 kubelet[1392]: I0208 23:23:48.940039 1392 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:23:48.945696 kubelet[1392]: E0208 23:23:48.945668 1392 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.86\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 8 23:23:48.945891 kubelet[1392]: W0208 23:23:48.945757 1392 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.86" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:23:48.945891 kubelet[1392]: E0208 23:23:48.945781 1392 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.86" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 8 23:23:48.945977 kubelet[1392]: W0208 23:23:48.945900 1392 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:23:48.945977 kubelet[1392]: E0208 23:23:48.945921 1392 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:23:48.946141 kubelet[1392]: E0208 23:23:48.946022 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa6f2820b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 936270347, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 936270347, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:48.946476 kubelet[1392]: W0208 23:23:48.946453 1392 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:23:48.946545 kubelet[1392]: E0208 23:23:48.946480 1392 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 8 23:23:48.947771 kubelet[1392]: E0208 23:23:48.947344 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa6ff16f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 937094903, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 937094903, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot 
create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:23:48.961544 kubelet[1392]: I0208 23:23:48.961502 1392 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:23:48.961544 kubelet[1392]: I0208 23:23:48.961518 1392 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:23:48.961544 kubelet[1392]: I0208 23:23:48.961530 1392 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:23:48.962269 kubelet[1392]: E0208 23:23:48.962200 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86ba869", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.86 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 960987241, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 960987241, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:48.962908 kubelet[1392]: E0208 23:23:48.962848 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86bdabd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.86 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961000125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961000125, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:48.963518 kubelet[1392]: E0208 23:23:48.963470 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86beed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.86 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961005265, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961005265, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.041186 kubelet[1392]: I0208 23:23:49.041147 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.86" Feb 8 23:23:49.042874 kubelet[1392]: E0208 23:23:49.042844 1392 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.86" Feb 8 23:23:49.043052 kubelet[1392]: E0208 23:23:49.042851 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86ba869", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.86 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 960987241, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 41079815, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86ba869" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.043719 kubelet[1392]: E0208 23:23:49.043630 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86bdabd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.86 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961000125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 41094853, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86bdabd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.044571 kubelet[1392]: E0208 23:23:49.044517 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86beed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.86 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961005265, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 41099892, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86beed1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:23:49.077944 kubelet[1392]: I0208 23:23:49.077898 1392 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:23:49.078925 kubelet[1392]: I0208 23:23:49.078908 1392 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:23:49.078993 kubelet[1392]: I0208 23:23:49.078943 1392 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 8 23:23:49.078993 kubelet[1392]: I0208 23:23:49.078970 1392 kubelet.go:2257] "Starting kubelet main sync loop" Feb 8 23:23:49.079061 kubelet[1392]: E0208 23:23:49.079023 1392 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:23:49.080394 kubelet[1392]: W0208 23:23:49.080362 1392 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:23:49.080457 kubelet[1392]: E0208 23:23:49.080400 1392 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 8 23:23:49.081048 kubelet[1392]: I0208 23:23:49.081028 1392 policy_none.go:49] "None policy: Start" Feb 8 23:23:49.081708 kubelet[1392]: I0208 23:23:49.081683 1392 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:23:49.081708 kubelet[1392]: I0208 23:23:49.081705 1392 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:23:49.147398 kubelet[1392]: E0208 23:23:49.147348 1392 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.86\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 8 23:23:49.156099 systemd[1]: Created slice kubepods.slice. Feb 8 23:23:49.159442 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 8 23:23:49.170420 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:23:49.171316 kubelet[1392]: I0208 23:23:49.171287 1392 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:23:49.171571 kubelet[1392]: I0208 23:23:49.171524 1392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:23:49.172241 kubelet[1392]: E0208 23:23:49.172226 1392 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.86\" not found" Feb 8 23:23:49.179089 kubelet[1392]: E0208 23:23:49.178963 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfb5570097", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 177737367, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 177737367, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace 
"default"' (will not retry!) Feb 8 23:23:49.243992 kubelet[1392]: I0208 23:23:49.243958 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.86" Feb 8 23:23:49.245078 kubelet[1392]: E0208 23:23:49.245051 1392 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.86" Feb 8 23:23:49.245336 kubelet[1392]: E0208 23:23:49.245257 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86ba869", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.86 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 960987241, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 243903479, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86ba869" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.246209 kubelet[1392]: E0208 23:23:49.246117 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86bdabd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.86 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961000125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 243914389, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86bdabd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.246950 kubelet[1392]: E0208 23:23:49.246892 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86beed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.86 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961005265, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 243917846, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86beed1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.549648 kubelet[1392]: E0208 23:23:49.549542 1392 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.86\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 8 23:23:49.646786 kubelet[1392]: I0208 23:23:49.646750 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.86" Feb 8 23:23:49.648150 kubelet[1392]: E0208 23:23:49.648116 1392 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.86" Feb 8 23:23:49.648150 kubelet[1392]: E0208 23:23:49.648078 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86ba869", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.86 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 960987241, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 646693540, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86ba869" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:23:49.648942 kubelet[1392]: E0208 23:23:49.648862 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86bdabd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.86 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961000125, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 646711895, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86bdabd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.649628 kubelet[1392]: E0208 23:23:49.649580 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86.17b206bfa86beed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.86", UID:"10.0.0.86", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.86 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.86"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 23, 48, 961005265, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 23, 49, 646715431, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.86.17b206bfa86beed1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:23:49.839686 kubelet[1392]: W0208 23:23:49.839576 1392 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:23:49.839686 kubelet[1392]: E0208 23:23:49.839610 1392 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 8 23:23:49.927041 kubelet[1392]: I0208 23:23:49.926994 1392 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 8 23:23:49.935193 kubelet[1392]: E0208 23:23:49.935146 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:50.292129 kubelet[1392]: E0208 23:23:50.292008 1392 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.86" not found Feb 8 23:23:50.352666 kubelet[1392]: E0208 23:23:50.352634 1392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.86\" not found" node="10.0.0.86" Feb 8 23:23:50.449774 kubelet[1392]: I0208 23:23:50.449730 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.86" Feb 8 23:23:50.452740 kubelet[1392]: I0208 23:23:50.452717 1392 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.86" Feb 8 23:23:50.460717 kubelet[1392]: I0208 23:23:50.460699 1392 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 8 23:23:50.460996 env[1119]: time="2024-02-08T23:23:50.460947907Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 8 23:23:50.461274 kubelet[1392]: I0208 23:23:50.461102 1392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 8 23:23:50.627393 sudo[1213]: pam_unix(sudo:session): session closed for user root Feb 8 23:23:50.628695 sshd[1209]: pam_unix(sshd:session): session closed for user core Feb 8 23:23:50.630518 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:39008.service: Deactivated successfully. Feb 8 23:23:50.631157 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:23:50.631684 systemd-logind[1105]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:23:50.632379 systemd-logind[1105]: Removed session 5. Feb 8 23:23:50.935408 kubelet[1392]: E0208 23:23:50.935279 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:50.935408 kubelet[1392]: I0208 23:23:50.935287 1392 apiserver.go:52] "Watching apiserver" Feb 8 23:23:50.937972 kubelet[1392]: I0208 23:23:50.937954 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:23:50.938070 kubelet[1392]: I0208 23:23:50.938057 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:23:50.940667 kubelet[1392]: I0208 23:23:50.940636 1392 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 8 23:23:50.942836 systemd[1]: Created slice kubepods-burstable-podf16fc3b6_f120_4dc7_a106_2998161b5be3.slice. 
Feb 8 23:23:50.950249 kubelet[1392]: I0208 23:23:50.950229 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-kernel\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950343 kubelet[1392]: I0208 23:23:50.950278 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-run\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950343 kubelet[1392]: I0208 23:23:50.950305 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-bpf-maps\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950343 kubelet[1392]: I0208 23:23:50.950329 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-etc-cni-netd\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950435 kubelet[1392]: I0208 23:23:50.950357 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-xtables-lock\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950435 kubelet[1392]: I0208 23:23:50.950396 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f16fc3b6-f120-4dc7-a106-2998161b5be3-clustermesh-secrets\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950435 kubelet[1392]: I0208 23:23:50.950427 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-cgroup\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950508 kubelet[1392]: I0208 23:23:50.950449 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-lib-modules\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950508 kubelet[1392]: I0208 23:23:50.950475 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5xh\" (UniqueName: \"kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-kube-api-access-hl5xh\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950508 kubelet[1392]: I0208 23:23:50.950507 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5110e93-32d8-491c-8379-fab65f2d2e8f-xtables-lock\") pod \"kube-proxy-r4v6g\" (UID: \"a5110e93-32d8-491c-8379-fab65f2d2e8f\") " pod="kube-system/kube-proxy-r4v6g" Feb 8 23:23:50.950446 systemd[1]: Created slice kubepods-besteffort-poda5110e93_32d8_491c_8379_fab65f2d2e8f.slice. 
Feb 8 23:23:50.950622 kubelet[1392]: I0208 23:23:50.950530 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5110e93-32d8-491c-8379-fab65f2d2e8f-lib-modules\") pod \"kube-proxy-r4v6g\" (UID: \"a5110e93-32d8-491c-8379-fab65f2d2e8f\") " pod="kube-system/kube-proxy-r4v6g" Feb 8 23:23:50.950622 kubelet[1392]: I0208 23:23:50.950557 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-config-path\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950622 kubelet[1392]: I0208 23:23:50.950578 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdwc\" (UniqueName: \"kubernetes.io/projected/a5110e93-32d8-491c-8379-fab65f2d2e8f-kube-api-access-rtdwc\") pod \"kube-proxy-r4v6g\" (UID: \"a5110e93-32d8-491c-8379-fab65f2d2e8f\") " pod="kube-system/kube-proxy-r4v6g" Feb 8 23:23:50.950622 kubelet[1392]: I0208 23:23:50.950597 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-hostproc\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950622 kubelet[1392]: I0208 23:23:50.950615 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cni-path\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950732 kubelet[1392]: I0208 23:23:50.950634 1392 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-net\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950732 kubelet[1392]: I0208 23:23:50.950653 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-hubble-tls\") pod \"cilium-xdj6p\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " pod="kube-system/cilium-xdj6p" Feb 8 23:23:50.950732 kubelet[1392]: I0208 23:23:50.950676 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5110e93-32d8-491c-8379-fab65f2d2e8f-kube-proxy\") pod \"kube-proxy-r4v6g\" (UID: \"a5110e93-32d8-491c-8379-fab65f2d2e8f\") " pod="kube-system/kube-proxy-r4v6g" Feb 8 23:23:50.950732 kubelet[1392]: I0208 23:23:50.950685 1392 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:23:51.249673 kubelet[1392]: E0208 23:23:51.249549 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:51.250276 env[1119]: time="2024-02-08T23:23:51.250235070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdj6p,Uid:f16fc3b6-f120-4dc7-a106-2998161b5be3,Namespace:kube-system,Attempt:0,}" Feb 8 23:23:51.260531 kubelet[1392]: E0208 23:23:51.260513 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:51.260860 env[1119]: time="2024-02-08T23:23:51.260831689Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-r4v6g,Uid:a5110e93-32d8-491c-8379-fab65f2d2e8f,Namespace:kube-system,Attempt:0,}" Feb 8 23:23:51.936244 kubelet[1392]: E0208 23:23:51.936194 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:51.943534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134097156.mount: Deactivated successfully. Feb 8 23:23:51.949322 env[1119]: time="2024-02-08T23:23:51.949285710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.951614 env[1119]: time="2024-02-08T23:23:51.951584159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.952502 env[1119]: time="2024-02-08T23:23:51.952455814Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.954101 env[1119]: time="2024-02-08T23:23:51.954070141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.955445 env[1119]: time="2024-02-08T23:23:51.955426243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.957092 env[1119]: time="2024-02-08T23:23:51.957062371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.959093 
env[1119]: time="2024-02-08T23:23:51.959070216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.959853 env[1119]: time="2024-02-08T23:23:51.959811826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:51.975281 env[1119]: time="2024-02-08T23:23:51.975198597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:51.975281 env[1119]: time="2024-02-08T23:23:51.975243831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:51.975281 env[1119]: time="2024-02-08T23:23:51.975255654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:51.975870 env[1119]: time="2024-02-08T23:23:51.975818749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e pid=1452 runtime=io.containerd.runc.v2 Feb 8 23:23:51.978314 env[1119]: time="2024-02-08T23:23:51.978250669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:23:51.978314 env[1119]: time="2024-02-08T23:23:51.978288961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:23:51.978532 env[1119]: time="2024-02-08T23:23:51.978484147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:23:51.978814 env[1119]: time="2024-02-08T23:23:51.978754704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae60e0687eb229a9fb88539c265af009b15ae0a687a0b06659eee85789d27e02 pid=1463 runtime=io.containerd.runc.v2 Feb 8 23:23:51.989472 systemd[1]: Started cri-containerd-e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e.scope. Feb 8 23:23:51.994487 systemd[1]: Started cri-containerd-ae60e0687eb229a9fb88539c265af009b15ae0a687a0b06659eee85789d27e02.scope. Feb 8 23:23:52.014780 env[1119]: time="2024-02-08T23:23:52.014721963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4v6g,Uid:a5110e93-32d8-491c-8379-fab65f2d2e8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae60e0687eb229a9fb88539c265af009b15ae0a687a0b06659eee85789d27e02\"" Feb 8 23:23:52.016201 kubelet[1392]: E0208 23:23:52.016181 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:52.017018 env[1119]: time="2024-02-08T23:23:52.016982271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdj6p,Uid:f16fc3b6-f120-4dc7-a106-2998161b5be3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\"" Feb 8 23:23:52.017500 env[1119]: time="2024-02-08T23:23:52.017481547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 8 23:23:52.017788 kubelet[1392]: E0208 23:23:52.017772 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:52.936579 kubelet[1392]: E0208 23:23:52.936522 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 8 23:23:53.213955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240591811.mount: Deactivated successfully. Feb 8 23:23:53.752810 env[1119]: time="2024-02-08T23:23:53.752749434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:53.754544 env[1119]: time="2024-02-08T23:23:53.754503613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:53.755957 env[1119]: time="2024-02-08T23:23:53.755906964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:53.757092 env[1119]: time="2024-02-08T23:23:53.757059696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:23:53.757341 env[1119]: time="2024-02-08T23:23:53.757287833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 8 23:23:53.758162 env[1119]: time="2024-02-08T23:23:53.758104224Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:23:53.759412 env[1119]: time="2024-02-08T23:23:53.759355230Z" level=info msg="CreateContainer within sandbox \"ae60e0687eb229a9fb88539c265af009b15ae0a687a0b06659eee85789d27e02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:23:53.893005 env[1119]: time="2024-02-08T23:23:53.892945481Z" level=info 
msg="CreateContainer within sandbox \"ae60e0687eb229a9fb88539c265af009b15ae0a687a0b06659eee85789d27e02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c2d4c3e03beac1d849bfa57fc23f41a2ef2eff0e3055367923944f3a7ae8c2b\"" Feb 8 23:23:53.893657 env[1119]: time="2024-02-08T23:23:53.893631448Z" level=info msg="StartContainer for \"2c2d4c3e03beac1d849bfa57fc23f41a2ef2eff0e3055367923944f3a7ae8c2b\"" Feb 8 23:23:53.910091 systemd[1]: Started cri-containerd-2c2d4c3e03beac1d849bfa57fc23f41a2ef2eff0e3055367923944f3a7ae8c2b.scope. Feb 8 23:23:53.933083 env[1119]: time="2024-02-08T23:23:53.933043104Z" level=info msg="StartContainer for \"2c2d4c3e03beac1d849bfa57fc23f41a2ef2eff0e3055367923944f3a7ae8c2b\" returns successfully" Feb 8 23:23:53.937516 kubelet[1392]: E0208 23:23:53.937489 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:54.090187 kubelet[1392]: E0208 23:23:54.090083 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:54.097862 kubelet[1392]: I0208 23:23:54.097828 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r4v6g" podStartSLOduration=2.357052783 podCreationTimestamp="2024-02-08 23:23:50 +0000 UTC" firstStartedPulling="2024-02-08 23:23:52.017062942 +0000 UTC m=+3.397862672" lastFinishedPulling="2024-02-08 23:23:53.75779786 +0000 UTC m=+5.138597590" observedRunningTime="2024-02-08 23:23:54.097639453 +0000 UTC m=+5.478439213" watchObservedRunningTime="2024-02-08 23:23:54.097787701 +0000 UTC m=+5.478587431" Feb 8 23:23:54.938579 kubelet[1392]: E0208 23:23:54.938536 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:55.090836 kubelet[1392]: E0208 23:23:55.090810 1392 dns.go:158] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:23:55.939511 kubelet[1392]: E0208 23:23:55.939435 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:56.940461 kubelet[1392]: E0208 23:23:56.940400 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:57.941171 kubelet[1392]: E0208 23:23:57.941133 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:58.573693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851911030.mount: Deactivated successfully. Feb 8 23:23:58.942256 kubelet[1392]: E0208 23:23:58.942106 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:23:59.943178 kubelet[1392]: E0208 23:23:59.943122 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:00.943795 kubelet[1392]: E0208 23:24:00.943730 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:01.822966 env[1119]: time="2024-02-08T23:24:01.822892951Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:01.824593 env[1119]: time="2024-02-08T23:24:01.824556600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:01.825989 env[1119]: 
time="2024-02-08T23:24:01.825959080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:01.826560 env[1119]: time="2024-02-08T23:24:01.826515393Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:24:01.828298 env[1119]: time="2024-02-08T23:24:01.828267548Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:24:01.838588 env[1119]: time="2024-02-08T23:24:01.838546531Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\"" Feb 8 23:24:01.839036 env[1119]: time="2024-02-08T23:24:01.838986817Z" level=info msg="StartContainer for \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\"" Feb 8 23:24:01.856411 systemd[1]: Started cri-containerd-04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913.scope. Feb 8 23:24:01.876552 env[1119]: time="2024-02-08T23:24:01.876512257Z" level=info msg="StartContainer for \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\" returns successfully" Feb 8 23:24:01.882649 systemd[1]: cri-containerd-04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913.scope: Deactivated successfully. 
Feb 8 23:24:01.944655 kubelet[1392]: E0208 23:24:01.944624 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:02.100332 kubelet[1392]: E0208 23:24:02.099908 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:02.522989 env[1119]: time="2024-02-08T23:24:02.522861532Z" level=info msg="shim disconnected" id=04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913 Feb 8 23:24:02.522989 env[1119]: time="2024-02-08T23:24:02.522903671Z" level=warning msg="cleaning up after shim disconnected" id=04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913 namespace=k8s.io Feb 8 23:24:02.522989 env[1119]: time="2024-02-08T23:24:02.522912147Z" level=info msg="cleaning up dead shim" Feb 8 23:24:02.528238 env[1119]: time="2024-02-08T23:24:02.528180204Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1739 runtime=io.containerd.runc.v2\n" Feb 8 23:24:02.835133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913-rootfs.mount: Deactivated successfully. 
Feb 8 23:24:02.944831 kubelet[1392]: E0208 23:24:02.944763 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:03.102408 kubelet[1392]: E0208 23:24:03.102299 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:03.103919 env[1119]: time="2024-02-08T23:24:03.103881644Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:24:03.122274 env[1119]: time="2024-02-08T23:24:03.122215520Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\"" Feb 8 23:24:03.122688 env[1119]: time="2024-02-08T23:24:03.122663390Z" level=info msg="StartContainer for \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\"" Feb 8 23:24:03.137734 systemd[1]: Started cri-containerd-41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76.scope. Feb 8 23:24:03.158598 env[1119]: time="2024-02-08T23:24:03.158539498Z" level=info msg="StartContainer for \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\" returns successfully" Feb 8 23:24:03.166337 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:24:03.166631 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:24:03.166846 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:24:03.168359 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:24:03.168704 systemd[1]: cri-containerd-41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76.scope: Deactivated successfully. 
Feb 8 23:24:03.175825 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:24:03.188756 env[1119]: time="2024-02-08T23:24:03.188705339Z" level=info msg="shim disconnected" id=41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76 Feb 8 23:24:03.188756 env[1119]: time="2024-02-08T23:24:03.188755783Z" level=warning msg="cleaning up after shim disconnected" id=41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76 namespace=k8s.io Feb 8 23:24:03.188933 env[1119]: time="2024-02-08T23:24:03.188764720Z" level=info msg="cleaning up dead shim" Feb 8 23:24:03.194816 env[1119]: time="2024-02-08T23:24:03.194757567Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1803 runtime=io.containerd.runc.v2\n" Feb 8 23:24:03.835073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76-rootfs.mount: Deactivated successfully. Feb 8 23:24:03.945247 kubelet[1392]: E0208 23:24:03.945220 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:04.104889 kubelet[1392]: E0208 23:24:04.104810 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:04.109225 env[1119]: time="2024-02-08T23:24:04.109189326Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:24:04.123326 env[1119]: time="2024-02-08T23:24:04.123286638Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\"" Feb 8 
23:24:04.123762 env[1119]: time="2024-02-08T23:24:04.123734298Z" level=info msg="StartContainer for \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\"" Feb 8 23:24:04.137526 systemd[1]: Started cri-containerd-29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b.scope. Feb 8 23:24:04.159872 env[1119]: time="2024-02-08T23:24:04.159806503Z" level=info msg="StartContainer for \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\" returns successfully" Feb 8 23:24:04.160425 systemd[1]: cri-containerd-29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b.scope: Deactivated successfully. Feb 8 23:24:04.177786 env[1119]: time="2024-02-08T23:24:04.177742693Z" level=info msg="shim disconnected" id=29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b Feb 8 23:24:04.177786 env[1119]: time="2024-02-08T23:24:04.177781536Z" level=warning msg="cleaning up after shim disconnected" id=29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b namespace=k8s.io Feb 8 23:24:04.177786 env[1119]: time="2024-02-08T23:24:04.177789191Z" level=info msg="cleaning up dead shim" Feb 8 23:24:04.184163 env[1119]: time="2024-02-08T23:24:04.184113889Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1859 runtime=io.containerd.runc.v2\n" Feb 8 23:24:04.835046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b-rootfs.mount: Deactivated successfully. 
Feb 8 23:24:04.946150 kubelet[1392]: E0208 23:24:04.946121 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:05.108070 kubelet[1392]: E0208 23:24:05.107969 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:05.109576 env[1119]: time="2024-02-08T23:24:05.109540514Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:24:05.124519 env[1119]: time="2024-02-08T23:24:05.124469286Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\"" Feb 8 23:24:05.125046 env[1119]: time="2024-02-08T23:24:05.125002596Z" level=info msg="StartContainer for \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\"" Feb 8 23:24:05.144305 systemd[1]: Started cri-containerd-7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd.scope. Feb 8 23:24:05.163823 systemd[1]: cri-containerd-7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd.scope: Deactivated successfully. 
Feb 8 23:24:05.167085 env[1119]: time="2024-02-08T23:24:05.167037020Z" level=info msg="StartContainer for \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\" returns successfully" Feb 8 23:24:05.186417 env[1119]: time="2024-02-08T23:24:05.186362485Z" level=info msg="shim disconnected" id=7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd Feb 8 23:24:05.186417 env[1119]: time="2024-02-08T23:24:05.186414112Z" level=warning msg="cleaning up after shim disconnected" id=7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd namespace=k8s.io Feb 8 23:24:05.186417 env[1119]: time="2024-02-08T23:24:05.186422488Z" level=info msg="cleaning up dead shim" Feb 8 23:24:05.193205 env[1119]: time="2024-02-08T23:24:05.193144401Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1913 runtime=io.containerd.runc.v2\n" Feb 8 23:24:05.835027 systemd[1]: run-containerd-runc-k8s.io-7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd-runc.Oepwc0.mount: Deactivated successfully. Feb 8 23:24:05.835130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd-rootfs.mount: Deactivated successfully. 
Feb 8 23:24:05.947088 kubelet[1392]: E0208 23:24:05.947021 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:06.111776 kubelet[1392]: E0208 23:24:06.111661 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:06.113680 env[1119]: time="2024-02-08T23:24:06.113631675Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:24:06.128473 env[1119]: time="2024-02-08T23:24:06.128412088Z" level=info msg="CreateContainer within sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\"" Feb 8 23:24:06.128857 env[1119]: time="2024-02-08T23:24:06.128828920Z" level=info msg="StartContainer for \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\"" Feb 8 23:24:06.147998 systemd[1]: Started cri-containerd-70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3.scope. 
Feb 8 23:24:06.175188 env[1119]: time="2024-02-08T23:24:06.175141816Z" level=info msg="StartContainer for \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" returns successfully" Feb 8 23:24:06.289852 kubelet[1392]: I0208 23:24:06.289823 1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:24:06.502395 kernel: Initializing XFRM netlink socket Feb 8 23:24:06.947506 kubelet[1392]: E0208 23:24:06.947460 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:07.116577 kubelet[1392]: E0208 23:24:07.116550 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:07.127497 kubelet[1392]: I0208 23:24:07.127456 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xdj6p" podStartSLOduration=7.318746177 podCreationTimestamp="2024-02-08 23:23:50 +0000 UTC" firstStartedPulling="2024-02-08 23:23:52.018125635 +0000 UTC m=+3.398925365" lastFinishedPulling="2024-02-08 23:24:01.82679117 +0000 UTC m=+13.207590910" observedRunningTime="2024-02-08 23:24:07.127253886 +0000 UTC m=+18.508053636" watchObservedRunningTime="2024-02-08 23:24:07.127411722 +0000 UTC m=+18.508211442" Feb 8 23:24:07.948415 kubelet[1392]: E0208 23:24:07.948342 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:08.118192 kubelet[1392]: E0208 23:24:08.117579 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:08.124165 systemd-networkd[1025]: cilium_host: Link UP Feb 8 23:24:08.124309 systemd-networkd[1025]: cilium_net: Link UP Feb 8 23:24:08.124786 systemd-networkd[1025]: cilium_net: Gained 
carrier Feb 8 23:24:08.125509 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:24:08.125585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:24:08.125677 systemd-networkd[1025]: cilium_host: Gained carrier Feb 8 23:24:08.186446 systemd-networkd[1025]: cilium_vxlan: Link UP Feb 8 23:24:08.186457 systemd-networkd[1025]: cilium_vxlan: Gained carrier Feb 8 23:24:08.354393 kernel: NET: Registered PF_ALG protocol family Feb 8 23:24:08.806781 systemd-networkd[1025]: cilium_net: Gained IPv6LL Feb 8 23:24:08.810124 systemd-networkd[1025]: lxc_health: Link UP Feb 8 23:24:08.817999 systemd-networkd[1025]: lxc_health: Gained carrier Feb 8 23:24:08.818466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:24:08.933746 systemd-networkd[1025]: cilium_host: Gained IPv6LL Feb 8 23:24:08.935326 kubelet[1392]: E0208 23:24:08.935299 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:08.948518 kubelet[1392]: E0208 23:24:08.948490 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:09.119274 kubelet[1392]: E0208 23:24:09.119175 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:09.317548 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL Feb 8 23:24:09.949178 kubelet[1392]: E0208 23:24:09.949131 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:10.085500 systemd-networkd[1025]: lxc_health: Gained IPv6LL Feb 8 23:24:10.120699 kubelet[1392]: E0208 23:24:10.120682 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 8 23:24:10.950208 kubelet[1392]: E0208 23:24:10.950176 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:11.122043 kubelet[1392]: E0208 23:24:11.122014 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:11.950625 kubelet[1392]: E0208 23:24:11.950569 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:12.123026 kubelet[1392]: E0208 23:24:12.122991 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:12.130270 kubelet[1392]: I0208 23:24:12.130238 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:24:12.135138 systemd[1]: Created slice kubepods-besteffort-podf58c151c_b213_418b_bf06_3531e61e8a89.slice. 
Feb 8 23:24:12.169498 kubelet[1392]: I0208 23:24:12.169452 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdcz8\" (UniqueName: \"kubernetes.io/projected/f58c151c-b213-418b-bf06-3531e61e8a89-kube-api-access-wdcz8\") pod \"nginx-deployment-845c78c8b9-6rmlm\" (UID: \"f58c151c-b213-418b-bf06-3531e61e8a89\") " pod="default/nginx-deployment-845c78c8b9-6rmlm" Feb 8 23:24:12.437465 env[1119]: time="2024-02-08T23:24:12.437313966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-6rmlm,Uid:f58c151c-b213-418b-bf06-3531e61e8a89,Namespace:default,Attempt:0,}" Feb 8 23:24:12.464021 systemd-networkd[1025]: lxc6db37a57f2f2: Link UP Feb 8 23:24:12.470416 kernel: eth0: renamed from tmpcf69a Feb 8 23:24:12.477509 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:24:12.477556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6db37a57f2f2: link becomes ready Feb 8 23:24:12.477329 systemd-networkd[1025]: lxc6db37a57f2f2: Gained carrier Feb 8 23:24:12.951349 kubelet[1392]: E0208 23:24:12.951286 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:12.997632 env[1119]: time="2024-02-08T23:24:12.997568367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:12.997632 env[1119]: time="2024-02-08T23:24:12.997608414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:12.997632 env[1119]: time="2024-02-08T23:24:12.997618774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:12.997939 env[1119]: time="2024-02-08T23:24:12.997876518Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf69a3f474fa5dd8c3baaf391358def9dcf16c6908e5b131e761a4fb4d772319 pid=2469 runtime=io.containerd.runc.v2 Feb 8 23:24:13.009260 systemd[1]: Started cri-containerd-cf69a3f474fa5dd8c3baaf391358def9dcf16c6908e5b131e761a4fb4d772319.scope. Feb 8 23:24:13.019996 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:24:13.041192 env[1119]: time="2024-02-08T23:24:13.041134714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-6rmlm,Uid:f58c151c-b213-418b-bf06-3531e61e8a89,Namespace:default,Attempt:0,} returns sandbox id \"cf69a3f474fa5dd8c3baaf391358def9dcf16c6908e5b131e761a4fb4d772319\"" Feb 8 23:24:13.042704 env[1119]: time="2024-02-08T23:24:13.042670025Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:24:13.951923 kubelet[1392]: E0208 23:24:13.951876 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:14.053613 systemd-networkd[1025]: lxc6db37a57f2f2: Gained IPv6LL Feb 8 23:24:14.952551 kubelet[1392]: E0208 23:24:14.952504 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:15.953147 kubelet[1392]: E0208 23:24:15.953096 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:16.433469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052310217.mount: Deactivated successfully. 
Feb 8 23:24:16.954101 kubelet[1392]: E0208 23:24:16.954054 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:17.258810 env[1119]: time="2024-02-08T23:24:17.258706105Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:17.260670 env[1119]: time="2024-02-08T23:24:17.260623891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:17.262321 env[1119]: time="2024-02-08T23:24:17.262299956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:17.263905 env[1119]: time="2024-02-08T23:24:17.263879307Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:17.264467 env[1119]: time="2024-02-08T23:24:17.264442281Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:24:17.265896 env[1119]: time="2024-02-08T23:24:17.265864111Z" level=info msg="CreateContainer within sandbox \"cf69a3f474fa5dd8c3baaf391358def9dcf16c6908e5b131e761a4fb4d772319\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 8 23:24:17.284924 env[1119]: time="2024-02-08T23:24:17.284883009Z" level=info msg="CreateContainer within sandbox \"cf69a3f474fa5dd8c3baaf391358def9dcf16c6908e5b131e761a4fb4d772319\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"71e48992e9b96bfed5c01bc1192767c8a01c53c7534ed659fa6400785d5446ab\"" Feb 8 23:24:17.285442 env[1119]: time="2024-02-08T23:24:17.285336714Z" level=info msg="StartContainer for \"71e48992e9b96bfed5c01bc1192767c8a01c53c7534ed659fa6400785d5446ab\"" Feb 8 23:24:17.302051 systemd[1]: Started cri-containerd-71e48992e9b96bfed5c01bc1192767c8a01c53c7534ed659fa6400785d5446ab.scope. Feb 8 23:24:17.326245 env[1119]: time="2024-02-08T23:24:17.326201269Z" level=info msg="StartContainer for \"71e48992e9b96bfed5c01bc1192767c8a01c53c7534ed659fa6400785d5446ab\" returns successfully" Feb 8 23:24:17.433826 systemd[1]: run-containerd-runc-k8s.io-71e48992e9b96bfed5c01bc1192767c8a01c53c7534ed659fa6400785d5446ab-runc.GvnKJW.mount: Deactivated successfully. Feb 8 23:24:17.955200 kubelet[1392]: E0208 23:24:17.955142 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:18.143411 kubelet[1392]: I0208 23:24:18.143361 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-6rmlm" podStartSLOduration=1.9210410470000001 podCreationTimestamp="2024-02-08 23:24:12 +0000 UTC" firstStartedPulling="2024-02-08 23:24:13.042392083 +0000 UTC m=+24.423191813" lastFinishedPulling="2024-02-08 23:24:17.264676306 +0000 UTC m=+28.645476046" observedRunningTime="2024-02-08 23:24:18.143098357 +0000 UTC m=+29.523898087" watchObservedRunningTime="2024-02-08 23:24:18.14332528 +0000 UTC m=+29.524125010" Feb 8 23:24:18.955770 kubelet[1392]: E0208 23:24:18.955728 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:19.962234 kubelet[1392]: E0208 23:24:19.959307 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:20.959660 kubelet[1392]: E0208 23:24:20.959574 1392 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:21.960389 kubelet[1392]: E0208 23:24:21.960334 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:22.961249 kubelet[1392]: E0208 23:24:22.961201 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:23.124051 update_engine[1107]: I0208 23:24:23.124004 1107 update_attempter.cc:509] Updating boot flags... Feb 8 23:24:23.596050 kubelet[1392]: I0208 23:24:23.596011 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:24:23.600304 systemd[1]: Created slice kubepods-besteffort-pod9fe59855_a823_44b4_ba80_105b8cae7a26.slice. Feb 8 23:24:23.633514 kubelet[1392]: I0208 23:24:23.633482 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9fe59855-a823-44b4-ba80-105b8cae7a26-data\") pod \"nfs-server-provisioner-0\" (UID: \"9fe59855-a823-44b4-ba80-105b8cae7a26\") " pod="default/nfs-server-provisioner-0" Feb 8 23:24:23.633514 kubelet[1392]: I0208 23:24:23.633520 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrhz\" (UniqueName: \"kubernetes.io/projected/9fe59855-a823-44b4-ba80-105b8cae7a26-kube-api-access-pvrhz\") pod \"nfs-server-provisioner-0\" (UID: \"9fe59855-a823-44b4-ba80-105b8cae7a26\") " pod="default/nfs-server-provisioner-0" Feb 8 23:24:23.903319 env[1119]: time="2024-02-08T23:24:23.903178777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9fe59855-a823-44b4-ba80-105b8cae7a26,Namespace:default,Attempt:0,}" Feb 8 23:24:23.961791 kubelet[1392]: E0208 23:24:23.961744 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:24.241786 systemd-networkd[1025]: 
lxce7fec27a7c99: Link UP Feb 8 23:24:24.247402 kernel: eth0: renamed from tmpd9fec Feb 8 23:24:24.253438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:24:24.253484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce7fec27a7c99: link becomes ready Feb 8 23:24:24.253512 systemd-networkd[1025]: lxce7fec27a7c99: Gained carrier Feb 8 23:24:24.462999 env[1119]: time="2024-02-08T23:24:24.462938255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:24.462999 env[1119]: time="2024-02-08T23:24:24.462970566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:24.462999 env[1119]: time="2024-02-08T23:24:24.462979964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:24.463357 env[1119]: time="2024-02-08T23:24:24.463075735Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9fecf7d78bf3abcc7bda4a0e443fe6a032db6a3b5911c89d2ff0c2e5955664c pid=2608 runtime=io.containerd.runc.v2 Feb 8 23:24:24.472867 systemd[1]: Started cri-containerd-d9fecf7d78bf3abcc7bda4a0e443fe6a032db6a3b5911c89d2ff0c2e5955664c.scope. 
Feb 8 23:24:24.482036 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:24:24.501966 env[1119]: time="2024-02-08T23:24:24.501860994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9fe59855-a823-44b4-ba80-105b8cae7a26,Namespace:default,Attempt:0,} returns sandbox id \"d9fecf7d78bf3abcc7bda4a0e443fe6a032db6a3b5911c89d2ff0c2e5955664c\"" Feb 8 23:24:24.503344 env[1119]: time="2024-02-08T23:24:24.503319107Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 8 23:24:24.962644 kubelet[1392]: E0208 23:24:24.962602 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:25.963161 kubelet[1392]: E0208 23:24:25.963111 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:26.021584 systemd-networkd[1025]: lxce7fec27a7c99: Gained IPv6LL Feb 8 23:24:26.963956 kubelet[1392]: E0208 23:24:26.963902 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:27.964389 kubelet[1392]: E0208 23:24:27.964315 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:28.801446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250146455.mount: Deactivated successfully. 
Feb 8 23:24:28.935726 kubelet[1392]: E0208 23:24:28.935669 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:28.965193 kubelet[1392]: E0208 23:24:28.965132 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:29.965879 kubelet[1392]: E0208 23:24:29.965826 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:30.966528 kubelet[1392]: E0208 23:24:30.966491 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:31.723258 env[1119]: time="2024-02-08T23:24:31.723206459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:31.725228 env[1119]: time="2024-02-08T23:24:31.725179523Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:31.727329 env[1119]: time="2024-02-08T23:24:31.727306468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:31.729034 env[1119]: time="2024-02-08T23:24:31.729007209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:31.729746 env[1119]: time="2024-02-08T23:24:31.729713723Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image 
reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 8 23:24:31.731403 env[1119]: time="2024-02-08T23:24:31.731378886Z" level=info msg="CreateContainer within sandbox \"d9fecf7d78bf3abcc7bda4a0e443fe6a032db6a3b5911c89d2ff0c2e5955664c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 8 23:24:31.742057 env[1119]: time="2024-02-08T23:24:31.742027357Z" level=info msg="CreateContainer within sandbox \"d9fecf7d78bf3abcc7bda4a0e443fe6a032db6a3b5911c89d2ff0c2e5955664c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6a22f94fa04318afba46bdb124debc8378082eed89544ef5e279e65fda182612\"" Feb 8 23:24:31.742474 env[1119]: time="2024-02-08T23:24:31.742446809Z" level=info msg="StartContainer for \"6a22f94fa04318afba46bdb124debc8378082eed89544ef5e279e65fda182612\"" Feb 8 23:24:31.756972 systemd[1]: Started cri-containerd-6a22f94fa04318afba46bdb124debc8378082eed89544ef5e279e65fda182612.scope. Feb 8 23:24:31.778793 env[1119]: time="2024-02-08T23:24:31.777746346Z" level=info msg="StartContainer for \"6a22f94fa04318afba46bdb124debc8378082eed89544ef5e279e65fda182612\" returns successfully" Feb 8 23:24:31.966693 kubelet[1392]: E0208 23:24:31.966605 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:32.181398 kubelet[1392]: I0208 23:24:32.181273 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.9542033619999999 podCreationTimestamp="2024-02-08 23:24:23 +0000 UTC" firstStartedPulling="2024-02-08 23:24:24.502943084 +0000 UTC m=+35.883742814" lastFinishedPulling="2024-02-08 23:24:31.729970317 +0000 UTC m=+43.110770057" observedRunningTime="2024-02-08 23:24:32.181061386 +0000 UTC m=+43.561861106" watchObservedRunningTime="2024-02-08 23:24:32.181230605 +0000 UTC m=+43.562030335" Feb 8 23:24:32.967499 kubelet[1392]: E0208 
23:24:32.967440 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:33.968538 kubelet[1392]: E0208 23:24:33.968484 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:34.969190 kubelet[1392]: E0208 23:24:34.969145 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:35.969949 kubelet[1392]: E0208 23:24:35.969890 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:36.970882 kubelet[1392]: E0208 23:24:36.970846 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:37.971669 kubelet[1392]: E0208 23:24:37.971613 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:38.972288 kubelet[1392]: E0208 23:24:38.972245 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:39.973424 kubelet[1392]: E0208 23:24:39.973340 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:40.973951 kubelet[1392]: E0208 23:24:40.973907 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:41.411219 kubelet[1392]: I0208 23:24:41.411103 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:24:41.415012 systemd[1]: Created slice kubepods-besteffort-podd8b63645_a33f_4568_b072_a75b1b543c07.slice. 
Feb 8 23:24:41.427697 kubelet[1392]: I0208 23:24:41.427660 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9ea8de54-fc3e-469b-a483-799a9f20822f\" (UniqueName: \"kubernetes.io/nfs/d8b63645-a33f-4568-b072-a75b1b543c07-pvc-9ea8de54-fc3e-469b-a483-799a9f20822f\") pod \"test-pod-1\" (UID: \"d8b63645-a33f-4568-b072-a75b1b543c07\") " pod="default/test-pod-1" Feb 8 23:24:41.427785 kubelet[1392]: I0208 23:24:41.427707 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv7qd\" (UniqueName: \"kubernetes.io/projected/d8b63645-a33f-4568-b072-a75b1b543c07-kube-api-access-lv7qd\") pod \"test-pod-1\" (UID: \"d8b63645-a33f-4568-b072-a75b1b543c07\") " pod="default/test-pod-1" Feb 8 23:24:41.548400 kernel: FS-Cache: Loaded Feb 8 23:24:41.584993 kernel: RPC: Registered named UNIX socket transport module. Feb 8 23:24:41.585172 kernel: RPC: Registered udp transport module. Feb 8 23:24:41.585199 kernel: RPC: Registered tcp transport module. Feb 8 23:24:41.585537 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 8 23:24:41.631398 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 8 23:24:41.821404 kernel: NFS: Registering the id_resolver key type Feb 8 23:24:41.821566 kernel: Key type id_resolver registered Feb 8 23:24:41.821589 kernel: Key type id_legacy registered Feb 8 23:24:41.974531 kubelet[1392]: E0208 23:24:41.974466 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:42.101188 nfsidmap[2724]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 8 23:24:42.104041 nfsidmap[2727]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 8 23:24:42.317666 env[1119]: time="2024-02-08T23:24:42.317604911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d8b63645-a33f-4568-b072-a75b1b543c07,Namespace:default,Attempt:0,}" Feb 8 23:24:42.341332 systemd-networkd[1025]: lxc41f83696477b: Link UP Feb 8 23:24:42.348391 kernel: eth0: renamed from tmpc0bd6 Feb 8 23:24:42.353414 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:24:42.353469 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc41f83696477b: link becomes ready Feb 8 23:24:42.353507 systemd-networkd[1025]: lxc41f83696477b: Gained carrier Feb 8 23:24:42.533974 env[1119]: time="2024-02-08T23:24:42.533876454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:42.533974 env[1119]: time="2024-02-08T23:24:42.533930125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:42.534173 env[1119]: time="2024-02-08T23:24:42.533942829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:42.534173 env[1119]: time="2024-02-08T23:24:42.534135501Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0bd6ed3ab7fa6c8b6fdc32a95d7f3c53298667eac533b1b0b7c69c05d9c399f pid=2759 runtime=io.containerd.runc.v2 Feb 8 23:24:42.548396 systemd[1]: run-containerd-runc-k8s.io-c0bd6ed3ab7fa6c8b6fdc32a95d7f3c53298667eac533b1b0b7c69c05d9c399f-runc.90jXpG.mount: Deactivated successfully. Feb 8 23:24:42.550537 systemd[1]: Started cri-containerd-c0bd6ed3ab7fa6c8b6fdc32a95d7f3c53298667eac533b1b0b7c69c05d9c399f.scope. Feb 8 23:24:42.562043 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:24:42.582805 env[1119]: time="2024-02-08T23:24:42.582765397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d8b63645-a33f-4568-b072-a75b1b543c07,Namespace:default,Attempt:0,} returns sandbox id \"c0bd6ed3ab7fa6c8b6fdc32a95d7f3c53298667eac533b1b0b7c69c05d9c399f\"" Feb 8 23:24:42.584225 env[1119]: time="2024-02-08T23:24:42.584208491Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 8 23:24:42.975504 kubelet[1392]: E0208 23:24:42.975444 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:43.027587 env[1119]: time="2024-02-08T23:24:43.027531524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:43.029115 env[1119]: time="2024-02-08T23:24:43.029086839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:43.030908 env[1119]: time="2024-02-08T23:24:43.030859905Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:43.032178 env[1119]: time="2024-02-08T23:24:43.032131576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:24:43.032738 env[1119]: time="2024-02-08T23:24:43.032697260Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 8 23:24:43.034464 env[1119]: time="2024-02-08T23:24:43.034420100Z" level=info msg="CreateContainer within sandbox \"c0bd6ed3ab7fa6c8b6fdc32a95d7f3c53298667eac533b1b0b7c69c05d9c399f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 8 23:24:43.046566 env[1119]: time="2024-02-08T23:24:43.046514459Z" level=info msg="CreateContainer within sandbox \"c0bd6ed3ab7fa6c8b6fdc32a95d7f3c53298667eac533b1b0b7c69c05d9c399f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4bf3e96fa0ee22c33305405466faa06e96dd00934e9f26d028528ea58f157408\"" Feb 8 23:24:43.047066 env[1119]: time="2024-02-08T23:24:43.047041450Z" level=info msg="StartContainer for \"4bf3e96fa0ee22c33305405466faa06e96dd00934e9f26d028528ea58f157408\"" Feb 8 23:24:43.059511 systemd[1]: Started cri-containerd-4bf3e96fa0ee22c33305405466faa06e96dd00934e9f26d028528ea58f157408.scope. 
Feb 8 23:24:43.080493 env[1119]: time="2024-02-08T23:24:43.080455917Z" level=info msg="StartContainer for \"4bf3e96fa0ee22c33305405466faa06e96dd00934e9f26d028528ea58f157408\" returns successfully" Feb 8 23:24:43.199514 kubelet[1392]: I0208 23:24:43.199473 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.750484796 podCreationTimestamp="2024-02-08 23:24:23 +0000 UTC" firstStartedPulling="2024-02-08 23:24:42.584001532 +0000 UTC m=+53.964801262" lastFinishedPulling="2024-02-08 23:24:43.03295259 +0000 UTC m=+54.413752320" observedRunningTime="2024-02-08 23:24:43.199283537 +0000 UTC m=+54.580083287" watchObservedRunningTime="2024-02-08 23:24:43.199435854 +0000 UTC m=+54.580235584" Feb 8 23:24:43.538591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485172657.mount: Deactivated successfully. Feb 8 23:24:43.877537 systemd-networkd[1025]: lxc41f83696477b: Gained IPv6LL Feb 8 23:24:43.975635 kubelet[1392]: E0208 23:24:43.975593 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:44.976196 kubelet[1392]: E0208 23:24:44.976132 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:45.976352 kubelet[1392]: E0208 23:24:45.976292 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:46.037574 env[1119]: time="2024-02-08T23:24:46.037510093Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:24:46.042705 env[1119]: time="2024-02-08T23:24:46.042668941Z" level=info msg="StopContainer for 
\"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" with timeout 1 (s)" Feb 8 23:24:46.042954 env[1119]: time="2024-02-08T23:24:46.042931674Z" level=info msg="Stop container \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" with signal terminated" Feb 8 23:24:46.048207 systemd-networkd[1025]: lxc_health: Link DOWN Feb 8 23:24:46.048215 systemd-networkd[1025]: lxc_health: Lost carrier Feb 8 23:24:46.079647 systemd[1]: cri-containerd-70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3.scope: Deactivated successfully. Feb 8 23:24:46.079861 systemd[1]: cri-containerd-70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3.scope: Consumed 5.907s CPU time. Feb 8 23:24:46.095710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3-rootfs.mount: Deactivated successfully. Feb 8 23:24:46.104295 env[1119]: time="2024-02-08T23:24:46.104257126Z" level=info msg="shim disconnected" id=70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3 Feb 8 23:24:46.104429 env[1119]: time="2024-02-08T23:24:46.104299355Z" level=warning msg="cleaning up after shim disconnected" id=70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3 namespace=k8s.io Feb 8 23:24:46.104429 env[1119]: time="2024-02-08T23:24:46.104308402Z" level=info msg="cleaning up dead shim" Feb 8 23:24:46.111114 env[1119]: time="2024-02-08T23:24:46.111065514Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2887 runtime=io.containerd.runc.v2\n" Feb 8 23:24:46.114110 env[1119]: time="2024-02-08T23:24:46.114076695Z" level=info msg="StopContainer for \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" returns successfully" Feb 8 23:24:46.114704 env[1119]: time="2024-02-08T23:24:46.114677154Z" level=info msg="StopPodSandbox for 
\"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\"" Feb 8 23:24:46.114777 env[1119]: time="2024-02-08T23:24:46.114737988Z" level=info msg="Container to stop \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:46.114777 env[1119]: time="2024-02-08T23:24:46.114755301Z" level=info msg="Container to stop \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:46.114777 env[1119]: time="2024-02-08T23:24:46.114765811Z" level=info msg="Container to stop \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:46.114777 env[1119]: time="2024-02-08T23:24:46.114775829Z" level=info msg="Container to stop \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:46.114978 env[1119]: time="2024-02-08T23:24:46.114787792Z" level=info msg="Container to stop \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:46.115993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e-shm.mount: Deactivated successfully. Feb 8 23:24:46.120309 systemd[1]: cri-containerd-e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e.scope: Deactivated successfully. Feb 8 23:24:46.134558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e-rootfs.mount: Deactivated successfully. 
Feb 8 23:24:46.140288 env[1119]: time="2024-02-08T23:24:46.140230761Z" level=info msg="shim disconnected" id=e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e Feb 8 23:24:46.140288 env[1119]: time="2024-02-08T23:24:46.140282949Z" level=warning msg="cleaning up after shim disconnected" id=e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e namespace=k8s.io Feb 8 23:24:46.140572 env[1119]: time="2024-02-08T23:24:46.140298127Z" level=info msg="cleaning up dead shim" Feb 8 23:24:46.147021 env[1119]: time="2024-02-08T23:24:46.146974107Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2917 runtime=io.containerd.runc.v2\n" Feb 8 23:24:46.147301 env[1119]: time="2024-02-08T23:24:46.147268821Z" level=info msg="TearDown network for sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" successfully" Feb 8 23:24:46.147301 env[1119]: time="2024-02-08T23:24:46.147295171Z" level=info msg="StopPodSandbox for \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" returns successfully" Feb 8 23:24:46.200025 kubelet[1392]: I0208 23:24:46.199990 1392 scope.go:115] "RemoveContainer" containerID="70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3" Feb 8 23:24:46.201015 env[1119]: time="2024-02-08T23:24:46.200987332Z" level=info msg="RemoveContainer for \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\"" Feb 8 23:24:46.206475 env[1119]: time="2024-02-08T23:24:46.206432549Z" level=info msg="RemoveContainer for \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" returns successfully" Feb 8 23:24:46.206675 kubelet[1392]: I0208 23:24:46.206652 1392 scope.go:115] "RemoveContainer" containerID="7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd" Feb 8 23:24:46.207633 env[1119]: time="2024-02-08T23:24:46.207608910Z" level=info msg="RemoveContainer for 
\"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\"" Feb 8 23:24:46.209953 env[1119]: time="2024-02-08T23:24:46.209928972Z" level=info msg="RemoveContainer for \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\" returns successfully" Feb 8 23:24:46.210048 kubelet[1392]: I0208 23:24:46.210030 1392 scope.go:115] "RemoveContainer" containerID="29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b" Feb 8 23:24:46.210748 env[1119]: time="2024-02-08T23:24:46.210725940Z" level=info msg="RemoveContainer for \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\"" Feb 8 23:24:46.213023 env[1119]: time="2024-02-08T23:24:46.212992471Z" level=info msg="RemoveContainer for \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\" returns successfully" Feb 8 23:24:46.213124 kubelet[1392]: I0208 23:24:46.213110 1392 scope.go:115] "RemoveContainer" containerID="41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76" Feb 8 23:24:46.213862 env[1119]: time="2024-02-08T23:24:46.213839713Z" level=info msg="RemoveContainer for \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\"" Feb 8 23:24:46.216092 env[1119]: time="2024-02-08T23:24:46.216070737Z" level=info msg="RemoveContainer for \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\" returns successfully" Feb 8 23:24:46.216188 kubelet[1392]: I0208 23:24:46.216174 1392 scope.go:115] "RemoveContainer" containerID="04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913" Feb 8 23:24:46.216862 env[1119]: time="2024-02-08T23:24:46.216825226Z" level=info msg="RemoveContainer for \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\"" Feb 8 23:24:46.219276 env[1119]: time="2024-02-08T23:24:46.219253851Z" level=info msg="RemoveContainer for \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\" returns successfully" Feb 8 23:24:46.219364 kubelet[1392]: I0208 23:24:46.219349 1392 scope.go:115] 
"RemoveContainer" containerID="70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3" Feb 8 23:24:46.219579 env[1119]: time="2024-02-08T23:24:46.219514491Z" level=error msg="ContainerStatus for \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\": not found" Feb 8 23:24:46.219695 kubelet[1392]: E0208 23:24:46.219681 1392 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\": not found" containerID="70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3" Feb 8 23:24:46.219741 kubelet[1392]: I0208 23:24:46.219714 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3} err="failed to get container status \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\": rpc error: code = NotFound desc = an error occurred when try to find container \"70895a37d161fac2fd04fbfe891b69ef855e9c8c6e1615a9c98bf44dbae1aec3\": not found" Feb 8 23:24:46.219741 kubelet[1392]: I0208 23:24:46.219726 1392 scope.go:115] "RemoveContainer" containerID="7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd" Feb 8 23:24:46.219860 env[1119]: time="2024-02-08T23:24:46.219830615Z" level=error msg="ContainerStatus for \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\": not found" Feb 8 23:24:46.219968 kubelet[1392]: E0208 23:24:46.219954 1392 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\": not found" containerID="7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd" Feb 8 23:24:46.220014 kubelet[1392]: I0208 23:24:46.219980 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd} err="failed to get container status \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7577b5d6e2e26367a62ebcf2cbbe616993999936b37eb51f7accbdffeed7a6bd\": not found" Feb 8 23:24:46.220014 kubelet[1392]: I0208 23:24:46.219987 1392 scope.go:115] "RemoveContainer" containerID="29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b" Feb 8 23:24:46.220115 env[1119]: time="2024-02-08T23:24:46.220088440Z" level=error msg="ContainerStatus for \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\": not found" Feb 8 23:24:46.220211 kubelet[1392]: E0208 23:24:46.220197 1392 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\": not found" containerID="29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b" Feb 8 23:24:46.220257 kubelet[1392]: I0208 23:24:46.220221 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b} err="failed to get container status \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"29be79cef3fbe62bae6c60a69f8fc0a8558dc8e3314257de4b54a8e54d85008b\": not found" Feb 8 23:24:46.220257 kubelet[1392]: I0208 23:24:46.220230 1392 scope.go:115] "RemoveContainer" containerID="41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76" Feb 8 23:24:46.220363 env[1119]: time="2024-02-08T23:24:46.220330165Z" level=error msg="ContainerStatus for \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\": not found" Feb 8 23:24:46.220440 kubelet[1392]: E0208 23:24:46.220428 1392 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\": not found" containerID="41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76" Feb 8 23:24:46.220473 kubelet[1392]: I0208 23:24:46.220453 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76} err="failed to get container status \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\": rpc error: code = NotFound desc = an error occurred when try to find container \"41849f9ae062fcec4cba71afd0981126d99cad5451348e3e96e2d5bdb31ddc76\": not found" Feb 8 23:24:46.220473 kubelet[1392]: I0208 23:24:46.220463 1392 scope.go:115] "RemoveContainer" containerID="04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913" Feb 8 23:24:46.220647 env[1119]: time="2024-02-08T23:24:46.220602096Z" level=error msg="ContainerStatus for \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\": not found" Feb 8 23:24:46.220742 kubelet[1392]: E0208 23:24:46.220730 1392 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\": not found" containerID="04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913" Feb 8 23:24:46.220770 kubelet[1392]: I0208 23:24:46.220761 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913} err="failed to get container status \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\": rpc error: code = NotFound desc = an error occurred when try to find container \"04fa0e99e49fecb4aa282407e0f0d82637ed0b4b76327d9d2ebd497b8c7e8913\": not found" Feb 8 23:24:46.254195 kubelet[1392]: I0208 23:24:46.254110 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-hostproc\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254277 kubelet[1392]: I0208 23:24:46.254198 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-xtables-lock\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254277 kubelet[1392]: I0208 23:24:46.254229 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-config-path\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 
23:24:46.254277 kubelet[1392]: I0208 23:24:46.254251 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-run\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254277 kubelet[1392]: I0208 23:24:46.254272 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-bpf-maps\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254383 kubelet[1392]: I0208 23:24:46.254298 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f16fc3b6-f120-4dc7-a106-2998161b5be3-clustermesh-secrets\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254383 kubelet[1392]: I0208 23:24:46.254321 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl5xh\" (UniqueName: \"kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-kube-api-access-hl5xh\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254383 kubelet[1392]: I0208 23:24:46.254342 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cni-path\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254446 kubelet[1392]: I0208 23:24:46.254382 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-hubble-tls\") pod 
\"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254446 kubelet[1392]: I0208 23:24:46.254408 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-kernel\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254446 kubelet[1392]: I0208 23:24:46.254429 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-etc-cni-netd\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254514 kubelet[1392]: I0208 23:24:46.254452 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-net\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254514 kubelet[1392]: W0208 23:24:46.254444 1392 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f16fc3b6-f120-4dc7-a106-2998161b5be3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:24:46.254514 kubelet[1392]: I0208 23:24:46.254472 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-cgroup\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254514 kubelet[1392]: I0208 23:24:46.254492 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-lib-modules\") pod \"f16fc3b6-f120-4dc7-a106-2998161b5be3\" (UID: \"f16fc3b6-f120-4dc7-a106-2998161b5be3\") " Feb 8 23:24:46.254658 kubelet[1392]: I0208 23:24:46.254522 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.254658 kubelet[1392]: I0208 23:24:46.254554 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.254658 kubelet[1392]: I0208 23:24:46.254572 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.254658 kubelet[1392]: I0208 23:24:46.254265 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.254658 kubelet[1392]: I0208 23:24:46.254171 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-hostproc" (OuterVolumeSpecName: "hostproc") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.254944 kubelet[1392]: I0208 23:24:46.254827 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.255174 kubelet[1392]: I0208 23:24:46.255013 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cni-path" (OuterVolumeSpecName: "cni-path") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.255174 kubelet[1392]: I0208 23:24:46.255039 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.255174 kubelet[1392]: I0208 23:24:46.255058 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.255174 kubelet[1392]: I0208 23:24:46.255074 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:46.257267 kubelet[1392]: I0208 23:24:46.257247 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-kube-api-access-hl5xh" (OuterVolumeSpecName: "kube-api-access-hl5xh") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "kube-api-access-hl5xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:46.257686 kubelet[1392]: I0208 23:24:46.257671 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:24:46.257981 kubelet[1392]: I0208 23:24:46.257965 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f16fc3b6-f120-4dc7-a106-2998161b5be3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:24:46.258058 kubelet[1392]: I0208 23:24:46.257967 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f16fc3b6-f120-4dc7-a106-2998161b5be3" (UID: "f16fc3b6-f120-4dc7-a106-2998161b5be3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:46.258604 systemd[1]: var-lib-kubelet-pods-f16fc3b6\x2df120\x2d4dc7\x2da106\x2d2998161b5be3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhl5xh.mount: Deactivated successfully. 
Feb 8 23:24:46.355266 kubelet[1392]: I0208 23:24:46.355228 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-cgroup\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355266 kubelet[1392]: I0208 23:24:46.355256 1392 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-lib-modules\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355266 kubelet[1392]: I0208 23:24:46.355266 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-net\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355266 kubelet[1392]: I0208 23:24:46.355274 1392 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-hostproc\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355283 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-config-path\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355291 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cilium-run\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355299 1392 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-bpf-maps\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355306 1392 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-xtables-lock\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355313 1392 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-cni-path\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355320 1392 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-hubble-tls\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355328 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-host-proc-sys-kernel\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355470 kubelet[1392]: I0208 23:24:46.355336 1392 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f16fc3b6-f120-4dc7-a106-2998161b5be3-etc-cni-netd\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355634 kubelet[1392]: I0208 23:24:46.355343 1392 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f16fc3b6-f120-4dc7-a106-2998161b5be3-clustermesh-secrets\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.355634 kubelet[1392]: I0208 23:24:46.355352 1392 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hl5xh\" (UniqueName: \"kubernetes.io/projected/f16fc3b6-f120-4dc7-a106-2998161b5be3-kube-api-access-hl5xh\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:46.503987 systemd[1]: Removed slice kubepods-burstable-podf16fc3b6_f120_4dc7_a106_2998161b5be3.slice. Feb 8 23:24:46.504088 systemd[1]: kubepods-burstable-podf16fc3b6_f120_4dc7_a106_2998161b5be3.slice: Consumed 5.986s CPU time. 
Feb 8 23:24:46.976992 kubelet[1392]: E0208 23:24:46.976917 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:47.027927 systemd[1]: var-lib-kubelet-pods-f16fc3b6\x2df120\x2d4dc7\x2da106\x2d2998161b5be3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:24:47.028045 systemd[1]: var-lib-kubelet-pods-f16fc3b6\x2df120\x2d4dc7\x2da106\x2d2998161b5be3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:24:47.082157 kubelet[1392]: I0208 23:24:47.082116 1392 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f16fc3b6-f120-4dc7-a106-2998161b5be3 path="/var/lib/kubelet/pods/f16fc3b6-f120-4dc7-a106-2998161b5be3/volumes" Feb 8 23:24:47.977682 kubelet[1392]: E0208 23:24:47.977635 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:48.555214 kubelet[1392]: I0208 23:24:48.555174 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:24:48.555387 kubelet[1392]: E0208 23:24:48.555232 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f16fc3b6-f120-4dc7-a106-2998161b5be3" containerName="apply-sysctl-overwrites" Feb 8 23:24:48.555387 kubelet[1392]: E0208 23:24:48.555242 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f16fc3b6-f120-4dc7-a106-2998161b5be3" containerName="mount-bpf-fs" Feb 8 23:24:48.555387 kubelet[1392]: E0208 23:24:48.555251 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f16fc3b6-f120-4dc7-a106-2998161b5be3" containerName="clean-cilium-state" Feb 8 23:24:48.555387 kubelet[1392]: E0208 23:24:48.555258 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f16fc3b6-f120-4dc7-a106-2998161b5be3" containerName="cilium-agent" Feb 8 23:24:48.555387 kubelet[1392]: E0208 23:24:48.555265 1392 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="f16fc3b6-f120-4dc7-a106-2998161b5be3" containerName="mount-cgroup" Feb 8 23:24:48.555387 kubelet[1392]: I0208 23:24:48.555286 1392 memory_manager.go:346] "RemoveStaleState removing state" podUID="f16fc3b6-f120-4dc7-a106-2998161b5be3" containerName="cilium-agent" Feb 8 23:24:48.555735 kubelet[1392]: I0208 23:24:48.555709 1392 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:24:48.560050 systemd[1]: Created slice kubepods-burstable-pod594e68a1_87cb_4af6_9290_911b1e0882e8.slice. Feb 8 23:24:48.578000 systemd[1]: Created slice kubepods-besteffort-pode533728c_03e7_4d85_90e6_9a2537498765.slice. Feb 8 23:24:48.666847 kubelet[1392]: I0208 23:24:48.666813 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-run\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.666847 kubelet[1392]: I0208 23:24:48.666863 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-etc-cni-netd\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667086 kubelet[1392]: I0208 23:24:48.666882 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-lib-modules\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667086 kubelet[1392]: I0208 23:24:48.666901 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-config-path\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667086 kubelet[1392]: I0208 23:24:48.666916 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-hubble-tls\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667086 kubelet[1392]: I0208 23:24:48.666931 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495qq\" (UniqueName: \"kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-kube-api-access-495qq\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667086 kubelet[1392]: I0208 23:24:48.666951 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-hostproc\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667086 kubelet[1392]: I0208 23:24:48.666989 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-cgroup\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667292 kubelet[1392]: I0208 23:24:48.667006 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cni-path\") pod \"cilium-97dqf\" (UID: 
\"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667292 kubelet[1392]: I0208 23:24:48.667021 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-xtables-lock\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667292 kubelet[1392]: I0208 23:24:48.667039 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-clustermesh-secrets\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667292 kubelet[1392]: I0208 23:24:48.667091 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-bpf-maps\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667292 kubelet[1392]: I0208 23:24:48.667132 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-ipsec-secrets\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667292 kubelet[1392]: I0208 23:24:48.667155 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-net\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667470 kubelet[1392]: I0208 
23:24:48.667180 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-kernel\") pod \"cilium-97dqf\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " pod="kube-system/cilium-97dqf" Feb 8 23:24:48.667470 kubelet[1392]: I0208 23:24:48.667210 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e533728c-03e7-4d85-90e6-9a2537498765-cilium-config-path\") pod \"cilium-operator-574c4bb98d-gh6hb\" (UID: \"e533728c-03e7-4d85-90e6-9a2537498765\") " pod="kube-system/cilium-operator-574c4bb98d-gh6hb" Feb 8 23:24:48.667470 kubelet[1392]: I0208 23:24:48.667235 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdqlv\" (UniqueName: \"kubernetes.io/projected/e533728c-03e7-4d85-90e6-9a2537498765-kube-api-access-rdqlv\") pod \"cilium-operator-574c4bb98d-gh6hb\" (UID: \"e533728c-03e7-4d85-90e6-9a2537498765\") " pod="kube-system/cilium-operator-574c4bb98d-gh6hb" Feb 8 23:24:48.877106 kubelet[1392]: E0208 23:24:48.876997 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:48.877613 env[1119]: time="2024-02-08T23:24:48.877572208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97dqf,Uid:594e68a1-87cb-4af6-9290-911b1e0882e8,Namespace:kube-system,Attempt:0,}" Feb 8 23:24:48.880790 kubelet[1392]: E0208 23:24:48.880773 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:48.881083 env[1119]: time="2024-02-08T23:24:48.881056055Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-574c4bb98d-gh6hb,Uid:e533728c-03e7-4d85-90e6-9a2537498765,Namespace:kube-system,Attempt:0,}" Feb 8 23:24:48.890438 env[1119]: time="2024-02-08T23:24:48.890384366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:48.890438 env[1119]: time="2024-02-08T23:24:48.890415725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:48.890603 env[1119]: time="2024-02-08T23:24:48.890424902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:48.890771 env[1119]: time="2024-02-08T23:24:48.890739774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124 pid=2945 runtime=io.containerd.runc.v2 Feb 8 23:24:48.894838 env[1119]: time="2024-02-08T23:24:48.894780528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:24:48.894960 env[1119]: time="2024-02-08T23:24:48.894827015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:24:48.894960 env[1119]: time="2024-02-08T23:24:48.894843846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:24:48.895053 env[1119]: time="2024-02-08T23:24:48.894959083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6f29955952288f66a8ad1b2a414b3444d794e0ce959f53766b9d5070e0cbbf9 pid=2961 runtime=io.containerd.runc.v2 Feb 8 23:24:48.900715 systemd[1]: Started cri-containerd-69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124.scope. Feb 8 23:24:48.907555 systemd[1]: Started cri-containerd-b6f29955952288f66a8ad1b2a414b3444d794e0ce959f53766b9d5070e0cbbf9.scope. Feb 8 23:24:48.921211 env[1119]: time="2024-02-08T23:24:48.921154591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97dqf,Uid:594e68a1-87cb-4af6-9290-911b1e0882e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124\"" Feb 8 23:24:48.922401 kubelet[1392]: E0208 23:24:48.922246 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:24:48.924201 env[1119]: time="2024-02-08T23:24:48.924164678Z" level=info msg="CreateContainer within sandbox \"69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:24:48.934898 kubelet[1392]: E0208 23:24:48.934868 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:48.935465 env[1119]: time="2024-02-08T23:24:48.935433976Z" level=info msg="CreateContainer within sandbox \"69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\"" Feb 8 23:24:48.935843 env[1119]: time="2024-02-08T23:24:48.935816274Z" level=info 
msg="StartContainer for \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\"" Feb 8 23:24:48.939575 env[1119]: time="2024-02-08T23:24:48.939542095Z" level=info msg="StopPodSandbox for \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\"" Feb 8 23:24:48.939702 env[1119]: time="2024-02-08T23:24:48.939605014Z" level=info msg="TearDown network for sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" successfully" Feb 8 23:24:48.939702 env[1119]: time="2024-02-08T23:24:48.939633327Z" level=info msg="StopPodSandbox for \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" returns successfully" Feb 8 23:24:48.939927 env[1119]: time="2024-02-08T23:24:48.939908354Z" level=info msg="RemovePodSandbox for \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\"" Feb 8 23:24:48.939983 env[1119]: time="2024-02-08T23:24:48.939927490Z" level=info msg="Forcibly stopping sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\"" Feb 8 23:24:48.939983 env[1119]: time="2024-02-08T23:24:48.939970561Z" level=info msg="TearDown network for sandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" successfully" Feb 8 23:24:48.944052 env[1119]: time="2024-02-08T23:24:48.944012957Z" level=info msg="RemovePodSandbox \"e84ce84f9465dce90800b88d652945202c993a539d23e23230d6cdfbea09389e\" returns successfully" Feb 8 23:24:48.947011 env[1119]: time="2024-02-08T23:24:48.946971557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-gh6hb,Uid:e533728c-03e7-4d85-90e6-9a2537498765,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6f29955952288f66a8ad1b2a414b3444d794e0ce959f53766b9d5070e0cbbf9\"" Feb 8 23:24:48.947679 kubelet[1392]: E0208 23:24:48.947651 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 
23:24:48.949102 env[1119]: time="2024-02-08T23:24:48.949067787Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:24:48.949643 systemd[1]: Started cri-containerd-e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2.scope. Feb 8 23:24:48.958339 systemd[1]: cri-containerd-e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2.scope: Deactivated successfully. Feb 8 23:24:48.958566 systemd[1]: Stopped cri-containerd-e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2.scope. Feb 8 23:24:48.972339 env[1119]: time="2024-02-08T23:24:48.972288535Z" level=info msg="shim disconnected" id=e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2 Feb 8 23:24:48.972489 env[1119]: time="2024-02-08T23:24:48.972344009Z" level=warning msg="cleaning up after shim disconnected" id=e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2 namespace=k8s.io Feb 8 23:24:48.972489 env[1119]: time="2024-02-08T23:24:48.972358786Z" level=info msg="cleaning up dead shim" Feb 8 23:24:48.978386 kubelet[1392]: E0208 23:24:48.978324 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:24:48.978723 env[1119]: time="2024-02-08T23:24:48.978687570Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3046 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:24:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:24:48.979082 env[1119]: time="2024-02-08T23:24:48.978980269Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Feb 8 23:24:48.979451 env[1119]: 
time="2024-02-08T23:24:48.979414576Z" level=error msg="Failed to pipe stdout of container \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\"" error="reading from a closed fifo" Feb 8 23:24:48.979504 env[1119]: time="2024-02-08T23:24:48.979413744Z" level=error msg="Failed to pipe stderr of container \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\"" error="reading from a closed fifo" Feb 8 23:24:48.982211 env[1119]: time="2024-02-08T23:24:48.982168401Z" level=error msg="StartContainer for \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:24:48.982427 kubelet[1392]: E0208 23:24:48.982406 1392 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2" Feb 8 23:24:48.982561 kubelet[1392]: E0208 23:24:48.982545 1392 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:24:48.982561 kubelet[1392]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:24:48.982561 kubelet[1392]: rm /hostbin/cilium-mount Feb 8 23:24:48.982682 kubelet[1392]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-495qq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-97dqf_kube-system(594e68a1-87cb-4af6-9290-911b1e0882e8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:24:48.982682 kubelet[1392]: E0208 23:24:48.982595 1392 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-97dqf" podUID=594e68a1-87cb-4af6-9290-911b1e0882e8 Feb 8 23:24:49.183391 kubelet[1392]: E0208 23:24:49.183310 1392 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:24:49.206283 env[1119]: time="2024-02-08T23:24:49.206247534Z" level=info msg="StopPodSandbox for \"69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124\"" Feb 8 23:24:49.206385 env[1119]: time="2024-02-08T23:24:49.206303018Z" level=info msg="Container to stop \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:24:49.211131 systemd[1]: cri-containerd-69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124.scope: Deactivated successfully. 
Feb 8 23:24:49.229562 env[1119]: time="2024-02-08T23:24:49.229507579Z" level=info msg="shim disconnected" id=69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124 Feb 8 23:24:49.229562 env[1119]: time="2024-02-08T23:24:49.229553345Z" level=warning msg="cleaning up after shim disconnected" id=69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124 namespace=k8s.io Feb 8 23:24:49.229562 env[1119]: time="2024-02-08T23:24:49.229563424Z" level=info msg="cleaning up dead shim" Feb 8 23:24:49.235468 env[1119]: time="2024-02-08T23:24:49.235432923Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\n" Feb 8 23:24:49.235740 env[1119]: time="2024-02-08T23:24:49.235709994Z" level=info msg="TearDown network for sandbox \"69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124\" successfully" Feb 8 23:24:49.235774 env[1119]: time="2024-02-08T23:24:49.235737886Z" level=info msg="StopPodSandbox for \"69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124\" returns successfully" Feb 8 23:24:49.271133 kubelet[1392]: I0208 23:24:49.271102 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-etc-cni-netd\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271133 kubelet[1392]: I0208 23:24:49.271133 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271157 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-cgroup\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271192 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-clustermesh-secrets\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271197 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271219 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-config-path\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271241 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-lib-modules\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271265 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-495qq\" (UniqueName: \"kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-kube-api-access-495qq\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271284 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-net\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271288 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271305 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-kernel\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271332 kubelet[1392]: I0208 23:24:49.271327 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-xtables-lock\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271347 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-hostproc\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271386 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cni-path\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271392 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271410 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-run\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271433 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-bpf-maps\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271458 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-ipsec-secrets\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271484 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-hubble-tls\") pod \"594e68a1-87cb-4af6-9290-911b1e0882e8\" (UID: \"594e68a1-87cb-4af6-9290-911b1e0882e8\") " Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271515 1392 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-etc-cni-netd\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271528 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-cgroup\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:49.271689 
kubelet[1392]: I0208 23:24:49.271539 1392 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-lib-modules\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271552 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-kernel\") on node \"10.0.0.86\" DevicePath \"\"" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271616 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271634 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271647 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271658 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.271689 kubelet[1392]: I0208 23:24:49.271669 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.272174 kubelet[1392]: I0208 23:24:49.271681 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:24:49.272174 kubelet[1392]: W0208 23:24:49.271866 1392 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/594e68a1-87cb-4af6-9290-911b1e0882e8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:24:49.273550 kubelet[1392]: I0208 23:24:49.273530 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:24:49.273702 kubelet[1392]: I0208 23:24:49.273665 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:24:49.274578 kubelet[1392]: I0208 23:24:49.274546 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:24:49.274650 kubelet[1392]: I0208 23:24:49.274579 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:24:49.275592 kubelet[1392]: I0208 23:24:49.275562 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-kube-api-access-495qq" (OuterVolumeSpecName: "kube-api-access-495qq") pod "594e68a1-87cb-4af6-9290-911b1e0882e8" (UID: "594e68a1-87cb-4af6-9290-911b1e0882e8"). InnerVolumeSpecName "kube-api-access-495qq". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:24:49.371925 kubelet[1392]: I0208 23:24:49.371894 1392 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cni-path\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.371925 kubelet[1392]: I0208 23:24:49.371917 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-run\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.371925 kubelet[1392]: I0208 23:24:49.371926 1392 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-hostproc\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371935 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-ipsec-secrets\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371944 1392 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-hubble-tls\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371953 1392 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-bpf-maps\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371962 1392 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/594e68a1-87cb-4af6-9290-911b1e0882e8-clustermesh-secrets\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371970 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/594e68a1-87cb-4af6-9290-911b1e0882e8-cilium-config-path\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371978 1392 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-495qq\" (UniqueName: \"kubernetes.io/projected/594e68a1-87cb-4af6-9290-911b1e0882e8-kube-api-access-495qq\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371985 1392 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-xtables-lock\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.372089 kubelet[1392]: I0208 23:24:49.371993 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/594e68a1-87cb-4af6-9290-911b1e0882e8-host-proc-sys-net\") on node \"10.0.0.86\" DevicePath \"\""
Feb 8 23:24:49.771924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69df07b3d32e598816341f7b54b455ba73b2814145fd33515cd1fa0ba82ca124-shm.mount: Deactivated successfully.
Feb 8 23:24:49.772031 systemd[1]: var-lib-kubelet-pods-594e68a1\x2d87cb\x2d4af6\x2d9290\x2d911b1e0882e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d495qq.mount: Deactivated successfully.
Feb 8 23:24:49.772103 systemd[1]: var-lib-kubelet-pods-594e68a1\x2d87cb\x2d4af6\x2d9290\x2d911b1e0882e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 8 23:24:49.772168 systemd[1]: var-lib-kubelet-pods-594e68a1\x2d87cb\x2d4af6\x2d9290\x2d911b1e0882e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:24:49.772235 systemd[1]: var-lib-kubelet-pods-594e68a1\x2d87cb\x2d4af6\x2d9290\x2d911b1e0882e8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:24:49.979182 kubelet[1392]: E0208 23:24:49.979129 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:50.209033 kubelet[1392]: I0208 23:24:50.209015 1392 scope.go:115] "RemoveContainer" containerID="e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2"
Feb 8 23:24:50.209741 env[1119]: time="2024-02-08T23:24:50.209715804Z" level=info msg="RemoveContainer for \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\""
Feb 8 23:24:50.213332 systemd[1]: Removed slice kubepods-burstable-pod594e68a1_87cb_4af6_9290_911b1e0882e8.slice.
Feb 8 23:24:50.214819 env[1119]: time="2024-02-08T23:24:50.214789326Z" level=info msg="RemoveContainer for \"e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2\" returns successfully"
Feb 8 23:24:50.245925 kubelet[1392]: I0208 23:24:50.245796 1392 topology_manager.go:212] "Topology Admit Handler"
Feb 8 23:24:50.245925 kubelet[1392]: E0208 23:24:50.245884 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="594e68a1-87cb-4af6-9290-911b1e0882e8" containerName="mount-cgroup"
Feb 8 23:24:50.245925 kubelet[1392]: I0208 23:24:50.245911 1392 memory_manager.go:346] "RemoveStaleState removing state" podUID="594e68a1-87cb-4af6-9290-911b1e0882e8" containerName="mount-cgroup"
Feb 8 23:24:50.250591 systemd[1]: Created slice kubepods-burstable-pod3d7261f4_01fd_41b0_bd7a_d4d3964f029f.slice.
Feb 8 23:24:50.276983 kubelet[1392]: I0208 23:24:50.276957 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-xtables-lock\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277081 kubelet[1392]: I0208 23:24:50.277000 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-cilium-ipsec-secrets\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277081 kubelet[1392]: I0208 23:24:50.277022 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-bpf-maps\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277081 kubelet[1392]: I0208 23:24:50.277039 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-cilium-cgroup\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277081 kubelet[1392]: I0208 23:24:50.277056 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-etc-cni-netd\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277081 kubelet[1392]: I0208 23:24:50.277080 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-host-proc-sys-net\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277255 kubelet[1392]: I0208 23:24:50.277127 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28lw\" (UniqueName: \"kubernetes.io/projected/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-kube-api-access-h28lw\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277255 kubelet[1392]: I0208 23:24:50.277151 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-cni-path\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277255 kubelet[1392]: I0208 23:24:50.277195 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-hubble-tls\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277255 kubelet[1392]: I0208 23:24:50.277248 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-cilium-run\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277355 kubelet[1392]: I0208 23:24:50.277284 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-clustermesh-secrets\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277355 kubelet[1392]: I0208 23:24:50.277301 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-cilium-config-path\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277355 kubelet[1392]: I0208 23:24:50.277344 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-hostproc\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277467 kubelet[1392]: I0208 23:24:50.277394 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-lib-modules\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.277467 kubelet[1392]: I0208 23:24:50.277420 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d7261f4-01fd-41b0-bd7a-d4d3964f029f-host-proc-sys-kernel\") pod \"cilium-d7m7f\" (UID: \"3d7261f4-01fd-41b0-bd7a-d4d3964f029f\") " pod="kube-system/cilium-d7m7f"
Feb 8 23:24:50.560661 kubelet[1392]: E0208 23:24:50.559573 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:50.567099 env[1119]: time="2024-02-08T23:24:50.567014605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7m7f,Uid:3d7261f4-01fd-41b0-bd7a-d4d3964f029f,Namespace:kube-system,Attempt:0,}"
Feb 8 23:24:50.589888 env[1119]: time="2024-02-08T23:24:50.589750840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:24:50.589888 env[1119]: time="2024-02-08T23:24:50.589789623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:24:50.589888 env[1119]: time="2024-02-08T23:24:50.589802317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:24:50.593865 env[1119]: time="2024-02-08T23:24:50.589986643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06 pid=3108 runtime=io.containerd.runc.v2
Feb 8 23:24:50.604849 systemd[1]: Started cri-containerd-03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06.scope.
Feb 8 23:24:50.623333 env[1119]: time="2024-02-08T23:24:50.623293036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7m7f,Uid:3d7261f4-01fd-41b0-bd7a-d4d3964f029f,Namespace:kube-system,Attempt:0,} returns sandbox id \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\""
Feb 8 23:24:50.623952 kubelet[1392]: E0208 23:24:50.623934 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:50.625425 env[1119]: time="2024-02-08T23:24:50.625398152Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:24:50.636461 env[1119]: time="2024-02-08T23:24:50.636428395Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37\""
Feb 8 23:24:50.636790 env[1119]: time="2024-02-08T23:24:50.636767191Z" level=info msg="StartContainer for \"c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37\""
Feb 8 23:24:50.647797 systemd[1]: Started cri-containerd-c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37.scope.
Feb 8 23:24:50.667840 env[1119]: time="2024-02-08T23:24:50.667789223Z" level=info msg="StartContainer for \"c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37\" returns successfully"
Feb 8 23:24:50.671993 systemd[1]: cri-containerd-c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37.scope: Deactivated successfully.
Feb 8 23:24:50.722893 env[1119]: time="2024-02-08T23:24:50.722849006Z" level=info msg="shim disconnected" id=c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37
Feb 8 23:24:50.722893 env[1119]: time="2024-02-08T23:24:50.722885595Z" level=warning msg="cleaning up after shim disconnected" id=c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37 namespace=k8s.io
Feb 8 23:24:50.722893 env[1119]: time="2024-02-08T23:24:50.722893750Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:50.728767 env[1119]: time="2024-02-08T23:24:50.728733461Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3191 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:50.980024 kubelet[1392]: E0208 23:24:50.979973 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:51.081583 kubelet[1392]: I0208 23:24:51.081542 1392 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=594e68a1-87cb-4af6-9290-911b1e0882e8 path="/var/lib/kubelet/pods/594e68a1-87cb-4af6-9290-911b1e0882e8/volumes"
Feb 8 23:24:51.212151 kubelet[1392]: E0208 23:24:51.212118 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:51.213661 env[1119]: time="2024-02-08T23:24:51.213623712Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 8 23:24:51.224394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736050733.mount: Deactivated successfully.
Feb 8 23:24:51.227082 env[1119]: time="2024-02-08T23:24:51.227044523Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d\""
Feb 8 23:24:51.227510 env[1119]: time="2024-02-08T23:24:51.227487145Z" level=info msg="StartContainer for \"00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d\""
Feb 8 23:24:51.241865 systemd[1]: Started cri-containerd-00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d.scope.
Feb 8 23:24:51.262288 env[1119]: time="2024-02-08T23:24:51.262240750Z" level=info msg="StartContainer for \"00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d\" returns successfully"
Feb 8 23:24:51.265068 systemd[1]: cri-containerd-00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d.scope: Deactivated successfully.
Feb 8 23:24:51.439499 env[1119]: time="2024-02-08T23:24:51.439449196Z" level=info msg="shim disconnected" id=00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d
Feb 8 23:24:51.439499 env[1119]: time="2024-02-08T23:24:51.439497447Z" level=warning msg="cleaning up after shim disconnected" id=00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d namespace=k8s.io
Feb 8 23:24:51.439499 env[1119]: time="2024-02-08T23:24:51.439506554Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:51.445894 env[1119]: time="2024-02-08T23:24:51.445859147Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3253 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:51.465336 env[1119]: time="2024-02-08T23:24:51.465305076Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:51.467441 env[1119]: time="2024-02-08T23:24:51.467415261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:51.469067 env[1119]: time="2024-02-08T23:24:51.469017991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:24:51.469590 env[1119]: time="2024-02-08T23:24:51.469558126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 8 23:24:51.471044 env[1119]: time="2024-02-08T23:24:51.471019871Z" level=info msg="CreateContainer within sandbox \"b6f29955952288f66a8ad1b2a414b3444d794e0ce959f53766b9d5070e0cbbf9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 8 23:24:51.483023 env[1119]: time="2024-02-08T23:24:51.482970671Z" level=info msg="CreateContainer within sandbox \"b6f29955952288f66a8ad1b2a414b3444d794e0ce959f53766b9d5070e0cbbf9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fd2b694871acf806289f0884db9a32d32b90dbec83118c3cbbd5bd3ecf45309c\""
Feb 8 23:24:51.483454 env[1119]: time="2024-02-08T23:24:51.483418753Z" level=info msg="StartContainer for \"fd2b694871acf806289f0884db9a32d32b90dbec83118c3cbbd5bd3ecf45309c\""
Feb 8 23:24:51.497061 systemd[1]: Started cri-containerd-fd2b694871acf806289f0884db9a32d32b90dbec83118c3cbbd5bd3ecf45309c.scope.
Feb 8 23:24:51.522184 env[1119]: time="2024-02-08T23:24:51.522125013Z" level=info msg="StartContainer for \"fd2b694871acf806289f0884db9a32d32b90dbec83118c3cbbd5bd3ecf45309c\" returns successfully"
Feb 8 23:24:51.772272 systemd[1]: run-containerd-runc-k8s.io-00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d-runc.69eCsf.mount: Deactivated successfully.
Feb 8 23:24:51.772358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d-rootfs.mount: Deactivated successfully.
Feb 8 23:24:51.805949 kubelet[1392]: I0208 23:24:51.805914 1392 setters.go:548] "Node became not ready" node="10.0.0.86" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:24:51.805852956 +0000 UTC m=+63.186652686 LastTransitionTime:2024-02-08 23:24:51.805852956 +0000 UTC m=+63.186652686 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 8 23:24:51.980293 kubelet[1392]: E0208 23:24:51.980239 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:52.077386 kubelet[1392]: W0208 23:24:52.077256 1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod594e68a1_87cb_4af6_9290_911b1e0882e8.slice/cri-containerd-e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2.scope WatchSource:0}: container "e7373d8f350f4ead75ecc6aad698a4ce3a84bd0491dd3c208eb260a7bbf97be2" in namespace "k8s.io": not found
Feb 8 23:24:52.216476 kubelet[1392]: E0208 23:24:52.216438 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:52.218050 kubelet[1392]: E0208 23:24:52.218009 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:52.218187 env[1119]: time="2024-02-08T23:24:52.217999247Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 8 23:24:52.230756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951317992.mount: Deactivated successfully.
Feb 8 23:24:52.234482 env[1119]: time="2024-02-08T23:24:52.234434738Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548\""
Feb 8 23:24:52.234954 env[1119]: time="2024-02-08T23:24:52.234915651Z" level=info msg="StartContainer for \"4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548\""
Feb 8 23:24:52.252982 systemd[1]: Started cri-containerd-4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548.scope.
Feb 8 23:24:52.278016 systemd[1]: cri-containerd-4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548.scope: Deactivated successfully.
Feb 8 23:24:52.281321 env[1119]: time="2024-02-08T23:24:52.281273084Z" level=info msg="StartContainer for \"4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548\" returns successfully"
Feb 8 23:24:52.301709 env[1119]: time="2024-02-08T23:24:52.301654415Z" level=info msg="shim disconnected" id=4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548
Feb 8 23:24:52.301709 env[1119]: time="2024-02-08T23:24:52.301710290Z" level=warning msg="cleaning up after shim disconnected" id=4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548 namespace=k8s.io
Feb 8 23:24:52.301912 env[1119]: time="2024-02-08T23:24:52.301722393Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:52.308345 env[1119]: time="2024-02-08T23:24:52.308306600Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3348 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:52.771754 systemd[1]: run-containerd-runc-k8s.io-4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548-runc.iv5omO.mount: Deactivated successfully.
Feb 8 23:24:52.771865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548-rootfs.mount: Deactivated successfully.
Feb 8 23:24:52.981280 kubelet[1392]: E0208 23:24:52.981219 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:53.220665 kubelet[1392]: E0208 23:24:53.220637 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:53.220665 kubelet[1392]: E0208 23:24:53.220637 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:53.222559 env[1119]: time="2024-02-08T23:24:53.222504683Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:24:53.232554 kubelet[1392]: I0208 23:24:53.232517 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-gh6hb" podStartSLOduration=2.711492614 podCreationTimestamp="2024-02-08 23:24:48 +0000 UTC" firstStartedPulling="2024-02-08 23:24:48.948795395 +0000 UTC m=+60.329595125" lastFinishedPulling="2024-02-08 23:24:51.469779431 +0000 UTC m=+62.850579162" observedRunningTime="2024-02-08 23:24:52.237483765 +0000 UTC m=+63.618283505" watchObservedRunningTime="2024-02-08 23:24:53.232476651 +0000 UTC m=+64.613276391"
Feb 8 23:24:53.237914 env[1119]: time="2024-02-08T23:24:53.237859812Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd\""
Feb 8 23:24:53.238422 env[1119]: time="2024-02-08T23:24:53.238362686Z" level=info msg="StartContainer for \"9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd\""
Feb 8 23:24:53.252800 systemd[1]: Started cri-containerd-9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd.scope.
Feb 8 23:24:53.272668 systemd[1]: cri-containerd-9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd.scope: Deactivated successfully.
Feb 8 23:24:53.277722 env[1119]: time="2024-02-08T23:24:53.277671671Z" level=info msg="StartContainer for \"9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd\" returns successfully"
Feb 8 23:24:53.294943 env[1119]: time="2024-02-08T23:24:53.294889548Z" level=info msg="shim disconnected" id=9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd
Feb 8 23:24:53.294943 env[1119]: time="2024-02-08T23:24:53.294944581Z" level=warning msg="cleaning up after shim disconnected" id=9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd namespace=k8s.io
Feb 8 23:24:53.295137 env[1119]: time="2024-02-08T23:24:53.294954360Z" level=info msg="cleaning up dead shim"
Feb 8 23:24:53.301472 env[1119]: time="2024-02-08T23:24:53.301425022Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:24:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3402 runtime=io.containerd.runc.v2\n"
Feb 8 23:24:53.772231 systemd[1]: run-containerd-runc-k8s.io-9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd-runc.NETsxV.mount: Deactivated successfully.
Feb 8 23:24:53.772361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd-rootfs.mount: Deactivated successfully.
Feb 8 23:24:53.981736 kubelet[1392]: E0208 23:24:53.981679 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:54.184393 kubelet[1392]: E0208 23:24:54.184277 1392 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:24:54.224306 kubelet[1392]: E0208 23:24:54.224282 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:54.226197 env[1119]: time="2024-02-08T23:24:54.226159343Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:24:54.240238 env[1119]: time="2024-02-08T23:24:54.240188689Z" level=info msg="CreateContainer within sandbox \"03f26c3ed2fb4adb7b5ae2067a7882b8cc8bbad719590afee44570dd75335f06\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"612e941ad10ae2fb3a365b03dfea31d873b9d06aff4ec287b0810425f5a22d96\""
Feb 8 23:24:54.240712 env[1119]: time="2024-02-08T23:24:54.240663861Z" level=info msg="StartContainer for \"612e941ad10ae2fb3a365b03dfea31d873b9d06aff4ec287b0810425f5a22d96\""
Feb 8 23:24:54.254459 systemd[1]: Started cri-containerd-612e941ad10ae2fb3a365b03dfea31d873b9d06aff4ec287b0810425f5a22d96.scope.
Feb 8 23:24:54.278673 env[1119]: time="2024-02-08T23:24:54.278616342Z" level=info msg="StartContainer for \"612e941ad10ae2fb3a365b03dfea31d873b9d06aff4ec287b0810425f5a22d96\" returns successfully"
Feb 8 23:24:54.502397 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 8 23:24:54.982294 kubelet[1392]: E0208 23:24:54.982235 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:55.186796 kubelet[1392]: W0208 23:24:55.186759 1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d7261f4_01fd_41b0_bd7a_d4d3964f029f.slice/cri-containerd-c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37.scope WatchSource:0}: task c81dde4e2340266b38f603f37bfe1f879d2820fa978a557d82bc74393047ba37 not found: not found
Feb 8 23:24:55.227735 kubelet[1392]: E0208 23:24:55.227711 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:55.238455 kubelet[1392]: I0208 23:24:55.238392 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d7m7f" podStartSLOduration=5.238356962 podCreationTimestamp="2024-02-08 23:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:24:55.237963273 +0000 UTC m=+66.618763003" watchObservedRunningTime="2024-02-08 23:24:55.238356962 +0000 UTC m=+66.619156692"
Feb 8 23:24:55.982738 kubelet[1392]: E0208 23:24:55.982660 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:56.560396 kubelet[1392]: E0208 23:24:56.560344 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:56.925628 systemd-networkd[1025]: lxc_health: Link UP
Feb 8 23:24:56.933167 systemd-networkd[1025]: lxc_health: Gained carrier
Feb 8 23:24:56.933402 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:24:56.983616 kubelet[1392]: E0208 23:24:56.983562 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:57.049145 kubelet[1392]: E0208 23:24:57.049102 1392 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56184->127.0.0.1:35599: write tcp 127.0.0.1:56184->127.0.0.1:35599: write: broken pipe
Feb 8 23:24:57.984305 kubelet[1392]: E0208 23:24:57.984257 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:58.277521 systemd-networkd[1025]: lxc_health: Gained IPv6LL
Feb 8 23:24:58.296699 kubelet[1392]: W0208 23:24:58.296647 1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d7261f4_01fd_41b0_bd7a_d4d3964f029f.slice/cri-containerd-00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d.scope WatchSource:0}: task 00858eb49bdad0601598a8cc0a5b60076d8d8694a60d8f8ef014da57765e920d not found: not found
Feb 8 23:24:58.561183 kubelet[1392]: E0208 23:24:58.561075 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:58.984499 kubelet[1392]: E0208 23:24:58.984442 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:24:59.104561 systemd[1]: run-containerd-runc-k8s.io-612e941ad10ae2fb3a365b03dfea31d873b9d06aff4ec287b0810425f5a22d96-runc.pblsmW.mount: Deactivated successfully.
Feb 8 23:24:59.234679 kubelet[1392]: E0208 23:24:59.234560 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:24:59.985393 kubelet[1392]: E0208 23:24:59.985329 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:00.985623 kubelet[1392]: E0208 23:25:00.985556 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:01.406227 kubelet[1392]: W0208 23:25:01.406101 1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d7261f4_01fd_41b0_bd7a_d4d3964f029f.slice/cri-containerd-4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548.scope WatchSource:0}: task 4cbd61600252551a42965487f627bacd91c54214b9c588366dea16646260a548 not found: not found
Feb 8 23:25:01.986575 kubelet[1392]: E0208 23:25:01.986537 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:02.987530 kubelet[1392]: E0208 23:25:02.987491 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:03.330813 systemd[1]: run-containerd-runc-k8s.io-612e941ad10ae2fb3a365b03dfea31d873b9d06aff4ec287b0810425f5a22d96-runc.tPjTfq.mount: Deactivated successfully.
Feb 8 23:25:03.988476 kubelet[1392]: E0208 23:25:03.988422 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:04.080597 kubelet[1392]: E0208 23:25:04.080545 1392 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:25:04.514689 kubelet[1392]: W0208 23:25:04.514637 1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d7261f4_01fd_41b0_bd7a_d4d3964f029f.slice/cri-containerd-9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd.scope WatchSource:0}: task 9fd01ef92e1854eef3e327f27cee075539a32dd0c53a8f79ac239bd217d82cdd not found: not found
Feb 8 23:25:04.989154 kubelet[1392]: E0208 23:25:04.989105 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:05.990036 kubelet[1392]: E0208 23:25:05.989929 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:25:06.990339 kubelet[1392]: E0208 23:25:06.990286 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"