Feb 12 20:20:26.775952 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:20:26.775971 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:20:26.775979 kernel: BIOS-provided physical RAM map:
Feb 12 20:20:26.775985 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:20:26.775990 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:20:26.775995 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:20:26.776002 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 12 20:20:26.776008 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 12 20:20:26.776014 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:20:26.776020 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:20:26.776025 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 12 20:20:26.776031 kernel: NX (Execute Disable) protection: active
Feb 12 20:20:26.776036 kernel: SMBIOS 2.8 present.
Feb 12 20:20:26.776042 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 12 20:20:26.776050 kernel: Hypervisor detected: KVM
Feb 12 20:20:26.776056 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:20:26.776062 kernel: kvm-clock: cpu 0, msr 4dfaa001, primary cpu clock
Feb 12 20:20:26.776068 kernel: kvm-clock: using sched offset of 2233403195 cycles
Feb 12 20:20:26.776074 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:20:26.776087 kernel: tsc: Detected 2794.748 MHz processor
Feb 12 20:20:26.776094 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:20:26.776101 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:20:26.776107 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 12 20:20:26.776114 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:20:26.776121 kernel: Using GB pages for direct mapping
Feb 12 20:20:26.776127 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:20:26.776133 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 12 20:20:26.776139 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:20:26.776145 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:20:26.776161 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:20:26.776167 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 12 20:20:26.776173 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:20:26.776180 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:20:26.776186 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:20:26.776192 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 12 20:20:26.776198 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 12 20:20:26.776204 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 12 20:20:26.776210 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 12 20:20:26.776216 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 12 20:20:26.776223 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 12 20:20:26.776233 kernel: No NUMA configuration found
Feb 12 20:20:26.776239 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 12 20:20:26.776246 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 12 20:20:26.776252 kernel: Zone ranges:
Feb 12 20:20:26.776259 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:20:26.776265 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 12 20:20:26.776273 kernel: Normal empty
Feb 12 20:20:26.776279 kernel: Movable zone start for each node
Feb 12 20:20:26.776286 kernel: Early memory node ranges
Feb 12 20:20:26.776292 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:20:26.776299 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 12 20:20:26.776305 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 12 20:20:26.776312 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:20:26.776318 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:20:26.776324 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 12 20:20:26.776332 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:20:26.776338 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:20:26.776345 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:20:26.776351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:20:26.776358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:20:26.776364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:20:26.776371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:20:26.776377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:20:26.776384 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:20:26.776391 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 20:20:26.776397 kernel: TSC deadline timer available
Feb 12 20:20:26.776404 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 12 20:20:26.776410 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 12 20:20:26.776417 kernel: kvm-guest: setup PV sched yield
Feb 12 20:20:26.776423 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 12 20:20:26.776430 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:20:26.776436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:20:26.776443 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 12 20:20:26.776451 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 12 20:20:26.776457 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 12 20:20:26.776463 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 12 20:20:26.776469 kernel: kvm-guest: setup async PF for cpu 0
Feb 12 20:20:26.776476 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 12 20:20:26.776482 kernel: kvm-guest: PV spinlocks enabled
Feb 12 20:20:26.776489 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 20:20:26.776495 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 12 20:20:26.776501 kernel: Policy zone: DMA32
Feb 12 20:20:26.776509 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:20:26.776517 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:20:26.776523 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:20:26.776530 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:20:26.776537 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:20:26.776543 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 12 20:20:26.776550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 20:20:26.776556 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:20:26.776563 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:20:26.776571 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:20:26.776578 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:20:26.776584 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 20:20:26.776591 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:20:26.776597 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:20:26.776604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:20:26.776610 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 20:20:26.776617 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 12 20:20:26.776623 kernel: random: crng init done
Feb 12 20:20:26.776631 kernel: Console: colour VGA+ 80x25
Feb 12 20:20:26.776637 kernel: printk: console [ttyS0] enabled
Feb 12 20:20:26.776643 kernel: ACPI: Core revision 20210730
Feb 12 20:20:26.776650 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 20:20:26.776657 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:20:26.776663 kernel: x2apic enabled
Feb 12 20:20:26.776670 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:20:26.776676 kernel: kvm-guest: setup PV IPIs
Feb 12 20:20:26.776682 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:20:26.776690 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:20:26.776696 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 12 20:20:26.776703 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 12 20:20:26.776709 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 12 20:20:26.776716 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 12 20:20:26.776722 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:20:26.776729 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:20:26.776735 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:20:26.776742 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:20:26.776754 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 12 20:20:26.776760 kernel: RETBleed: Mitigation: untrained return thunk
Feb 12 20:20:26.776767 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 20:20:26.776775 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 20:20:26.776782 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 20:20:26.776789 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 20:20:26.776796 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 20:20:26.776803 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 20:20:26.776810 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 20:20:26.776817 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:20:26.776824 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:20:26.776831 kernel: LSM: Security Framework initializing
Feb 12 20:20:26.776838 kernel: SELinux: Initializing.
Feb 12 20:20:26.776845 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:20:26.776851 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:20:26.776858 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 12 20:20:26.776868 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 12 20:20:26.776877 kernel: ... version: 0
Feb 12 20:20:26.776886 kernel: ... bit width: 48
Feb 12 20:20:26.776901 kernel: ... generic registers: 6
Feb 12 20:20:26.776911 kernel: ... value mask: 0000ffffffffffff
Feb 12 20:20:26.776919 kernel: ... max period: 00007fffffffffff
Feb 12 20:20:26.776925 kernel: ... fixed-purpose events: 0
Feb 12 20:20:26.776932 kernel: ... event mask: 000000000000003f
Feb 12 20:20:26.776939 kernel: signal: max sigframe size: 1776
Feb 12 20:20:26.776948 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:20:26.776955 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:20:26.776962 kernel: x86: Booting SMP configuration:
Feb 12 20:20:26.776969 kernel: .... node #0, CPUs: #1
Feb 12 20:20:26.776975 kernel: kvm-clock: cpu 1, msr 4dfaa041, secondary cpu clock
Feb 12 20:20:26.776982 kernel: kvm-guest: setup async PF for cpu 1
Feb 12 20:20:26.776989 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 12 20:20:26.776996 kernel: #2
Feb 12 20:20:26.777003 kernel: kvm-clock: cpu 2, msr 4dfaa081, secondary cpu clock
Feb 12 20:20:26.777009 kernel: kvm-guest: setup async PF for cpu 2
Feb 12 20:20:26.777017 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 12 20:20:26.777024 kernel: #3
Feb 12 20:20:26.777031 kernel: kvm-clock: cpu 3, msr 4dfaa0c1, secondary cpu clock
Feb 12 20:20:26.777037 kernel: kvm-guest: setup async PF for cpu 3
Feb 12 20:20:26.777044 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 12 20:20:26.777051 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 20:20:26.777058 kernel: smpboot: Max logical packages: 1
Feb 12 20:20:26.777065 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 12 20:20:26.777071 kernel: devtmpfs: initialized
Feb 12 20:20:26.777085 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:20:26.777093 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:20:26.777100 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 20:20:26.777107 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:20:26.777114 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:20:26.777121 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:20:26.777127 kernel: audit: type=2000 audit(1707769226.799:1): state=initialized audit_enabled=0 res=1
Feb 12 20:20:26.777134 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:20:26.777141 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:20:26.777159 kernel: cpuidle: using governor menu
Feb 12 20:20:26.777166 kernel: ACPI: bus type PCI registered
Feb 12 20:20:26.777173 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:20:26.777181 kernel: dca service started, version 1.12.1
Feb 12 20:20:26.777187 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:20:26.777194 kernel: PCI: Using configuration type 1 for extended access
Feb 12 20:20:26.777201 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:20:26.777208 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:20:26.777215 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:20:26.777223 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:20:26.777229 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:20:26.777236 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:20:26.777243 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:20:26.777250 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:20:26.777256 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:20:26.777263 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:20:26.777270 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:20:26.777277 kernel: ACPI: Interpreter enabled
Feb 12 20:20:26.777283 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:20:26.777291 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:20:26.777298 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:20:26.777305 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:20:26.777312 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:20:26.777433 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:20:26.777446 kernel: acpiphp: Slot [3] registered
Feb 12 20:20:26.777453 kernel: acpiphp: Slot [4] registered
Feb 12 20:20:26.777460 kernel: acpiphp: Slot [5] registered
Feb 12 20:20:26.777468 kernel: acpiphp: Slot [6] registered
Feb 12 20:20:26.777475 kernel: acpiphp: Slot [7] registered
Feb 12 20:20:26.777482 kernel: acpiphp: Slot [8] registered
Feb 12 20:20:26.777488 kernel: acpiphp: Slot [9] registered
Feb 12 20:20:26.777495 kernel: acpiphp: Slot [10] registered
Feb 12 20:20:26.777502 kernel: acpiphp: Slot [11] registered
Feb 12 20:20:26.777508 kernel: acpiphp: Slot [12] registered
Feb 12 20:20:26.777515 kernel: acpiphp: Slot [13] registered
Feb 12 20:20:26.777522 kernel: acpiphp: Slot [14] registered
Feb 12 20:20:26.777530 kernel: acpiphp: Slot [15] registered
Feb 12 20:20:26.777537 kernel: acpiphp: Slot [16] registered
Feb 12 20:20:26.777543 kernel: acpiphp: Slot [17] registered
Feb 12 20:20:26.777550 kernel: acpiphp: Slot [18] registered
Feb 12 20:20:26.777557 kernel: acpiphp: Slot [19] registered
Feb 12 20:20:26.777563 kernel: acpiphp: Slot [20] registered
Feb 12 20:20:26.777570 kernel: acpiphp: Slot [21] registered
Feb 12 20:20:26.777577 kernel: acpiphp: Slot [22] registered
Feb 12 20:20:26.777583 kernel: acpiphp: Slot [23] registered
Feb 12 20:20:26.777590 kernel: acpiphp: Slot [24] registered
Feb 12 20:20:26.777598 kernel: acpiphp: Slot [25] registered
Feb 12 20:20:26.777605 kernel: acpiphp: Slot [26] registered
Feb 12 20:20:26.777611 kernel: acpiphp: Slot [27] registered
Feb 12 20:20:26.777618 kernel: acpiphp: Slot [28] registered
Feb 12 20:20:26.777625 kernel: acpiphp: Slot [29] registered
Feb 12 20:20:26.777632 kernel: acpiphp: Slot [30] registered
Feb 12 20:20:26.777638 kernel: acpiphp: Slot [31] registered
Feb 12 20:20:26.777645 kernel: PCI host bridge to bus 0000:00
Feb 12 20:20:26.777731 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:20:26.777808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:20:26.777875 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:20:26.777935 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 12 20:20:26.777996 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:20:26.778055 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:20:26.778162 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:20:26.778250 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:20:26.778348 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:20:26.778420 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 12 20:20:26.778489 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:20:26.778555 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:20:26.778623 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:20:26.778690 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:20:26.778784 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:20:26.778855 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:20:26.778923 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:20:26.779005 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 12 20:20:26.779074 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 12 20:20:26.779170 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 12 20:20:26.779279 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 12 20:20:26.779445 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:20:26.779554 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:20:26.779628 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 20:20:26.779701 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 12 20:20:26.779769 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 12 20:20:26.779850 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:20:26.779923 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:20:26.779993 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 12 20:20:26.780060 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 12 20:20:26.780163 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:20:26.780235 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 12 20:20:26.780304 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 12 20:20:26.780372 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 12 20:20:26.780443 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 12 20:20:26.780452 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:20:26.780460 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:20:26.780467 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:20:26.780473 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:20:26.780480 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:20:26.780487 kernel: iommu: Default domain type: Translated
Feb 12 20:20:26.780494 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:20:26.780560 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:20:26.780667 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:20:26.780740 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:20:26.780751 kernel: vgaarb: loaded
Feb 12 20:20:26.780758 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:20:26.780765 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:20:26.780772 kernel: PTP clock support registered
Feb 12 20:20:26.780779 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:20:26.780785 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:20:26.780795 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:20:26.780802 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 12 20:20:26.780809 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 20:20:26.780815 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 20:20:26.780822 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:20:26.780829 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:20:26.780836 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:20:26.780843 kernel: pnp: PnP ACPI init
Feb 12 20:20:26.780927 kernel: pnp 00:02: [dma 2]
Feb 12 20:20:26.780940 kernel: pnp: PnP ACPI: found 6 devices
Feb 12 20:20:26.780947 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:20:26.780954 kernel: NET: Registered PF_INET protocol family
Feb 12 20:20:26.780961 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:20:26.780968 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:20:26.780975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:20:26.780982 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:20:26.780989 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:20:26.780997 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:20:26.781004 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:20:26.781011 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:20:26.781018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:20:26.781025 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:20:26.781093 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:20:26.781166 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:20:26.781227 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:20:26.781287 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 12 20:20:26.781350 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:20:26.781440 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:20:26.781517 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:20:26.781585 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:20:26.781595 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:20:26.781602 kernel: Initialise system trusted keyrings
Feb 12 20:20:26.781609 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:20:26.781616 kernel: Key type asymmetric registered
Feb 12 20:20:26.781625 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:20:26.781632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:20:26.781639 kernel: io scheduler mq-deadline registered
Feb 12 20:20:26.781646 kernel: io scheduler kyber registered
Feb 12 20:20:26.781652 kernel: io scheduler bfq registered
Feb 12 20:20:26.781659 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:20:26.781667 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:20:26.781674 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 20:20:26.781681 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:20:26.781689 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:20:26.781696 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:20:26.781703 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:20:26.781710 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:20:26.781717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:20:26.781724 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:20:26.781801 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 12 20:20:26.781866 kernel: rtc_cmos 00:05: registered as rtc0
Feb 12 20:20:26.781932 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:20:26 UTC (1707769226)
Feb 12 20:20:26.781994 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 12 20:20:26.782003 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:20:26.782010 kernel: Segment Routing with IPv6
Feb 12 20:20:26.782017 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:20:26.782024 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:20:26.782030 kernel: Key type dns_resolver registered
Feb 12 20:20:26.782037 kernel: IPI shorthand broadcast: enabled
Feb 12 20:20:26.782044 kernel: sched_clock: Marking stable (373214610, 69774849)->(448573957, -5584498)
Feb 12 20:20:26.782053 kernel: registered taskstats version 1
Feb 12 20:20:26.782060 kernel: Loading compiled-in X.509 certificates
Feb 12 20:20:26.782067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:20:26.782074 kernel: Key type .fscrypt registered
Feb 12 20:20:26.782087 kernel: Key type fscrypt-provisioning registered
Feb 12 20:20:26.782094 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:20:26.782101 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:20:26.782108 kernel: ima: No architecture policies found
Feb 12 20:20:26.782115 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:20:26.782123 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:20:26.782130 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:20:26.782137 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:20:26.782144 kernel: Run /init as init process
Feb 12 20:20:26.782161 kernel: with arguments:
Feb 12 20:20:26.782167 kernel: /init
Feb 12 20:20:26.782174 kernel: with environment:
Feb 12 20:20:26.782190 kernel: HOME=/
Feb 12 20:20:26.782198 kernel: TERM=linux
Feb 12 20:20:26.782206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:20:26.782215 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:20:26.782224 systemd[1]: Detected virtualization kvm.
Feb 12 20:20:26.782232 systemd[1]: Detected architecture x86-64.
Feb 12 20:20:26.782240 systemd[1]: Running in initrd.
Feb 12 20:20:26.782247 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:20:26.782254 systemd[1]: Hostname set to .
Feb 12 20:20:26.782263 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:20:26.782271 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:20:26.782278 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:20:26.782285 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:20:26.782293 systemd[1]: Reached target paths.target.
Feb 12 20:20:26.782300 systemd[1]: Reached target slices.target.
Feb 12 20:20:26.782308 systemd[1]: Reached target swap.target.
Feb 12 20:20:26.782315 systemd[1]: Reached target timers.target.
Feb 12 20:20:26.782324 systemd[1]: Listening on iscsid.socket.
Feb 12 20:20:26.782331 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:20:26.782339 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:20:26.782346 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:20:26.782354 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:20:26.782361 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:20:26.782369 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:20:26.782376 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:20:26.782385 systemd[1]: Reached target sockets.target.
Feb 12 20:20:26.782392 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:20:26.782400 systemd[1]: Finished network-cleanup.service.
Feb 12 20:20:26.782407 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:20:26.782415 systemd[1]: Starting systemd-journald.service...
Feb 12 20:20:26.782422 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:20:26.782431 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:20:26.782439 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:20:26.782446 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:20:26.782454 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:20:26.782463 systemd-journald[198]: Journal started
Feb 12 20:20:26.782502 systemd-journald[198]: Runtime Journal (/run/log/journal/7095ac20479149de96f31ba9596e5fef) is 6.0M, max 48.5M, 42.5M free.
Feb 12 20:20:26.779535 systemd-modules-load[199]: Inserted module 'overlay'
Feb 12 20:20:26.809347 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:20:26.809367 kernel: Bridge firewalling registered
Feb 12 20:20:26.809380 systemd[1]: Started systemd-journald.service.
Feb 12 20:20:26.809399 kernel: audit: type=1130 audit(1707769226.806:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.800620 systemd-modules-load[199]: Inserted module 'br_netfilter'
Feb 12 20:20:26.804595 systemd-resolved[200]: Positive Trust Anchors:
Feb 12 20:20:26.815442 kernel: audit: type=1130 audit(1707769226.809:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.815456 kernel: audit: type=1130 audit(1707769226.813:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.804604 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:20:26.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.804639 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:20:26.823069 kernel: audit: type=1130 audit(1707769226.816:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.807255 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb 12 20:20:26.810130 systemd[1]: Started systemd-resolved.service.
Feb 12 20:20:26.813352 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:20:26.816322 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:20:26.819355 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:20:26.826791 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:20:26.829169 kernel: SCSI subsystem initialized
Feb 12 20:20:26.836922 kernel: audit: type=1130 audit(1707769226.833:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.833319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:20:26.841523 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:20:26.841544 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:20:26.841553 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:20:26.844167 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:20:26.847314 kernel: audit: type=1130 audit(1707769226.844:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.845166 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:20:26.848231 systemd-modules-load[199]: Inserted module 'dm_multipath'
Feb 12 20:20:26.848771 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:20:26.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.849712 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:20:26.852795 kernel: audit: type=1130 audit(1707769226.848:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.857146 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:20:26.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.860165 kernel: audit: type=1130 audit(1707769226.857:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.862276 dracut-cmdline[217]: dracut-dracut-053
Feb 12 20:20:26.863968 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:20:26.911167 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:20:26.921172 kernel: iscsi: registered transport (tcp)
Feb 12 20:20:26.939485 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:20:26.939513 kernel: QLogic iSCSI HBA Driver
Feb 12 20:20:26.959020 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:20:26.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:26.961938 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:20:26.962908 kernel: audit: type=1130 audit(1707769226.959:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:27.006174 kernel: raid6: avx2x4 gen() 30870 MB/s
Feb 12 20:20:27.023163 kernel: raid6: avx2x4 xor() 8596 MB/s
Feb 12 20:20:27.040165 kernel: raid6: avx2x2 gen() 32446 MB/s
Feb 12 20:20:27.057162 kernel: raid6: avx2x2 xor() 19204 MB/s
Feb 12 20:20:27.074162 kernel: raid6: avx2x1 gen() 25854 MB/s
Feb 12 20:20:27.091165 kernel: raid6: avx2x1 xor() 15332 MB/s
Feb 12 20:20:27.108178 kernel: raid6: sse2x4 gen() 14797 MB/s
Feb 12 20:20:27.125170 kernel: raid6: sse2x4 xor() 7419 MB/s
Feb 12 20:20:27.142162 kernel: raid6: sse2x2 gen() 16308 MB/s
Feb 12 20:20:27.159169 kernel: raid6: sse2x2 xor() 9834 MB/s
Feb 12 20:20:27.176164 kernel: raid6: sse2x1 gen() 12486 MB/s
Feb 12 20:20:27.193172 kernel: raid6: sse2x1 xor() 7798 MB/s
Feb 12 20:20:27.193191 kernel: raid6: using algorithm avx2x2 gen() 32446 MB/s
Feb 12 20:20:27.193201 kernel: raid6: .... xor() 19204 MB/s, rmw enabled
Feb 12 20:20:27.194163 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 20:20:27.205167 kernel: xor: automatically using best checksumming function   avx
Feb 12 20:20:27.291177 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 20:20:27.298848 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:20:27.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:27.299000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:20:27.299000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:20:27.300347 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:20:27.311030 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb 12 20:20:27.314751 systemd[1]: Started systemd-udevd.service.
Feb 12 20:20:27.315974 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:20:27.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:27.325041 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Feb 12 20:20:27.349085 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:20:27.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:27.350211 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:20:27.382061 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:20:27.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:27.407592 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 20:20:27.412638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:20:27.412664 kernel: GPT:9289727 != 19775487
Feb 12 20:20:27.412676 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:20:27.412689 kernel: GPT:9289727 != 19775487
Feb 12 20:20:27.412700 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:20:27.412712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:20:27.418169 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:20:27.428941 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 20:20:27.428969 kernel: AES CTR mode by8 optimization enabled
Feb 12 20:20:27.443575 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:20:27.474115 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Feb 12 20:20:27.474917 kernel: libata version 3.00 loaded.
Feb 12 20:20:27.474936 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 12 20:20:27.475094 kernel: scsi host0: ata_piix
Feb 12 20:20:27.475241 kernel: scsi host1: ata_piix
Feb 12 20:20:27.475368 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 12 20:20:27.475386 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 12 20:20:27.474107 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:20:27.480643 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:20:27.489205 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:20:27.493968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:20:27.495554 systemd[1]: Starting disk-uuid.service...
Feb 12 20:20:27.502409 disk-uuid[534]: Primary Header is updated.
Feb 12 20:20:27.502409 disk-uuid[534]: Secondary Entries is updated.
Feb 12 20:20:27.502409 disk-uuid[534]: Secondary Header is updated.
Feb 12 20:20:27.504981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:20:27.507168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:20:27.510210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:20:27.616189 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 12 20:20:27.616247 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 12 20:20:27.644176 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 12 20:20:27.644323 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 20:20:27.661239 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 12 20:20:28.508863 disk-uuid[535]: The operation has completed successfully.
Feb 12 20:20:28.510628 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:20:28.532266 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:20:28.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.532336 systemd[1]: Finished disk-uuid.service.
Feb 12 20:20:28.536205 systemd[1]: Starting verity-setup.service...
Feb 12 20:20:28.547167 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 12 20:20:28.564830 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:20:28.566788 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:20:28.568752 systemd[1]: Finished verity-setup.service.
Feb 12 20:20:28.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.621179 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:20:28.621569 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:20:28.622307 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:20:28.623084 systemd[1]: Starting ignition-setup.service...
Feb 12 20:20:28.625000 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:20:28.630452 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:20:28.630480 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:20:28.630493 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:20:28.637102 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:20:28.678776 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:20:28.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.679000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:20:28.680572 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:20:28.696246 systemd[1]: Finished ignition-setup.service.
Feb 12 20:20:28.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.697521 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:20:28.700500 systemd-networkd[710]: lo: Link UP
Feb 12 20:20:28.700509 systemd-networkd[710]: lo: Gained carrier
Feb 12 20:20:28.700981 systemd-networkd[710]: Enumeration completed
Feb 12 20:20:28.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.701236 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:20:28.701871 systemd[1]: Started systemd-networkd.service.
Feb 12 20:20:28.702725 systemd-networkd[710]: eth0: Link UP
Feb 12 20:20:28.702730 systemd-networkd[710]: eth0: Gained carrier
Feb 12 20:20:28.703015 systemd[1]: Reached target network.target.
Feb 12 20:20:28.704495 systemd[1]: Starting iscsiuio.service...
Feb 12 20:20:28.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.708447 systemd[1]: Started iscsiuio.service.
Feb 12 20:20:28.710089 systemd[1]: Starting iscsid.service...
Feb 12 20:20:28.713529 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:20:28.713529 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 20:20:28.713529 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:20:28.713529 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:20:28.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.721128 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:20:28.721128 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:20:28.715057 systemd[1]: Started iscsid.service.
Feb 12 20:20:28.716264 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 20:20:28.719821 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:20:28.729275 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:20:28.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.730108 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:20:28.731252 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:20:28.732605 systemd[1]: Reached target remote-fs.target.
Feb 12 20:20:28.733744 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:20:28.739728 ignition[713]: Ignition 2.14.0
Feb 12 20:20:28.739737 ignition[713]: Stage: fetch-offline
Feb 12 20:20:28.740111 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:20:28.739799 ignition[713]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:20:28.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.739807 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:20:28.739887 ignition[713]: parsed url from cmdline: ""
Feb 12 20:20:28.739889 ignition[713]: no config URL provided
Feb 12 20:20:28.739893 ignition[713]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:20:28.739899 ignition[713]: no config at "/usr/lib/ignition/user.ign"
Feb 12 20:20:28.739914 ignition[713]: op(1): [started] loading QEMU firmware config module
Feb 12 20:20:28.739918 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 20:20:28.743510 ignition[713]: op(1): [finished] loading QEMU firmware config module
Feb 12 20:20:28.756117 ignition[713]: parsing config with SHA512: d50ba4115cb8ecccaf22b00d7576aa5cf0c5850fb0d169230937d7fd9499984bf29c18918c824d82e364f88fe1df2f040321fb5034370108f29e1e2041be15ac
Feb 12 20:20:28.771596 unknown[713]: fetched base config from "system"
Feb 12 20:20:28.771610 unknown[713]: fetched user config from "qemu"
Feb 12 20:20:28.772988 ignition[713]: fetch-offline: fetch-offline passed
Feb 12 20:20:28.773081 ignition[713]: Ignition finished successfully
Feb 12 20:20:28.773952 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:20:28.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.774784 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 20:20:28.775446 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:20:28.783477 ignition[738]: Ignition 2.14.0
Feb 12 20:20:28.783486 ignition[738]: Stage: kargs
Feb 12 20:20:28.783574 ignition[738]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:20:28.783582 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:20:28.784439 ignition[738]: kargs: kargs passed
Feb 12 20:20:28.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.785309 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:20:28.784470 ignition[738]: Ignition finished successfully
Feb 12 20:20:28.787005 systemd[1]: Starting ignition-disks.service...
Feb 12 20:20:28.792813 ignition[744]: Ignition 2.14.0
Feb 12 20:20:28.792821 ignition[744]: Stage: disks
Feb 12 20:20:28.792909 ignition[744]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:20:28.792917 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:20:28.793783 ignition[744]: disks: disks passed
Feb 12 20:20:28.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.794628 systemd[1]: Finished ignition-disks.service.
Feb 12 20:20:28.793814 ignition[744]: Ignition finished successfully
Feb 12 20:20:28.795746 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:20:28.796708 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:20:28.797278 systemd[1]: Reached target local-fs.target.
Feb 12 20:20:28.798293 systemd[1]: Reached target sysinit.target.
Feb 12 20:20:28.799309 systemd[1]: Reached target basic.target.
Feb 12 20:20:28.800157 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:20:28.809348 systemd-fsck[752]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 20:20:28.814357 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:20:28.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.815705 systemd[1]: Mounting sysroot.mount...
Feb 12 20:20:28.820161 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:20:28.820569 systemd[1]: Mounted sysroot.mount.
Feb 12 20:20:28.821101 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:20:28.822783 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:20:28.823518 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:20:28.823546 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:20:28.823563 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:20:28.824875 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:20:28.826240 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:20:28.829722 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:20:28.832428 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:20:28.834635 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:20:28.836818 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:20:28.857289 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:20:28.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.858361 systemd[1]: Starting ignition-mount.service...
Feb 12 20:20:28.859360 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:20:28.862775 bash[803]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 20:20:28.869183 ignition[804]: INFO : Ignition 2.14.0
Feb 12 20:20:28.869901 ignition[804]: INFO : Stage: mount
Feb 12 20:20:28.870368 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:20:28.870368 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:20:28.872280 ignition[804]: INFO : mount: mount passed
Feb 12 20:20:28.872795 ignition[804]: INFO : Ignition finished successfully
Feb 12 20:20:28.873440 systemd[1]: Finished ignition-mount.service.
Feb 12 20:20:28.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:28.878242 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:20:28.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:29.573701 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:20:29.580581 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Feb 12 20:20:29.580607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:20:29.580616 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:20:29.581638 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:20:29.584168 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:20:29.584983 systemd[1]: Starting ignition-files.service...
Feb 12 20:20:29.596894 ignition[834]: INFO : Ignition 2.14.0
Feb 12 20:20:29.596894 ignition[834]: INFO : Stage: files
Feb 12 20:20:29.598019 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:20:29.598019 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:20:29.599503 ignition[834]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 20:20:29.600667 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 20:20:29.600667 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:20:29.603105 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:20:29.604170 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 20:20:29.605113 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:20:29.605113 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 20:20:29.605113 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 20:20:29.604515 unknown[834]: wrote ssh authorized keys file for user: core
Feb 12 20:20:30.071742 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 20:20:30.229171 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 20:20:30.229171 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 20:20:30.232616 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 20:20:30.232616 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 20:20:30.550311 systemd-networkd[710]: eth0: Gained IPv6LL
Feb 12 20:20:30.655772 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 20:20:30.737947 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 20:20:30.740096 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 20:20:30.740096 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:20:30.740096 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 12 20:20:30.803700 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 20:20:30.985178 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 12 20:20:30.987221 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:20:30.987221 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:20:30.987221 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 12 20:20:31.040245 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 20:20:31.393975 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 12 20:20:31.396246 ignition[834]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 20:20:31.426638 kernel: kauditd_printk_skb: 24 callbacks suppressed
Feb 12 20:20:31.426665 kernel: audit: type=1130 audit(1707769231.417:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:31.426681 kernel: audit: type=1130 audit(1707769231.426:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:31.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:31.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:20:31.426782 ignition[834]: INFO : files: files passed
Feb 12 20:20:31.426782 ignition[834]: INFO : Ignition finished
successfully Feb 12 20:20:31.455395 kernel: audit: type=1131 audit(1707769231.426:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.455422 kernel: audit: type=1130 audit(1707769231.431:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.455437 kernel: audit: type=1130 audit(1707769231.448:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.455450 kernel: audit: type=1131 audit(1707769231.448:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.416497 systemd[1]: Finished ignition-files.service. 
Feb 12 20:20:31.418771 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:20:31.457937 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 20:20:31.422491 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:20:31.460483 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:20:31.423189 systemd[1]: Starting ignition-quench.service... Feb 12 20:20:31.425187 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:20:31.425277 systemd[1]: Finished ignition-quench.service. Feb 12 20:20:31.426793 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:20:31.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.431483 systemd[1]: Reached target ignition-complete.target. Feb 12 20:20:31.436140 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:20:31.469363 kernel: audit: type=1130 audit(1707769231.465:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.447031 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:20:31.447101 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:20:31.448645 systemd[1]: Reached target initrd-fs.target. Feb 12 20:20:31.454047 systemd[1]: Reached target initrd.target. Feb 12 20:20:31.455432 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:20:31.456243 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:20:31.464258 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 12 20:20:31.466198 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:20:31.480335 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:20:31.480758 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:20:31.481854 systemd[1]: Stopped target timers.target. Feb 12 20:20:31.483105 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:20:31.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.483198 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:20:31.487975 kernel: audit: type=1131 audit(1707769231.484:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.484398 systemd[1]: Stopped target initrd.target. Feb 12 20:20:31.486865 systemd[1]: Stopped target basic.target. Feb 12 20:20:31.488463 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:20:31.489402 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:20:31.490748 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:20:31.491196 systemd[1]: Stopped target remote-fs.target. Feb 12 20:20:31.492815 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:20:31.494419 systemd[1]: Stopped target sysinit.target. Feb 12 20:20:31.494830 systemd[1]: Stopped target local-fs.target. Feb 12 20:20:31.496234 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:20:31.497254 systemd[1]: Stopped target swap.target. Feb 12 20:20:31.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:20:31.502168 kernel: audit: type=1131 audit(1707769231.498:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.498414 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:20:31.498532 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:20:31.506492 kernel: audit: type=1131 audit(1707769231.503:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.499476 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:20:31.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.502442 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:20:31.502523 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:20:31.503789 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:20:31.503863 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:20:31.506863 systemd[1]: Stopped target paths.target. Feb 12 20:20:31.507983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:20:31.511181 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:20:31.511645 systemd[1]: Stopped target slices.target. Feb 12 20:20:31.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:20:31.512936 systemd[1]: Stopped target sockets.target. Feb 12 20:20:31.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.514185 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:20:31.518717 iscsid[717]: iscsid shutting down. Feb 12 20:20:31.514266 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:20:31.515622 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:20:31.515733 systemd[1]: Stopped ignition-files.service. Feb 12 20:20:31.516783 systemd[1]: Stopping ignition-mount.service... Feb 12 20:20:31.517671 systemd[1]: Stopping iscsid.service... Feb 12 20:20:31.523354 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:20:31.524588 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:20:31.525362 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:20:31.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.527504 ignition[874]: INFO : Ignition 2.14.0 Feb 12 20:20:31.527504 ignition[874]: INFO : Stage: umount Feb 12 20:20:31.527504 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:20:31.527504 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:20:31.527504 ignition[874]: INFO : umount: umount passed Feb 12 20:20:31.527504 ignition[874]: INFO : Ignition finished successfully Feb 12 20:20:31.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:20:31.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.526277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:20:31.526393 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:20:31.528805 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:20:31.528878 systemd[1]: Stopped iscsid.service. Feb 12 20:20:31.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:20:31.529683 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:20:31.529741 systemd[1]: Stopped ignition-mount.service. Feb 12 20:20:31.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.531000 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:20:31.531061 systemd[1]: Closed iscsid.socket. Feb 12 20:20:31.532195 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:20:31.532225 systemd[1]: Stopped ignition-disks.service. Feb 12 20:20:31.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.533313 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:20:31.533340 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:20:31.534467 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:20:31.534496 systemd[1]: Stopped ignition-setup.service. Feb 12 20:20:31.535142 systemd[1]: Stopping iscsiuio.service... Feb 12 20:20:31.537362 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:20:31.537717 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:20:31.537774 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:20:31.538854 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:20:31.538916 systemd[1]: Stopped iscsiuio.service. Feb 12 20:20:31.539972 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:20:31.540030 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:20:31.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:20:31.541646 systemd[1]: Stopped target network.target. Feb 12 20:20:31.542667 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:20:31.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.542694 systemd[1]: Closed iscsiuio.socket. Feb 12 20:20:31.543526 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:20:31.558000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:20:31.543555 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:20:31.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.544774 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:20:31.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.545845 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:20:31.552782 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:20:31.552846 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:20:31.553187 systemd-networkd[710]: eth0: DHCPv6 lease lost Feb 12 20:20:31.565000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:20:31.554937 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:20:31.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 20:20:31.555007 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:20:31.556811 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:20:31.556832 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:20:31.558227 systemd[1]: Stopping network-cleanup.service... Feb 12 20:20:31.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.559112 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:20:31.559156 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:20:31.559906 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:20:31.559935 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:20:31.560996 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:20:31.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.561024 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:20:31.561675 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:20:31.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.564220 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:20:31.566491 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:20:31.566557 systemd[1]: Stopped network-cleanup.service. 
Feb 12 20:20:31.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.569722 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:20:31.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.569810 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:20:31.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.571760 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:20:31.571797 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:20:31.572937 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:20:31.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:31.572969 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:20:31.574093 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:20:31.574125 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:20:31.575103 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:20:31.575132 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:20:31.576282 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 12 20:20:31.576311 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:20:31.578490 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:20:31.579359 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:20:31.579426 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:20:31.581159 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:20:31.581194 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:20:31.581854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:20:31.581899 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:20:31.584212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:20:31.584659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:20:31.584728 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:20:31.585571 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:20:31.587382 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:20:31.601397 systemd[1]: Switching root. Feb 12 20:20:31.621373 systemd-journald[198]: Journal stopped Feb 12 20:20:34.252160 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 12 20:20:34.252211 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:20:34.252230 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 20:20:34.252243 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:20:34.252255 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:20:34.252268 kernel: SELinux: policy capability open_perms=1 Feb 12 20:20:34.252281 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:20:34.252293 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:20:34.252316 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:20:34.252329 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:20:34.252342 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:20:34.252354 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:20:34.252367 systemd[1]: Successfully loaded SELinux policy in 35.368ms. Feb 12 20:20:34.252393 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.207ms. Feb 12 20:20:34.252408 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:20:34.252422 systemd[1]: Detected virtualization kvm. Feb 12 20:20:34.252436 systemd[1]: Detected architecture x86-64. Feb 12 20:20:34.252454 systemd[1]: Detected first boot. Feb 12 20:20:34.252468 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:20:34.252484 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:20:34.252497 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:20:34.252511 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 20:20:34.252528 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:20:34.252543 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:20:34.252562 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:20:34.252575 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:20:34.252589 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:20:34.252602 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:20:34.252618 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:20:34.252631 systemd[1]: Created slice system-getty.slice. Feb 12 20:20:34.252644 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:20:34.252658 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:20:34.252672 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:20:34.252691 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:20:34.252706 systemd[1]: Created slice user.slice. Feb 12 20:20:34.252719 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:20:34.252733 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:20:34.252746 systemd[1]: Set up automount boot.automount. Feb 12 20:20:34.252760 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:20:34.252773 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:20:34.252786 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:20:34.252806 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:20:34.252819 systemd[1]: Reached target integritysetup.target. Feb 12 20:20:34.252836 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 12 20:20:34.252849 systemd[1]: Reached target remote-fs.target. Feb 12 20:20:34.252862 systemd[1]: Reached target slices.target. Feb 12 20:20:34.252876 systemd[1]: Reached target swap.target. Feb 12 20:20:34.252899 systemd[1]: Reached target torcx.target. Feb 12 20:20:34.252913 systemd[1]: Reached target veritysetup.target. Feb 12 20:20:34.252927 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:20:34.252945 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:20:34.252958 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:20:34.252972 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:20:34.252986 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:20:34.253000 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:20:34.253015 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:20:34.253028 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:20:34.253041 systemd[1]: Mounting media.mount... Feb 12 20:20:34.253055 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:20:34.253074 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:20:34.253087 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:20:34.253101 systemd[1]: Mounting tmp.mount... Feb 12 20:20:34.253114 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:20:34.253128 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:20:34.253141 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:20:34.253170 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:20:34.253184 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:20:34.253197 systemd[1]: Starting modprobe@drm.service... Feb 12 20:20:34.253216 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:20:34.253230 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:20:34.253244 systemd[1]: Starting modprobe@loop.service... 
Feb 12 20:20:34.253258 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:20:34.253271 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:20:34.253284 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:20:34.253297 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:20:34.253310 kernel: loop: module loaded Feb 12 20:20:34.253324 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:20:34.253345 systemd[1]: Stopped systemd-journald.service. Feb 12 20:20:34.253359 systemd[1]: Starting systemd-journald.service... Feb 12 20:20:34.253371 kernel: fuse: init (API version 7.34) Feb 12 20:20:34.253384 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:20:34.253397 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:20:34.253410 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:20:34.253422 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:20:34.253436 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:20:34.253450 systemd[1]: Stopped verity-setup.service. Feb 12 20:20:34.253469 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:20:34.253482 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:20:34.253495 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:20:34.253508 systemd[1]: Mounted media.mount. Feb 12 20:20:34.253522 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:20:34.253535 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:20:34.253548 systemd[1]: Mounted tmp.mount. Feb 12 20:20:34.253564 systemd-journald[984]: Journal started Feb 12 20:20:34.253614 systemd-journald[984]: Runtime Journal (/run/log/journal/7095ac20479149de96f31ba9596e5fef) is 6.0M, max 48.5M, 42.5M free. 
Feb 12 20:20:31.675000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:20:32.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:20:32.034000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:20:32.034000 audit: BPF prog-id=10 op=LOAD
Feb 12 20:20:32.034000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 20:20:32.034000 audit: BPF prog-id=11 op=LOAD
Feb 12 20:20:32.034000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 20:20:32.064000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 20:20:32.064000 audit[907]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018f8dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:20:32.064000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:20:32.065000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 20:20:32.065000 audit[907]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018f9b5 a2=1ed a3=0 items=2 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:20:32.065000 audit: CWD cwd="/"
Feb 12 20:20:32.065000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:32.065000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:32.065000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:20:34.144000 audit: BPF prog-id=12 op=LOAD
Feb 12 20:20:34.144000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:20:34.145000 audit: BPF prog-id=13 op=LOAD
Feb 12 20:20:34.145000 audit: BPF prog-id=14 op=LOAD
Feb 12 20:20:34.145000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:20:34.145000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:20:34.145000 audit: BPF prog-id=15 op=LOAD
Feb 12 20:20:34.145000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 20:20:34.146000 audit: BPF prog-id=16 op=LOAD
Feb 12 20:20:34.146000 audit: BPF prog-id=17 op=LOAD
Feb 12 20:20:34.146000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 20:20:34.146000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 20:20:34.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.160000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 20:20:34.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.228000 audit: BPF prog-id=18 op=LOAD
Feb 12 20:20:34.229000 audit: BPF prog-id=19 op=LOAD
Feb 12 20:20:34.229000 audit: BPF prog-id=20 op=LOAD
Feb 12 20:20:34.229000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 20:20:34.229000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 20:20:34.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.250000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:20:34.250000 audit[984]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd2e4eda70 a2=4000 a3=7ffd2e4edb0c items=0 ppid=1 pid=984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:20:34.250000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:20:34.143754 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:20:32.063557 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:20:34.255414 systemd[1]: Started systemd-journald.service.
Feb 12 20:20:34.143764 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 20:20:32.063709 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:20:34.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.146770 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 20:20:32.063724 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:20:32.063750 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 20:20:32.063758 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 20:20:32.063781 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 20:20:32.063792 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 20:20:32.063977 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 20:20:32.064006 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:20:32.064016 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:20:32.064252 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 20:20:32.064281 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 20:20:34.256520 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:20:32.064295 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 20:20:32.064307 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 20:20:32.064320 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 20:20:32.064332 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 20:20:33.896175 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:33Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:20:34.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:33.896643 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:33Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:20:33.896737 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:33Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:20:33.896882 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:33Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:20:33.896940 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:33Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 20:20:33.896990 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-12T20:20:33Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 20:20:34.257586 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:20:34.257757 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:20:34.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.258669 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:20:34.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.259439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:20:34.259615 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:20:34.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.260385 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:20:34.260552 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:20:34.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.261400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:20:34.261568 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:20:34.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.262365 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:20:34.262534 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:20:34.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.263367 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:20:34.263532 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:20:34.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.264424 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:20:34.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.265295 systemd[1]: Finished systemd-network-generator.service.
Feb 12 20:20:34.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.266216 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 20:20:34.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.267247 systemd[1]: Reached target network-pre.target.
Feb 12 20:20:34.268888 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 20:20:34.270441 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 20:20:34.272142 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 20:20:34.273455 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 20:20:34.274829 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 20:20:34.275467 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 20:20:34.276163 systemd[1]: Starting systemd-random-seed.service...
Feb 12 20:20:34.276739 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:20:34.277477 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:20:34.278816 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:20:34.283143 systemd-journald[984]: Time spent on flushing to /var/log/journal/7095ac20479149de96f31ba9596e5fef is 16.241ms for 1110 entries.
Feb 12 20:20:34.283143 systemd-journald[984]: System Journal (/var/log/journal/7095ac20479149de96f31ba9596e5fef) is 8.0M, max 195.6M, 187.6M free.
Feb 12 20:20:34.313695 systemd-journald[984]: Received client request to flush runtime journal.
Feb 12 20:20:34.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.280704 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:20:34.282838 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:20:34.314266 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 20:20:34.284766 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:20:34.285576 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:20:34.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.290456 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:20:34.292032 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:20:34.293365 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:20:34.301129 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:20:34.302640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:20:34.314442 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:20:34.320001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:20:34.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.747139 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:20:34.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.748000 audit: BPF prog-id=21 op=LOAD
Feb 12 20:20:34.748000 audit: BPF prog-id=22 op=LOAD
Feb 12 20:20:34.748000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:20:34.748000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:20:34.749030 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:20:34.763661 systemd-udevd[1015]: Using default interface naming scheme 'v252'.
Feb 12 20:20:34.775466 systemd[1]: Started systemd-udevd.service.
Feb 12 20:20:34.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.777000 audit: BPF prog-id=23 op=LOAD
Feb 12 20:20:34.777771 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:20:34.783000 audit: BPF prog-id=24 op=LOAD
Feb 12 20:20:34.783000 audit: BPF prog-id=25 op=LOAD
Feb 12 20:20:34.783000 audit: BPF prog-id=26 op=LOAD
Feb 12 20:20:34.784574 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:20:34.809262 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:20:34.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.810059 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 20:20:34.831782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:20:34.843180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 20:20:34.848365 systemd-networkd[1023]: lo: Link UP
Feb 12 20:20:34.848375 systemd-networkd[1023]: lo: Gained carrier
Feb 12 20:20:34.848705 systemd-networkd[1023]: Enumeration completed
Feb 12 20:20:34.848804 systemd[1]: Started systemd-networkd.service.
Feb 12 20:20:34.848805 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:20:34.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:20:34.850191 kernel: ACPI: button: Power Button [PWRF]
Feb 12 20:20:34.850794 systemd-networkd[1023]: eth0: Link UP
Feb 12 20:20:34.850801 systemd-networkd[1023]: eth0: Gained carrier
Feb 12 20:20:34.864335 systemd-networkd[1023]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 20:20:34.868000 audit[1045]: AVC avc: denied { confidentiality } for pid=1045 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 20:20:34.868000 audit[1045]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d26c3263f0 a1=32194 a2=7fa81f66abc5 a3=5 items=108 ppid=1015 pid=1045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:20:34.868000 audit: CWD cwd="/"
Feb 12 20:20:34.868000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=1 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=2 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=3 name=(null) inode=11182 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=4 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=5 name=(null) inode=11183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=6 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=7 name=(null) inode=11184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=8 name=(null) inode=11184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=9 name=(null) inode=11185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=10 name=(null) inode=11184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=11 name=(null) inode=11186 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=12 name=(null) inode=11184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=13 name=(null) inode=11187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=14 name=(null) inode=11184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=15 name=(null) inode=11188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=16 name=(null) inode=11184 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=17 name=(null) inode=11189 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=18 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=19 name=(null) inode=11190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=20 name=(null) inode=11190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=21 name=(null) inode=11191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=22 name=(null) inode=11190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=23 name=(null) inode=11192 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=24 name=(null) inode=11190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=25 name=(null) inode=11193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=26 name=(null) inode=11190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=27 name=(null) inode=11194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=28 name=(null) inode=11190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=29 name=(null) inode=11195 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=30 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=31 name=(null) inode=11196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=32 name=(null) inode=11196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=33 name=(null) inode=11197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=34 name=(null) inode=11196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=35 name=(null) inode=11198 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=36 name=(null) inode=11196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=37 name=(null) inode=11199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=38 name=(null) inode=11196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=39 name=(null) inode=11200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=40 name=(null) inode=11196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=41 name=(null) inode=11201 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=42 name=(null) inode=11181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=43 name=(null) inode=11202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=44 name=(null) inode=11202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=45 name=(null) inode=11203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=46 name=(null) inode=11202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=47 name=(null) inode=11204 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=48 name=(null) inode=11202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=49 name=(null) inode=11205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:20:34.868000 audit: PATH item=50 name=(null) inode=11202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=51 name=(null) inode=11206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=52 name=(null) inode=11202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=53 name=(null) inode=11207 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=55 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=56 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=57 name=(null) inode=11209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=58 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=59 name=(null) inode=11210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=60 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=61 name=(null) inode=11211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=62 name=(null) inode=11211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=63 name=(null) inode=11212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=64 name=(null) inode=11211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=65 name=(null) inode=11213 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=66 name=(null) inode=11211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=67 name=(null) inode=11214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=68 name=(null) inode=11211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=69 name=(null) inode=11215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=70 name=(null) inode=11211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=71 name=(null) inode=11216 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=72 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=73 name=(null) inode=11217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=74 name=(null) inode=11217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=75 name=(null) inode=11218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=76 name=(null) inode=11217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=77 name=(null) inode=11219 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:20:34.868000 audit: PATH item=78 name=(null) inode=11217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=79 name=(null) inode=11220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=80 name=(null) inode=11217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=81 name=(null) inode=11221 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.891187 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:20:34.868000 audit: PATH item=82 name=(null) inode=11217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=83 name=(null) inode=11222 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=84 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=85 name=(null) inode=11223 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=86 name=(null) inode=11223 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=87 name=(null) inode=11224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=88 name=(null) inode=11223 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=89 name=(null) inode=11225 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=90 name=(null) inode=11223 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=91 name=(null) inode=11226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=92 name=(null) inode=11223 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=93 name=(null) inode=11227 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=94 name=(null) inode=11223 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=95 name=(null) inode=11228 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=96 name=(null) inode=11208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=97 name=(null) inode=11229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=98 name=(null) inode=11229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=99 name=(null) inode=11230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=100 name=(null) inode=11229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=101 name=(null) inode=11231 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=102 name=(null) inode=11229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=103 name=(null) inode=11232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=104 name=(null) inode=11229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 
audit: PATH item=105 name=(null) inode=11233 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=106 name=(null) inode=11229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PATH item=107 name=(null) inode=11234 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:20:34.868000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:20:34.906171 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:20:34.936181 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:20:34.946186 kernel: kvm: Nested Virtualization enabled Feb 12 20:20:34.946326 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:20:34.946373 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:20:34.946436 kernel: SVM: Virtual GIF supported Feb 12 20:20:34.964175 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:20:34.983500 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:20:34.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:34.985211 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:20:34.992083 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:20:35.014238 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:20:35.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:20:35.015047 systemd[1]: Reached target cryptsetup.target. Feb 12 20:20:35.016724 systemd[1]: Starting lvm2-activation.service... Feb 12 20:20:35.019852 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:20:35.044417 systemd[1]: Finished lvm2-activation.service. Feb 12 20:20:35.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.045218 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:20:35.045843 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:20:35.045874 systemd[1]: Reached target local-fs.target. Feb 12 20:20:35.046432 systemd[1]: Reached target machines.target. Feb 12 20:20:35.048163 systemd[1]: Starting ldconfig.service... Feb 12 20:20:35.048836 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:20:35.048900 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:20:35.049729 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:20:35.051052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:20:35.052641 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:20:35.053385 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:20:35.053427 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:20:35.054467 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 12 20:20:35.055452 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl) Feb 12 20:20:35.056309 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:20:35.061907 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:20:35.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.066302 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:20:35.066786 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:20:35.067894 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:20:35.110384 systemd-fsck[1062]: fsck.fat 4.2 (2021-01-31) Feb 12 20:20:35.110384 systemd-fsck[1062]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:20:35.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.111984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:20:35.114615 systemd[1]: Mounting boot.mount... Feb 12 20:20:35.211523 systemd[1]: Mounted boot.mount. Feb 12 20:20:35.217496 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:20:35.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:20:35.224822 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:20:35.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.247346 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:20:35.273795 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:20:35.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.274771 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:20:35.276064 systemd[1]: Starting audit-rules.service... Feb 12 20:20:35.278073 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:20:35.279693 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:20:35.286000 audit: BPF prog-id=27 op=LOAD Feb 12 20:20:35.286955 systemd[1]: Starting systemd-resolved.service... Feb 12 20:20:35.289000 audit: BPF prog-id=28 op=LOAD Feb 12 20:20:35.289838 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:20:35.291513 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:20:35.292603 systemd[1]: Finished ldconfig.service. Feb 12 20:20:35.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.293432 systemd[1]: Finished clean-ca-certificates.service. 
Feb 12 20:20:35.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.294335 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:20:35.296000 audit[1081]: SYSTEM_BOOT pid=1081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.299735 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:20:35.307127 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:20:35.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:20:35.309462 systemd[1]: Starting systemd-update-done.service... 
Feb 12 20:20:35.315320 augenrules[1085]: No rules Feb 12 20:20:35.315000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:20:35.315000 audit[1085]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe8818f0e0 a2=420 a3=0 items=0 ppid=1065 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:20:35.315000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:20:35.316255 systemd[1]: Finished audit-rules.service. Feb 12 20:20:35.318001 systemd[1]: Finished systemd-update-done.service. Feb 12 20:20:35.336330 systemd-resolved[1075]: Positive Trust Anchors: Feb 12 20:20:35.336343 systemd-resolved[1075]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:20:35.336369 systemd-resolved[1075]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:20:35.338844 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:20:35.339940 systemd[1]: Reached target time-set.target. Feb 12 20:20:36.000772 systemd-timesyncd[1078]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:20:36.000829 systemd-timesyncd[1078]: Initial clock synchronization to Mon 2024-02-12 20:20:36.000665 UTC. Feb 12 20:20:36.004558 systemd-resolved[1075]: Defaulting to hostname 'linux'. 
Feb 12 20:20:36.005959 systemd[1]: Started systemd-resolved.service. Feb 12 20:20:36.006617 systemd[1]: Reached target network.target. Feb 12 20:20:36.007183 systemd[1]: Reached target nss-lookup.target. Feb 12 20:20:36.007779 systemd[1]: Reached target sysinit.target. Feb 12 20:20:36.008436 systemd[1]: Started motdgen.path. Feb 12 20:20:36.009018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:20:36.009948 systemd[1]: Started logrotate.timer. Feb 12 20:20:36.010563 systemd[1]: Started mdadm.timer. Feb 12 20:20:36.011064 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:20:36.011678 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:20:36.011701 systemd[1]: Reached target paths.target. Feb 12 20:20:36.012241 systemd[1]: Reached target timers.target. Feb 12 20:20:36.013078 systemd[1]: Listening on dbus.socket. Feb 12 20:20:36.014521 systemd[1]: Starting docker.socket... Feb 12 20:20:36.016881 systemd[1]: Listening on sshd.socket. Feb 12 20:20:36.017482 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:20:36.017810 systemd[1]: Listening on docker.socket. Feb 12 20:20:36.018419 systemd[1]: Reached target sockets.target. Feb 12 20:20:36.019109 systemd[1]: Reached target basic.target. Feb 12 20:20:36.019717 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:20:36.019740 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:20:36.020504 systemd[1]: Starting containerd.service... Feb 12 20:20:36.021805 systemd[1]: Starting dbus.service... Feb 12 20:20:36.023124 systemd[1]: Starting enable-oem-cloudinit.service... 
Feb 12 20:20:36.026083 systemd[1]: Starting extend-filesystems.service... Feb 12 20:20:36.026712 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:20:36.027703 systemd[1]: Starting motdgen.service... Feb 12 20:20:36.028943 jq[1096]: false Feb 12 20:20:36.030453 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:20:36.032001 systemd[1]: Starting prepare-critools.service... Feb 12 20:20:36.033528 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:20:36.035422 systemd[1]: Starting sshd-keygen.service... Feb 12 20:20:36.038475 systemd[1]: Starting systemd-logind.service... Feb 12 20:20:36.039810 extend-filesystems[1097]: Found sr0 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda1 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda2 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda3 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found usr Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda4 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda6 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda7 Feb 12 20:20:36.039810 extend-filesystems[1097]: Found vda9 Feb 12 20:20:36.039810 extend-filesystems[1097]: Checking size of /dev/vda9 Feb 12 20:20:36.039751 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:20:36.075260 extend-filesystems[1097]: Resized partition /dev/vda9 Feb 12 20:20:36.066820 dbus-daemon[1095]: [system] SELinux support is enabled Feb 12 20:20:36.039784 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 12 20:20:36.076168 extend-filesystems[1136]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:20:36.040686 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:20:36.077109 jq[1118]: true Feb 12 20:20:36.041381 systemd[1]: Starting update-engine.service... Feb 12 20:20:36.046749 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:20:36.077655 tar[1120]: ./ Feb 12 20:20:36.077655 tar[1120]: ./loopback Feb 12 20:20:36.048600 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:20:36.084365 tar[1121]: crictl Feb 12 20:20:36.048757 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:20:36.084660 jq[1124]: true Feb 12 20:20:36.049007 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:20:36.084888 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:20:36.049125 systemd[1]: Finished motdgen.service. Feb 12 20:20:36.053134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:20:36.053581 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:20:36.067073 systemd[1]: Started dbus.service. Feb 12 20:20:36.069537 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:20:36.069573 systemd[1]: Reached target system-config.target. Feb 12 20:20:36.070339 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:20:36.070357 systemd[1]: Reached target user-config.target. 
Feb 12 20:20:36.099228 systemd-logind[1111]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 20:20:36.099418 systemd-logind[1111]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 20:20:36.099749 systemd-logind[1111]: New seat seat0.
Feb 12 20:20:36.103989 systemd[1]: Started systemd-logind.service.
Feb 12 20:20:36.109088 env[1125]: time="2024-02-12T20:20:36.108973501Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 20:20:36.110068 update_engine[1115]: I0212 20:20:36.109930 1115 main.cc:92] Flatcar Update Engine starting
Feb 12 20:20:36.111871 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 12 20:20:36.116376 update_engine[1115]: I0212 20:20:36.116294 1115 update_check_scheduler.cc:74] Next update check in 6m1s
Feb 12 20:20:36.116550 systemd[1]: Started update-engine.service.
Feb 12 20:20:36.121150 systemd[1]: Started locksmithd.service.
Feb 12 20:20:36.132963 env[1125]: time="2024-02-12T20:20:36.132620445Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 20:20:36.133587 extend-filesystems[1136]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 20:20:36.133587 extend-filesystems[1136]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 20:20:36.133587 extend-filesystems[1136]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 12 20:20:36.137224 extend-filesystems[1097]: Resized filesystem in /dev/vda9
Feb 12 20:20:36.138750 bash[1143]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:20:36.135670 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 20:20:36.135822 systemd[1]: Finished extend-filesystems.service.
Feb 12 20:20:36.137605 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 20:20:36.146225 tar[1120]: ./bandwidth
Feb 12 20:20:36.149314 env[1125]: time="2024-02-12T20:20:36.149277068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:20:36.152571 env[1125]: time="2024-02-12T20:20:36.152519701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:20:36.152571 env[1125]: time="2024-02-12T20:20:36.152558463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:20:36.152862 env[1125]: time="2024-02-12T20:20:36.152754852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:20:36.152862 env[1125]: time="2024-02-12T20:20:36.152770631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 20:20:36.152862 env[1125]: time="2024-02-12T20:20:36.152781832Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 20:20:36.152862 env[1125]: time="2024-02-12T20:20:36.152791831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 20:20:36.152959 env[1125]: time="2024-02-12T20:20:36.152863766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:20:36.153095 env[1125]: time="2024-02-12T20:20:36.153055766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:20:36.153238 env[1125]: time="2024-02-12T20:20:36.153160623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:20:36.153238 env[1125]: time="2024-02-12T20:20:36.153174539Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 20:20:36.153238 env[1125]: time="2024-02-12T20:20:36.153214043Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 20:20:36.153238 env[1125]: time="2024-02-12T20:20:36.153228881Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.170762810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.170934872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.170947065Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.170983343Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.170996538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171013490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171072701Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171086246Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171098409Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171110442Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171121422Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171132834Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171249873Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 20:20:36.175877 env[1125]: time="2024-02-12T20:20:36.171315566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 20:20:36.173460 systemd[1]: Started containerd.service.
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171567960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171589600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171601112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171652368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171665543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171676604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171686121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171743659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171755131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171765100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171774287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171786119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171896386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171911574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176333 env[1125]: time="2024-02-12T20:20:36.171923016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176612 env[1125]: time="2024-02-12T20:20:36.171934407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 20:20:36.176612 env[1125]: time="2024-02-12T20:20:36.171946881Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 20:20:36.176612 env[1125]: time="2024-02-12T20:20:36.171957821Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 20:20:36.176612 env[1125]: time="2024-02-12T20:20:36.171978530Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 20:20:36.176612 env[1125]: time="2024-02-12T20:20:36.172009458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172174958Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172220464Z" level=info msg="Connect containerd service"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172244168Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172770776Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172875202Z" level=info msg="Start subscribing containerd event"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172912342Z" level=info msg="Start recovering state"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172975660Z" level=info msg="Start event monitor"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.172994215Z" level=info msg="Start snapshots syncer"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.173001088Z" level=info msg="Start cni network conf syncer for default"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.173007029Z" level=info msg="Start streaming server"
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.173239585Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.173305359Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 20:20:36.176707 env[1125]: time="2024-02-12T20:20:36.173357977Z" level=info msg="containerd successfully booted in 0.065494s"
Feb 12 20:20:36.191423 tar[1120]: ./ptp
Feb 12 20:20:36.199328 locksmithd[1154]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 20:20:36.230831 tar[1120]: ./vlan
Feb 12 20:20:36.268464 tar[1120]: ./host-device
Feb 12 20:20:36.304431 tar[1120]: ./tuning
Feb 12 20:20:36.338880 tar[1120]: ./vrf
Feb 12 20:20:36.371387 tar[1120]: ./sbr
Feb 12 20:20:36.406624 tar[1120]: ./tap
Feb 12 20:20:36.450034 tar[1120]: ./dhcp
Feb 12 20:20:36.544500 systemd[1]: Finished prepare-critools.service.
Feb 12 20:20:36.557716 tar[1120]: ./static
Feb 12 20:20:36.583200 tar[1120]: ./firewall
Feb 12 20:20:36.616950 tar[1120]: ./macvlan
Feb 12 20:20:36.648650 tar[1120]: ./dummy
Feb 12 20:20:36.677511 tar[1120]: ./bridge
Feb 12 20:20:36.711371 tar[1120]: ./ipvlan
Feb 12 20:20:36.743097 tar[1120]: ./portmap
Feb 12 20:20:36.772809 tar[1120]: ./host-local
Feb 12 20:20:36.778993 systemd-networkd[1023]: eth0: Gained IPv6LL
Feb 12 20:20:36.807707 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 20:20:37.550016 systemd[1]: Created slice system-sshd.slice.
Feb 12 20:20:38.260206 sshd_keygen[1117]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 20:20:38.279506 systemd[1]: Finished sshd-keygen.service.
Feb 12 20:20:38.281834 systemd[1]: Starting issuegen.service...
Feb 12 20:20:38.283161 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:39706.service.
Feb 12 20:20:38.287389 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 20:20:38.287581 systemd[1]: Finished issuegen.service.
Feb 12 20:20:38.289740 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 20:20:38.295302 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 20:20:38.297610 systemd[1]: Started getty@tty1.service.
Feb 12 20:20:38.299193 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 20:20:38.300139 systemd[1]: Reached target getty.target.
Feb 12 20:20:38.300822 systemd[1]: Reached target multi-user.target.
Feb 12 20:20:38.302871 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 20:20:38.309243 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 20:20:38.309374 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 20:20:38.310123 systemd[1]: Startup finished in 535ms (kernel) + 4.971s (initrd) + 6.010s (userspace) = 11.518s.
Feb 12 20:20:38.320437 sshd[1174]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8
Feb 12 20:20:38.321910 sshd[1174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:20:38.331023 systemd-logind[1111]: New session 1 of user core.
Feb 12 20:20:38.332252 systemd[1]: Created slice user-500.slice.
Feb 12 20:20:38.333523 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 20:20:38.341379 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 20:20:38.342976 systemd[1]: Starting user@500.service...
Feb 12 20:20:38.345148 (systemd)[1183]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:20:38.421108 systemd[1183]: Queued start job for default target default.target.
Feb 12 20:20:38.421563 systemd[1183]: Reached target paths.target.
Feb 12 20:20:38.421581 systemd[1183]: Reached target sockets.target.
Feb 12 20:20:38.421594 systemd[1183]: Reached target timers.target.
Feb 12 20:20:38.421604 systemd[1183]: Reached target basic.target.
Feb 12 20:20:38.421636 systemd[1183]: Reached target default.target.
Feb 12 20:20:38.421657 systemd[1183]: Startup finished in 71ms.
Feb 12 20:20:38.421704 systemd[1]: Started user@500.service.
Feb 12 20:20:38.422554 systemd[1]: Started session-1.scope.
Feb 12 20:20:38.472180 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:39712.service.
Feb 12 20:20:38.505219 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 39712 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8
Feb 12 20:20:38.506054 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:20:38.509243 systemd-logind[1111]: New session 2 of user core.
Feb 12 20:20:38.509922 systemd[1]: Started session-2.scope.
Feb 12 20:20:38.559643 sshd[1192]: pam_unix(sshd:session): session closed for user core
Feb 12 20:20:38.561649 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:39712.service: Deactivated successfully.
Feb 12 20:20:38.562076 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 20:20:38.562445 systemd-logind[1111]: Session 2 logged out. Waiting for processes to exit.
Feb 12 20:20:38.563318 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:39718.service.
Feb 12 20:20:38.563920 systemd-logind[1111]: Removed session 2.
Feb 12 20:20:38.593356 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 39718 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8
Feb 12 20:20:38.594212 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:20:38.596870 systemd-logind[1111]: New session 3 of user core.
Feb 12 20:20:38.597588 systemd[1]: Started session-3.scope.
Feb 12 20:20:38.644946 sshd[1198]: pam_unix(sshd:session): session closed for user core
Feb 12 20:20:38.647266 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:39718.service: Deactivated successfully.
Feb 12 20:20:38.647804 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 20:20:38.648272 systemd-logind[1111]: Session 3 logged out. Waiting for processes to exit.
Feb 12 20:20:38.649185 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:39726.service.
Feb 12 20:20:38.649903 systemd-logind[1111]: Removed session 3.
Feb 12 20:20:38.678199 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 39726 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8
Feb 12 20:20:38.678941 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:20:38.681706 systemd-logind[1111]: New session 4 of user core.
Feb 12 20:20:38.682567 systemd[1]: Started session-4.scope.
Feb 12 20:20:38.733010 sshd[1204]: pam_unix(sshd:session): session closed for user core
Feb 12 20:20:38.735152 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:39726.service: Deactivated successfully.
Feb 12 20:20:38.735603 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 20:20:38.736040 systemd-logind[1111]: Session 4 logged out. Waiting for processes to exit.
Feb 12 20:20:38.736977 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:39736.service.
Feb 12 20:20:38.737565 systemd-logind[1111]: Removed session 4.
Feb 12 20:20:38.767334 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 39736 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8
Feb 12 20:20:38.768330 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:20:38.771345 systemd-logind[1111]: New session 5 of user core.
Feb 12 20:20:38.772142 systemd[1]: Started session-5.scope.
Feb 12 20:20:38.825548 sudo[1213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 20:20:38.825733 sudo[1213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:20:39.325383 systemd[1]: Reloading.
Feb 12 20:20:39.380734 /usr/lib/systemd/system-generators/torcx-generator[1243]: time="2024-02-12T20:20:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:20:39.380760 /usr/lib/systemd/system-generators/torcx-generator[1243]: time="2024-02-12T20:20:39Z" level=info msg="torcx already run"
Feb 12 20:20:39.433306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:20:39.433321 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:20:39.451729 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:20:39.514485 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 20:20:39.532246 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 20:20:39.532729 systemd[1]: Reached target network-online.target.
Feb 12 20:20:39.534133 systemd[1]: Started kubelet.service.
Feb 12 20:20:39.543061 systemd[1]: Starting coreos-metadata.service...
Feb 12 20:20:39.549239 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 12 20:20:39.549379 systemd[1]: Finished coreos-metadata.service.
Feb 12 20:20:39.579873 kubelet[1284]: E0212 20:20:39.579705 1284 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 20:20:39.581509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 20:20:39.581627 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 20:20:39.722041 systemd[1]: Stopped kubelet.service.
Feb 12 20:20:39.732913 systemd[1]: Reloading.
Feb 12 20:20:39.786685 /usr/lib/systemd/system-generators/torcx-generator[1352]: time="2024-02-12T20:20:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:20:39.786710 /usr/lib/systemd/system-generators/torcx-generator[1352]: time="2024-02-12T20:20:39Z" level=info msg="torcx already run"
Feb 12 20:20:39.846593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:20:39.846608 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:20:39.865339 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:20:39.935232 systemd[1]: Started kubelet.service.
Feb 12 20:20:39.973337 kubelet[1393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:20:39.973337 kubelet[1393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:20:39.973337 kubelet[1393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:20:39.973690 kubelet[1393]: I0212 20:20:39.973401 1393 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:20:40.145793 kubelet[1393]: I0212 20:20:40.145710 1393 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 20:20:40.145793 kubelet[1393]: I0212 20:20:40.145741 1393 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:20:40.146029 kubelet[1393]: I0212 20:20:40.145999 1393 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 20:20:40.147485 kubelet[1393]: I0212 20:20:40.147455 1393 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:20:40.150680 kubelet[1393]: I0212 20:20:40.150661 1393 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 20:20:40.150824 kubelet[1393]: I0212 20:20:40.150807 1393 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:20:40.150887 kubelet[1393]: I0212 20:20:40.150875 1393 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 20:20:40.150972 kubelet[1393]: I0212 20:20:40.150894 1393 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 20:20:40.150972 kubelet[1393]: I0212 20:20:40.150904 1393 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 20:20:40.150972 kubelet[1393]: I0212 20:20:40.150968 1393 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:20:40.157741 kubelet[1393]: I0212 20:20:40.157718 1393 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 20:20:40.157808 kubelet[1393]: I0212 20:20:40.157747 1393 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:20:40.157808 kubelet[1393]: I0212 20:20:40.157770 1393 kubelet.go:309] "Adding apiserver pod source"
Feb 12 20:20:40.157808 kubelet[1393]: I0212 20:20:40.157786 1393 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:20:40.157916 kubelet[1393]: E0212 20:20:40.157880 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:40.157966 kubelet[1393]: E0212 20:20:40.157953 1393 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:40.158643 kubelet[1393]: I0212 20:20:40.158608 1393 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:20:40.158955 kubelet[1393]: W0212 20:20:40.158933 1393 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 20:20:40.159399 kubelet[1393]: I0212 20:20:40.159370 1393 server.go:1168] "Started kubelet"
Feb 12 20:20:40.159516 kubelet[1393]: I0212 20:20:40.159492 1393 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:20:40.159620 kubelet[1393]: I0212 20:20:40.159597 1393 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 20:20:40.160135 kubelet[1393]: E0212 20:20:40.160111 1393 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:20:40.160187 kubelet[1393]: E0212 20:20:40.160147 1393 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:20:40.160829 kubelet[1393]: I0212 20:20:40.160805 1393 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 20:20:40.162320 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 20:20:40.162482 kubelet[1393]: I0212 20:20:40.162468 1393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:20:40.163372 kubelet[1393]: I0212 20:20:40.163340 1393 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 20:20:40.163435 kubelet[1393]: E0212 20:20:40.162685 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b3371366096801", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 159348737, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 159348737, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.163621 kubelet[1393]: I0212 20:20:40.163595 1393 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 20:20:40.163621 kubelet[1393]: W0212 20:20:40.162973 1393 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:20:40.163701 kubelet[1393]: E0212 20:20:40.163668 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:40.163701 kubelet[1393]: W0212 20:20:40.163001 1393 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:20:40.163701 kubelet[1393]: E0212 20:20:40.163671 1393 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:20:40.163701 kubelet[1393]: E0212 20:20:40.163686 1393 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:20:40.165110 kubelet[1393]: E0212 20:20:40.165047 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713661562a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 160133799, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 160133799, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.165205 kubelet[1393]: E0212 20:20:40.165170 1393 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.38\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 12 20:20:40.165252 kubelet[1393]: W0212 20:20:40.165229 1393 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:20:40.165252 kubelet[1393]: E0212 20:20:40.165250 1393 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:20:40.180077 kubelet[1393]: I0212 20:20:40.180046 1393 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:20:40.180077 kubelet[1393]: I0212 20:20:40.180081 1393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:20:40.180178 kubelet[1393]: I0212 20:20:40.180094 1393 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:20:40.180356 kubelet[1393]: E0212 20:20:40.180277 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673cfff4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179507188, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179507188, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.182766 kubelet[1393]: E0212 20:20:40.182695 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d0fa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179511206, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179511206, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.183468 kubelet[1393]: E0212 20:20:40.183410 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d170b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179513099, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179513099, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.187333 kubelet[1393]: I0212 20:20:40.187311 1393 policy_none.go:49] "None policy: Start"
Feb 12 20:20:40.187736 kubelet[1393]: I0212 20:20:40.187722 1393 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:20:40.187736 kubelet[1393]: I0212 20:20:40.187739 1393 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:20:40.192248 systemd[1]: Created slice kubepods.slice.
Feb 12 20:20:40.195628 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 20:20:40.197947 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 20:20:40.205525 kubelet[1393]: I0212 20:20:40.205481 1393 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:20:40.205977 kubelet[1393]: I0212 20:20:40.205953 1393 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:20:40.206525 kubelet[1393]: E0212 20:20:40.206509 1393 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.38\" not found"
Feb 12 20:20:40.207857 kubelet[1393]: E0212 20:20:40.207781 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b3371368db4be8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 206658536, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 206658536, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.230239 kubelet[1393]: I0212 20:20:40.230215 1393 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 20:20:40.231185 kubelet[1393]: I0212 20:20:40.231150 1393 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 20:20:40.231185 kubelet[1393]: I0212 20:20:40.231183 1393 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 20:20:40.231333 kubelet[1393]: I0212 20:20:40.231205 1393 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 20:20:40.231333 kubelet[1393]: E0212 20:20:40.231283 1393 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 20:20:40.232531 kubelet[1393]: W0212 20:20:40.232503 1393 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:20:40.232586 kubelet[1393]: E0212 20:20:40.232537 1393 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:20:40.264418 kubelet[1393]: I0212 20:20:40.264384 1393 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.38"
Feb 12 20:20:40.265416 kubelet[1393]: E0212 20:20:40.265342 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673cfff4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179507188, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 264336794, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673cfff4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.265416 kubelet[1393]: E0212 20:20:40.265416 1393 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.38"
Feb 12 20:20:40.265977 kubelet[1393]: E0212 20:20:40.265939 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d0fa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179511206, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 264349628, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673d0fa6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.266516 kubelet[1393]: E0212 20:20:40.266476 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d170b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179513099, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 264353295, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673d170b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.367112 kubelet[1393]: E0212 20:20:40.367076 1393 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.38\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 12 20:20:40.467012 kubelet[1393]: I0212 20:20:40.466895 1393 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.38"
Feb 12 20:20:40.468191 kubelet[1393]: E0212 20:20:40.468175 1393 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.38"
Feb 12 20:20:40.468328 kubelet[1393]: E0212 20:20:40.468170 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673cfff4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179507188, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 466836907, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673cfff4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.469457 kubelet[1393]: E0212 20:20:40.469352 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d0fa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179511206, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 466861322, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673d0fa6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.470273 kubelet[1393]: E0212 20:20:40.470211 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d170b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179513099, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 466867454, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673d170b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.768141 kubelet[1393]: E0212 20:20:40.768032 1393 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.38\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 12 20:20:40.869198 kubelet[1393]: I0212 20:20:40.869168 1393 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.38"
Feb 12 20:20:40.870319 kubelet[1393]: E0212 20:20:40.870299 1393 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.38"
Feb 12 20:20:40.870388 kubelet[1393]: E0212 20:20:40.870294 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673cfff4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.38 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179507188, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 869102732, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673cfff4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.871002 kubelet[1393]: E0212 20:20:40.870952 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d0fa6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.38 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179511206, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 869111579, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673d0fa6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:40.871658 kubelet[1393]: E0212 20:20:40.871609 1393 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38.17b33713673d170b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.38", UID:"10.0.0.38", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.38 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.38"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 179513099, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 40, 869117430, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.38.17b33713673d170b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:20:41.029140 kubelet[1393]: W0212 20:20:41.029031 1393 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:20:41.029140 kubelet[1393]: E0212 20:20:41.029068 1393 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:20:41.074368 kubelet[1393]: W0212 20:20:41.074324 1393 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:20:41.074368 kubelet[1393]: E0212 20:20:41.074356 1393 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:20:41.147585 kubelet[1393]: I0212 20:20:41.147518 1393 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 12 20:20:41.158773 kubelet[1393]: E0212 20:20:41.158746 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:41.513314 kubelet[1393]: E0212 20:20:41.513210 1393 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.38" not found
Feb 12 20:20:41.570950 kubelet[1393]: E0212 20:20:41.570918 1393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.38\" not found" node="10.0.0.38"
Feb 12 20:20:41.671958 kubelet[1393]: I0212 20:20:41.671916 1393 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.38"
Feb 12 20:20:41.675696 kubelet[1393]: I0212 20:20:41.675674 1393 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.38"
Feb 12 20:20:41.686458 kubelet[1393]: E0212 20:20:41.686416 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:41.786635 kubelet[1393]: E0212 20:20:41.786507 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:41.887095 kubelet[1393]: E0212 20:20:41.887045 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:41.949055 sudo[1213]: pam_unix(sudo:session): session closed for user root
Feb 12 20:20:41.950384 sshd[1210]: pam_unix(sshd:session): session closed for user core
Feb 12 20:20:41.952506 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:39736.service: Deactivated successfully.
Feb 12 20:20:41.953140 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:20:41.953578 systemd-logind[1111]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:20:41.954177 systemd-logind[1111]: Removed session 5.
Feb 12 20:20:41.988072 kubelet[1393]: E0212 20:20:41.987943 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.088647 kubelet[1393]: E0212 20:20:42.088535 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.159038 kubelet[1393]: E0212 20:20:42.158992 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:42.189419 kubelet[1393]: E0212 20:20:42.189382 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.290487 kubelet[1393]: E0212 20:20:42.290442 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.391175 kubelet[1393]: E0212 20:20:42.391050 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.491758 kubelet[1393]: E0212 20:20:42.491707 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.592272 kubelet[1393]: E0212 20:20:42.592226 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.693015 kubelet[1393]: E0212 20:20:42.692909 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.793626 kubelet[1393]: E0212 20:20:42.793566 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.894147 kubelet[1393]: E0212 20:20:42.894092 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:42.994818 kubelet[1393]: E0212 20:20:42.994696 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:43.095235 kubelet[1393]: E0212 20:20:43.095196 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:43.159599 kubelet[1393]: E0212 20:20:43.159561 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:43.196253 kubelet[1393]: E0212 20:20:43.196219 1393 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.38\" not found"
Feb 12 20:20:43.297319 kubelet[1393]: I0212 20:20:43.297218 1393 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 12 20:20:43.297515 env[1125]: time="2024-02-12T20:20:43.297475365Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 20:20:43.297775 kubelet[1393]: I0212 20:20:43.297641 1393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 12 20:20:44.160338 kubelet[1393]: I0212 20:20:44.160262 1393 apiserver.go:52] "Watching apiserver"
Feb 12 20:20:44.160338 kubelet[1393]: E0212 20:20:44.160298 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:44.162578 kubelet[1393]: I0212 20:20:44.162555 1393 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:20:44.162650 kubelet[1393]: I0212 20:20:44.162640 1393 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:20:44.164298 kubelet[1393]: I0212 20:20:44.164286 1393 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 20:20:44.167689 systemd[1]: Created slice kubepods-besteffort-pod76cb70db_27d7_4f93_885e_0566e46983cb.slice.
Feb 12 20:20:44.175650 systemd[1]: Created slice kubepods-burstable-podee70eb59_f12d_4c79_a4e9_57b93d767abf.slice.
Feb 12 20:20:44.185375 kubelet[1393]: I0212 20:20:44.185360 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-etc-cni-netd\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185440 kubelet[1393]: I0212 20:20:44.185388 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-xtables-lock\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185440 kubelet[1393]: I0212 20:20:44.185406 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee70eb59-f12d-4c79-a4e9-57b93d767abf-clustermesh-secrets\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185483 kubelet[1393]: I0212 20:20:44.185474 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76cb70db-27d7-4f93-885e-0566e46983cb-lib-modules\") pod \"kube-proxy-t2phj\" (UID: \"76cb70db-27d7-4f93-885e-0566e46983cb\") " pod="kube-system/kube-proxy-t2phj"
Feb 12 20:20:44.185516 kubelet[1393]: I0212 20:20:44.185492 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-run\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185557 kubelet[1393]: I0212 20:20:44.185532 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cni-path\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185581 kubelet[1393]: I0212 20:20:44.185562 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-config-path\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185602 kubelet[1393]: I0212 20:20:44.185583 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76cb70db-27d7-4f93-885e-0566e46983cb-xtables-lock\") pod \"kube-proxy-t2phj\" (UID: \"76cb70db-27d7-4f93-885e-0566e46983cb\") " pod="kube-system/kube-proxy-t2phj"
Feb 12 20:20:44.185680 kubelet[1393]: I0212 20:20:44.185654 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpnlq\" (UniqueName: \"kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-kube-api-access-dpnlq\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185740 kubelet[1393]: I0212 20:20:44.185705 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76cb70db-27d7-4f93-885e-0566e46983cb-kube-proxy\") pod \"kube-proxy-t2phj\" (UID: \"76cb70db-27d7-4f93-885e-0566e46983cb\") " pod="kube-system/kube-proxy-t2phj"
Feb 12 20:20:44.185740 kubelet[1393]: I0212 20:20:44.185725 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-bpf-maps\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185783 kubelet[1393]: I0212 20:20:44.185762 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-lib-modules\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185805 kubelet[1393]: I0212 20:20:44.185788 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-net\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185825 kubelet[1393]: I0212 20:20:44.185812 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-kernel\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185885 kubelet[1393]: I0212 20:20:44.185871 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hostproc\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185911 kubelet[1393]: I0212 20:20:44.185896 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-cgroup\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185934 kubelet[1393]: I0212 20:20:44.185915 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hubble-tls\") pod \"cilium-cjhmw\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " pod="kube-system/cilium-cjhmw"
Feb 12 20:20:44.185957 kubelet[1393]: I0212 20:20:44.185937 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szfwr\" (UniqueName: \"kubernetes.io/projected/76cb70db-27d7-4f93-885e-0566e46983cb-kube-api-access-szfwr\") pod \"kube-proxy-t2phj\" (UID: \"76cb70db-27d7-4f93-885e-0566e46983cb\") " pod="kube-system/kube-proxy-t2phj"
Feb 12 20:20:44.185980 kubelet[1393]: I0212 20:20:44.185973 1393 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:20:44.475804 kubelet[1393]: E0212 20:20:44.475165 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:44.476048 env[1125]: time="2024-02-12T20:20:44.476010771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2phj,Uid:76cb70db-27d7-4f93-885e-0566e46983cb,Namespace:kube-system,Attempt:0,}"
Feb 12 20:20:44.486104 kubelet[1393]: E0212 20:20:44.486084 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:44.486548 env[1125]: time="2024-02-12T20:20:44.486510676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cjhmw,Uid:ee70eb59-f12d-4c79-a4e9-57b93d767abf,Namespace:kube-system,Attempt:0,}"
Feb 12 20:20:45.161358 kubelet[1393]: E0212 20:20:45.161280 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:45.402774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799418905.mount: Deactivated successfully.
Feb 12 20:20:45.409957 env[1125]: time="2024-02-12T20:20:45.409909538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.411573 env[1125]: time="2024-02-12T20:20:45.411498859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.412385 env[1125]: time="2024-02-12T20:20:45.412344956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.413954 env[1125]: time="2024-02-12T20:20:45.413906605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.416323 env[1125]: time="2024-02-12T20:20:45.416285467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.417903 env[1125]: time="2024-02-12T20:20:45.417871603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.420234 env[1125]: time="2024-02-12T20:20:45.420209788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.421496 env[1125]: time="2024-02-12T20:20:45.421465834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:45.438709 env[1125]: time="2024-02-12T20:20:45.438657090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:20:45.438912 env[1125]: time="2024-02-12T20:20:45.438879958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:20:45.439000 env[1125]: time="2024-02-12T20:20:45.438977611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:20:45.439442 env[1125]: time="2024-02-12T20:20:45.439407167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d pid=1449 runtime=io.containerd.runc.v2
Feb 12 20:20:45.444411 env[1125]: time="2024-02-12T20:20:45.444352213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:20:45.444411 env[1125]: time="2024-02-12T20:20:45.444386597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:20:45.444563 env[1125]: time="2024-02-12T20:20:45.444395534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:20:45.444758 env[1125]: time="2024-02-12T20:20:45.444731284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3356b82a0c0e524de85455ead25576bd0e88266029430de9a15ac8a852575ed pid=1467 runtime=io.containerd.runc.v2
Feb 12 20:20:45.456610 systemd[1]: Started cri-containerd-b3356b82a0c0e524de85455ead25576bd0e88266029430de9a15ac8a852575ed.scope.
Feb 12 20:20:45.463094 systemd[1]: Started cri-containerd-8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d.scope.
Feb 12 20:20:45.487956 env[1125]: time="2024-02-12T20:20:45.487676527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2phj,Uid:76cb70db-27d7-4f93-885e-0566e46983cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3356b82a0c0e524de85455ead25576bd0e88266029430de9a15ac8a852575ed\""
Feb 12 20:20:45.488736 kubelet[1393]: E0212 20:20:45.488707 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:45.492639 env[1125]: time="2024-02-12T20:20:45.492596104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 12 20:20:45.499209 env[1125]: time="2024-02-12T20:20:45.499171758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cjhmw,Uid:ee70eb59-f12d-4c79-a4e9-57b93d767abf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\""
Feb 12 20:20:45.499882 kubelet[1393]: E0212 20:20:45.499861 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:46.161945 kubelet[1393]: E0212 20:20:46.161823 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:46.538060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359639517.mount: Deactivated successfully.
Feb 12 20:20:47.075647 env[1125]: time="2024-02-12T20:20:47.075610517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:47.077242 env[1125]: time="2024-02-12T20:20:47.077209206Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:47.078466 env[1125]: time="2024-02-12T20:20:47.078442419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:47.079608 env[1125]: time="2024-02-12T20:20:47.079584220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:47.080045 env[1125]: time="2024-02-12T20:20:47.080019006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\""
Feb 12 20:20:47.080676 env[1125]: time="2024-02-12T20:20:47.080541907Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 20:20:47.081632 env[1125]: time="2024-02-12T20:20:47.081604330Z" level=info msg="CreateContainer within sandbox \"b3356b82a0c0e524de85455ead25576bd0e88266029430de9a15ac8a852575ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 20:20:47.094518 env[1125]: time="2024-02-12T20:20:47.094478598Z" level=info msg="CreateContainer within sandbox \"b3356b82a0c0e524de85455ead25576bd0e88266029430de9a15ac8a852575ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d463f9331470049230a55b30592ebf132b2c0106bcdec551a826922c5ac1073e\""
Feb 12 20:20:47.095050 env[1125]: time="2024-02-12T20:20:47.095031736Z" level=info msg="StartContainer for \"d463f9331470049230a55b30592ebf132b2c0106bcdec551a826922c5ac1073e\""
Feb 12 20:20:47.115022 systemd[1]: Started cri-containerd-d463f9331470049230a55b30592ebf132b2c0106bcdec551a826922c5ac1073e.scope.
Feb 12 20:20:47.150282 env[1125]: time="2024-02-12T20:20:47.150226004Z" level=info msg="StartContainer for \"d463f9331470049230a55b30592ebf132b2c0106bcdec551a826922c5ac1073e\" returns successfully"
Feb 12 20:20:47.162729 kubelet[1393]: E0212 20:20:47.162689 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:47.245677 kubelet[1393]: E0212 20:20:47.245651 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:47.254310 kubelet[1393]: I0212 20:20:47.254280 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-t2phj" podStartSLOduration=4.66552563 podCreationTimestamp="2024-02-12 20:20:41 +0000 UTC" firstStartedPulling="2024-02-12 20:20:45.491627197 +0000 UTC m=+5.552480237" lastFinishedPulling="2024-02-12 20:20:47.080334468 +0000 UTC m=+7.141187498" observedRunningTime="2024-02-12 20:20:47.253504024 +0000 UTC m=+7.314357054" watchObservedRunningTime="2024-02-12 20:20:47.254232891 +0000 UTC m=+7.315085922"
Feb 12 20:20:48.163112 kubelet[1393]: E0212 20:20:48.163078 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:48.253108 kubelet[1393]: E0212 20:20:48.253077 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:49.163908 kubelet[1393]: E0212 20:20:49.163875 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:50.165017 kubelet[1393]: E0212 20:20:50.164973 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:51.165986 kubelet[1393]: E0212 20:20:51.165945 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:51.983343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019056622.mount: Deactivated successfully.
Feb 12 20:20:52.166163 kubelet[1393]: E0212 20:20:52.166090 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:53.167118 kubelet[1393]: E0212 20:20:53.167053 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:54.167237 kubelet[1393]: E0212 20:20:54.167184 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:55.168397 kubelet[1393]: E0212 20:20:55.168341 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:56.168739 kubelet[1393]: E0212 20:20:56.168709 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:56.198321 env[1125]: time="2024-02-12T20:20:56.198259689Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:56.200067 env[1125]: time="2024-02-12T20:20:56.200028287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:56.201584 env[1125]: time="2024-02-12T20:20:56.201544611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:20:56.202287 env[1125]: time="2024-02-12T20:20:56.202261105Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 20:20:56.203677 env[1125]: time="2024-02-12T20:20:56.203638488Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:20:56.215810 env[1125]: time="2024-02-12T20:20:56.215771466Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\""
Feb 12 20:20:56.216203 env[1125]: time="2024-02-12T20:20:56.216173741Z" level=info msg="StartContainer for \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\""
Feb 12 20:20:56.230835 systemd[1]: Started cri-containerd-dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557.scope.
Feb 12 20:20:56.253728 env[1125]: time="2024-02-12T20:20:56.253666106Z" level=info msg="StartContainer for \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\" returns successfully"
Feb 12 20:20:56.257554 systemd[1]: cri-containerd-dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557.scope: Deactivated successfully.
Feb 12 20:20:56.265436 kubelet[1393]: E0212 20:20:56.265406 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:56.841772 env[1125]: time="2024-02-12T20:20:56.841721029Z" level=info msg="shim disconnected" id=dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557
Feb 12 20:20:56.841772 env[1125]: time="2024-02-12T20:20:56.841769890Z" level=warning msg="cleaning up after shim disconnected" id=dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557 namespace=k8s.io
Feb 12 20:20:56.841772 env[1125]: time="2024-02-12T20:20:56.841778467Z" level=info msg="cleaning up dead shim"
Feb 12 20:20:56.847605 env[1125]: time="2024-02-12T20:20:56.847577524Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1733 runtime=io.containerd.runc.v2\n"
Feb 12 20:20:57.169092 kubelet[1393]: E0212 20:20:57.168958 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:57.210264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557-rootfs.mount: Deactivated successfully.
Feb 12 20:20:57.267782 kubelet[1393]: E0212 20:20:57.267749 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:57.269253 env[1125]: time="2024-02-12T20:20:57.269206640Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:20:57.333600 env[1125]: time="2024-02-12T20:20:57.333551051Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\""
Feb 12 20:20:57.333945 env[1125]: time="2024-02-12T20:20:57.333917899Z" level=info msg="StartContainer for \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\""
Feb 12 20:20:57.348742 systemd[1]: Started cri-containerd-80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13.scope.
Feb 12 20:20:57.376517 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:20:57.376707 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:20:57.377074 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 20:20:57.378374 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:20:57.379377 systemd[1]: cri-containerd-80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13.scope: Deactivated successfully.
Feb 12 20:20:57.384376 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:20:57.457518 env[1125]: time="2024-02-12T20:20:57.457477622Z" level=info msg="StartContainer for \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\" returns successfully"
Feb 12 20:20:57.476637 env[1125]: time="2024-02-12T20:20:57.476593348Z" level=info msg="shim disconnected" id=80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13
Feb 12 20:20:57.476834 env[1125]: time="2024-02-12T20:20:57.476793984Z" level=warning msg="cleaning up after shim disconnected" id=80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13 namespace=k8s.io
Feb 12 20:20:57.476834 env[1125]: time="2024-02-12T20:20:57.476813861Z" level=info msg="cleaning up dead shim"
Feb 12 20:20:57.482583 env[1125]: time="2024-02-12T20:20:57.482544931Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1796 runtime=io.containerd.runc.v2\n"
Feb 12 20:20:58.170013 kubelet[1393]: E0212 20:20:58.169983 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:58.210350 systemd[1]: run-containerd-runc-k8s.io-80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13-runc.ax685q.mount: Deactivated successfully.
Feb 12 20:20:58.210455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13-rootfs.mount: Deactivated successfully.
Feb 12 20:20:58.270407 kubelet[1393]: E0212 20:20:58.270390 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:58.272016 env[1125]: time="2024-02-12T20:20:58.271972161Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:20:58.287314 env[1125]: time="2024-02-12T20:20:58.287277620Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\""
Feb 12 20:20:58.287768 env[1125]: time="2024-02-12T20:20:58.287731822Z" level=info msg="StartContainer for \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\""
Feb 12 20:20:58.300895 systemd[1]: Started cri-containerd-96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165.scope.
Feb 12 20:20:58.322182 env[1125]: time="2024-02-12T20:20:58.322136946Z" level=info msg="StartContainer for \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\" returns successfully"
Feb 12 20:20:58.323047 systemd[1]: cri-containerd-96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165.scope: Deactivated successfully.
Feb 12 20:20:58.341690 env[1125]: time="2024-02-12T20:20:58.341649486Z" level=info msg="shim disconnected" id=96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165
Feb 12 20:20:58.341690 env[1125]: time="2024-02-12T20:20:58.341687387Z" level=warning msg="cleaning up after shim disconnected" id=96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165 namespace=k8s.io
Feb 12 20:20:58.341690 env[1125]: time="2024-02-12T20:20:58.341696203Z" level=info msg="cleaning up dead shim"
Feb 12 20:20:58.347176 env[1125]: time="2024-02-12T20:20:58.347152267Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1851 runtime=io.containerd.runc.v2\n"
Feb 12 20:20:59.170895 kubelet[1393]: E0212 20:20:59.170825 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:20:59.210439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165-rootfs.mount: Deactivated successfully.
Feb 12 20:20:59.273708 kubelet[1393]: E0212 20:20:59.273680 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:20:59.275397 env[1125]: time="2024-02-12T20:20:59.275358888Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:20:59.289560 env[1125]: time="2024-02-12T20:20:59.289509420Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\""
Feb 12 20:20:59.290072 env[1125]: time="2024-02-12T20:20:59.290044885Z" level=info msg="StartContainer for \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\""
Feb 12 20:20:59.304990 systemd[1]: Started cri-containerd-1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0.scope.
Feb 12 20:20:59.324387 systemd[1]: cri-containerd-1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0.scope: Deactivated successfully.
Feb 12 20:20:59.325674 env[1125]: time="2024-02-12T20:20:59.325462588Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee70eb59_f12d_4c79_a4e9_57b93d767abf.slice/cri-containerd-1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0.scope/memory.events\": no such file or directory"
Feb 12 20:20:59.328187 env[1125]: time="2024-02-12T20:20:59.328139790Z" level=info msg="StartContainer for \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\" returns successfully"
Feb 12 20:20:59.345959 env[1125]: time="2024-02-12T20:20:59.345896677Z" level=info msg="shim disconnected" id=1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0
Feb 12 20:20:59.345959 env[1125]: time="2024-02-12T20:20:59.345950457Z" level=warning msg="cleaning up after shim disconnected" id=1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0 namespace=k8s.io
Feb 12 20:20:59.345959 env[1125]: time="2024-02-12T20:20:59.345959595Z" level=info msg="cleaning up dead shim"
Feb 12 20:20:59.352370 env[1125]: time="2024-02-12T20:20:59.352336806Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1907 runtime=io.containerd.runc.v2\n"
Feb 12 20:21:00.158587 kubelet[1393]: E0212 20:21:00.158529 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:00.171590 kubelet[1393]: E0212 20:21:00.171570 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:00.210481 systemd[1]: run-containerd-runc-k8s.io-1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0-runc.9Hvulx.mount: Deactivated successfully.
Feb 12 20:21:00.210584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0-rootfs.mount: Deactivated successfully.
Feb 12 20:21:00.277040 kubelet[1393]: E0212 20:21:00.277017 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:00.279158 env[1125]: time="2024-02-12T20:21:00.279100119Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:21:00.292065 env[1125]: time="2024-02-12T20:21:00.292025634Z" level=info msg="CreateContainer within sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\""
Feb 12 20:21:00.292478 env[1125]: time="2024-02-12T20:21:00.292446033Z" level=info msg="StartContainer for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\""
Feb 12 20:21:00.308536 systemd[1]: Started cri-containerd-cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2.scope.
Feb 12 20:21:00.329084 env[1125]: time="2024-02-12T20:21:00.329026758Z" level=info msg="StartContainer for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" returns successfully"
Feb 12 20:21:00.464266 kubelet[1393]: I0212 20:21:00.464237 1393 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 20:21:00.617882 kernel: Initializing XFRM netlink socket
Feb 12 20:21:01.174979 kubelet[1393]: E0212 20:21:01.172025 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:01.301977 kubelet[1393]: E0212 20:21:01.300088 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:01.465916 kubelet[1393]: I0212 20:21:01.465470 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cjhmw" podStartSLOduration=9.763192797 podCreationTimestamp="2024-02-12 20:20:41 +0000 UTC" firstStartedPulling="2024-02-12 20:20:45.500236345 +0000 UTC m=+5.561089375" lastFinishedPulling="2024-02-12 20:20:56.202464266 +0000 UTC m=+16.263317296" observedRunningTime="2024-02-12 20:21:01.462198704 +0000 UTC m=+21.523051744" watchObservedRunningTime="2024-02-12 20:21:01.465420718 +0000 UTC m=+21.526273748"
Feb 12 20:21:02.212383 kubelet[1393]: E0212 20:21:02.212343 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:02.309928 kubelet[1393]: E0212 20:21:02.308571 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:02.484965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 20:21:02.472946 systemd-networkd[1023]: cilium_host: Link UP
Feb 12 20:21:02.473098 systemd-networkd[1023]: cilium_net: Link UP
Feb 12 20:21:02.473103 systemd-networkd[1023]: cilium_net: Gained carrier
Feb 12 20:21:02.473278 systemd-networkd[1023]: cilium_host: Gained carrier
Feb 12 20:21:02.473482 systemd-networkd[1023]: cilium_host: Gained IPv6LL
Feb 12 20:21:02.659434 systemd-networkd[1023]: cilium_net: Gained IPv6LL
Feb 12 20:21:02.817646 systemd-networkd[1023]: cilium_vxlan: Link UP
Feb 12 20:21:02.817652 systemd-networkd[1023]: cilium_vxlan: Gained carrier
Feb 12 20:21:03.198303 kernel: NET: Registered PF_ALG protocol family
Feb 12 20:21:03.216751 kubelet[1393]: E0212 20:21:03.215993 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:03.319155 kubelet[1393]: E0212 20:21:03.316230 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:04.044008 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL
Feb 12 20:21:04.216577 kubelet[1393]: E0212 20:21:04.216536 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:04.217578 systemd-networkd[1023]: lxc_health: Link UP
Feb 12 20:21:04.226623 systemd-networkd[1023]: lxc_health: Gained carrier
Feb 12 20:21:04.227016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:21:04.488438 kubelet[1393]: E0212 20:21:04.488389 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:04.599432 kubelet[1393]: I0212 20:21:04.599387 1393 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:21:04.604812 systemd[1]: Created slice kubepods-besteffort-pod3ae5ae3b_1ffe_46ed_a78a_a4dcef89aa64.slice.
Feb 12 20:21:04.763975 kubelet[1393]: I0212 20:21:04.763855 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvkks\" (UniqueName: \"kubernetes.io/projected/3ae5ae3b-1ffe-46ed-a78a-a4dcef89aa64-kube-api-access-xvkks\") pod \"nginx-deployment-845c78c8b9-785xz\" (UID: \"3ae5ae3b-1ffe-46ed-a78a-a4dcef89aa64\") " pod="default/nginx-deployment-845c78c8b9-785xz"
Feb 12 20:21:04.908414 env[1125]: time="2024-02-12T20:21:04.908362362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-785xz,Uid:3ae5ae3b-1ffe-46ed-a78a-a4dcef89aa64,Namespace:default,Attempt:0,}"
Feb 12 20:21:04.948832 systemd-networkd[1023]: lxc8d2c57c38c39: Link UP
Feb 12 20:21:04.955867 kernel: eth0: renamed from tmp973ac
Feb 12 20:21:04.961965 systemd-networkd[1023]: lxc8d2c57c38c39: Gained carrier
Feb 12 20:21:04.962931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8d2c57c38c39: link becomes ready
Feb 12 20:21:05.217103 kubelet[1393]: E0212 20:21:05.217069 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:05.962999 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Feb 12 20:21:06.217584 kubelet[1393]: E0212 20:21:06.217474 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:06.794981 systemd-networkd[1023]: lxc8d2c57c38c39: Gained IPv6LL
Feb 12 20:21:07.217745 kubelet[1393]: E0212 20:21:07.217708 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:07.862900 kubelet[1393]: I0212 20:21:07.862858 1393 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 20:21:07.863546 kubelet[1393]: E0212 20:21:07.863532 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:08.168312 env[1125]: time="2024-02-12T20:21:08.168195187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:21:08.168588 env[1125]: time="2024-02-12T20:21:08.168240161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:21:08.168588 env[1125]: time="2024-02-12T20:21:08.168253186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:21:08.168736 env[1125]: time="2024-02-12T20:21:08.168584848Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/973aca29dd28e67d92cd00e6d8cba24fecdbfa94d3c15de6193502d741c752d2 pid=2475 runtime=io.containerd.runc.v2
Feb 12 20:21:08.182742 systemd[1]: Started cri-containerd-973aca29dd28e67d92cd00e6d8cba24fecdbfa94d3c15de6193502d741c752d2.scope.
Feb 12 20:21:08.193640 systemd-resolved[1075]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 20:21:08.213152 env[1125]: time="2024-02-12T20:21:08.213109784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-785xz,Uid:3ae5ae3b-1ffe-46ed-a78a-a4dcef89aa64,Namespace:default,Attempt:0,} returns sandbox id \"973aca29dd28e67d92cd00e6d8cba24fecdbfa94d3c15de6193502d741c752d2\""
Feb 12 20:21:08.214465 env[1125]: time="2024-02-12T20:21:08.214438095Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:21:08.218230 kubelet[1393]: E0212 20:21:08.218207 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:08.320358 kubelet[1393]: E0212 20:21:08.320332 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:09.218522 kubelet[1393]: E0212 20:21:09.218477 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:10.218695 kubelet[1393]: E0212 20:21:10.218646 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:11.066185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1381156152.mount: Deactivated successfully.
Feb 12 20:21:11.219231 kubelet[1393]: E0212 20:21:11.219182 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:11.892883 env[1125]: time="2024-02-12T20:21:11.892809584Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:11.894653 env[1125]: time="2024-02-12T20:21:11.894613355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:11.896540 env[1125]: time="2024-02-12T20:21:11.896490255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:11.899270 env[1125]: time="2024-02-12T20:21:11.899243928Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:11.899885 env[1125]: time="2024-02-12T20:21:11.899856312Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 20:21:11.901541 env[1125]: time="2024-02-12T20:21:11.901504675Z" level=info msg="CreateContainer within sandbox \"973aca29dd28e67d92cd00e6d8cba24fecdbfa94d3c15de6193502d741c752d2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 20:21:11.913297 env[1125]: time="2024-02-12T20:21:11.913251531Z" level=info msg="CreateContainer within sandbox \"973aca29dd28e67d92cd00e6d8cba24fecdbfa94d3c15de6193502d741c752d2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8395083d86fcb6aaee1ae0498754afb61defbec3eeccaad6190946505af7360e\""
Feb 12 20:21:11.913674 env[1125]: time="2024-02-12T20:21:11.913649534Z" level=info msg="StartContainer for \"8395083d86fcb6aaee1ae0498754afb61defbec3eeccaad6190946505af7360e\""
Feb 12 20:21:11.927477 systemd[1]: Started cri-containerd-8395083d86fcb6aaee1ae0498754afb61defbec3eeccaad6190946505af7360e.scope.
Feb 12 20:21:11.947436 env[1125]: time="2024-02-12T20:21:11.947391897Z" level=info msg="StartContainer for \"8395083d86fcb6aaee1ae0498754afb61defbec3eeccaad6190946505af7360e\" returns successfully"
Feb 12 20:21:12.219314 kubelet[1393]: E0212 20:21:12.219292 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:12.333236 kubelet[1393]: I0212 20:21:12.333206 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-785xz" podStartSLOduration=4.647152101 podCreationTimestamp="2024-02-12 20:21:04 +0000 UTC" firstStartedPulling="2024-02-12 20:21:08.214054055 +0000 UTC m=+28.274907085" lastFinishedPulling="2024-02-12 20:21:11.900075874 +0000 UTC m=+31.960928904" observedRunningTime="2024-02-12 20:21:12.332861643 +0000 UTC m=+32.393714673" watchObservedRunningTime="2024-02-12 20:21:12.33317392 +0000 UTC m=+32.394026950"
Feb 12 20:21:13.220492 kubelet[1393]: E0212 20:21:13.220433 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:14.220798 kubelet[1393]: E0212 20:21:14.220732 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:15.220898 kubelet[1393]: E0212 20:21:15.220862 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:16.143998 kubelet[1393]: I0212 20:21:16.143948 1393 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:21:16.148445 systemd[1]: Created slice kubepods-besteffort-podd8012442_7630_498c_8d05_1a3a96937d19.slice.
Feb 12 20:21:16.221792 kubelet[1393]: E0212 20:21:16.221724 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:16.311046 kubelet[1393]: I0212 20:21:16.310988 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d8012442-7630-498c-8d05-1a3a96937d19-data\") pod \"nfs-server-provisioner-0\" (UID: \"d8012442-7630-498c-8d05-1a3a96937d19\") " pod="default/nfs-server-provisioner-0"
Feb 12 20:21:16.311046 kubelet[1393]: I0212 20:21:16.311057 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xpm\" (UniqueName: \"kubernetes.io/projected/d8012442-7630-498c-8d05-1a3a96937d19-kube-api-access-b9xpm\") pod \"nfs-server-provisioner-0\" (UID: \"d8012442-7630-498c-8d05-1a3a96937d19\") " pod="default/nfs-server-provisioner-0"
Feb 12 20:21:16.450928 env[1125]: time="2024-02-12T20:21:16.450885819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d8012442-7630-498c-8d05-1a3a96937d19,Namespace:default,Attempt:0,}"
Feb 12 20:21:16.715093 systemd-networkd[1023]: lxcd7e34d519cf2: Link UP
Feb 12 20:21:16.720864 kernel: eth0: renamed from tmp3229f
Feb 12 20:21:16.729127 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:21:16.729202 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd7e34d519cf2: link becomes ready
Feb 12 20:21:16.729160 systemd-networkd[1023]: lxcd7e34d519cf2: Gained carrier
Feb 12 20:21:16.946867 env[1125]: time="2024-02-12T20:21:16.946785625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:21:16.946867 env[1125]: time="2024-02-12T20:21:16.946825392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:21:16.946867 env[1125]: time="2024-02-12T20:21:16.946836062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:21:16.947063 env[1125]: time="2024-02-12T20:21:16.946966100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3229fe654e58b76355c6742799d39e0b1e41468cc58a2a14430a24db2b1f105a pid=2605 runtime=io.containerd.runc.v2
Feb 12 20:21:16.960409 systemd[1]: Started cri-containerd-3229fe654e58b76355c6742799d39e0b1e41468cc58a2a14430a24db2b1f105a.scope.
Feb 12 20:21:16.971627 systemd-resolved[1075]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 20:21:16.991626 env[1125]: time="2024-02-12T20:21:16.991572539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d8012442-7630-498c-8d05-1a3a96937d19,Namespace:default,Attempt:0,} returns sandbox id \"3229fe654e58b76355c6742799d39e0b1e41468cc58a2a14430a24db2b1f105a\""
Feb 12 20:21:16.993061 env[1125]: time="2024-02-12T20:21:16.993040297Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 20:21:17.222477 kubelet[1393]: E0212 20:21:17.222207 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:17.421177 systemd[1]: run-containerd-runc-k8s.io-3229fe654e58b76355c6742799d39e0b1e41468cc58a2a14430a24db2b1f105a-runc.J3Jt4Q.mount: Deactivated successfully.
Feb 12 20:21:18.223228 kubelet[1393]: E0212 20:21:18.223187 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:18.699040 systemd-networkd[1023]: lxcd7e34d519cf2: Gained IPv6LL
Feb 12 20:21:19.223736 kubelet[1393]: E0212 20:21:19.223689 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:19.279433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075947630.mount: Deactivated successfully.
Feb 12 20:21:20.158051 kubelet[1393]: E0212 20:21:20.157990 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:20.224839 kubelet[1393]: E0212 20:21:20.224805 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:21.225865 kubelet[1393]: E0212 20:21:21.225817 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:21.797044 update_engine[1115]: I0212 20:21:21.796977 1115 update_attempter.cc:509] Updating boot flags...
Feb 12 20:21:22.226658 kubelet[1393]: E0212 20:21:22.226625 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:22.480630 env[1125]: time="2024-02-12T20:21:22.480528048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:22.482269 env[1125]: time="2024-02-12T20:21:22.482247540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:22.483691 env[1125]: time="2024-02-12T20:21:22.483670459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:22.485133 env[1125]: time="2024-02-12T20:21:22.485102505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:22.485671 env[1125]: time="2024-02-12T20:21:22.485650995Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 12 20:21:22.487116 env[1125]: time="2024-02-12T20:21:22.487094583Z" level=info msg="CreateContainer within sandbox \"3229fe654e58b76355c6742799d39e0b1e41468cc58a2a14430a24db2b1f105a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 20:21:22.498246 env[1125]: time="2024-02-12T20:21:22.498204426Z" level=info msg="CreateContainer within sandbox \"3229fe654e58b76355c6742799d39e0b1e41468cc58a2a14430a24db2b1f105a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"caec715fd6e673ad2bd6cda9ccd0af58dcbee279fbb8a32eed7d64458b021932\""
Feb 12 20:21:22.498612 env[1125]: time="2024-02-12T20:21:22.498589686Z" level=info msg="StartContainer for \"caec715fd6e673ad2bd6cda9ccd0af58dcbee279fbb8a32eed7d64458b021932\""
Feb 12 20:21:22.511621 systemd[1]: Started cri-containerd-caec715fd6e673ad2bd6cda9ccd0af58dcbee279fbb8a32eed7d64458b021932.scope.
Feb 12 20:21:22.594206 env[1125]: time="2024-02-12T20:21:22.594158493Z" level=info msg="StartContainer for \"caec715fd6e673ad2bd6cda9ccd0af58dcbee279fbb8a32eed7d64458b021932\" returns successfully"
Feb 12 20:21:23.227153 kubelet[1393]: E0212 20:21:23.227113 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:23.353600 kubelet[1393]: I0212 20:21:23.353577 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.8602701430000002 podCreationTimestamp="2024-02-12 20:21:16 +0000 UTC" firstStartedPulling="2024-02-12 20:21:16.992629383 +0000 UTC m=+37.053482413" lastFinishedPulling="2024-02-12 20:21:22.485899857 +0000 UTC m=+42.546752887" observedRunningTime="2024-02-12 20:21:23.353437031 +0000 UTC m=+43.414290061" watchObservedRunningTime="2024-02-12 20:21:23.353540617 +0000 UTC m=+43.414393647"
Feb 12 20:21:24.227511 kubelet[1393]: E0212 20:21:24.227461 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:25.228099 kubelet[1393]: E0212 20:21:25.228067 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:26.228533 kubelet[1393]: E0212 20:21:26.228496 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:27.229255 kubelet[1393]: E0212 20:21:27.229207 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:28.230365 kubelet[1393]: E0212 20:21:28.230321 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:29.231218 kubelet[1393]: E0212 20:21:29.231170 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:30.232324 kubelet[1393]: E0212 20:21:30.232285 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:31.232725 kubelet[1393]: E0212 20:21:31.232662 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:32.232764 kubelet[1393]: E0212 20:21:32.232732 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:33.068560 kubelet[1393]: I0212 20:21:33.068532 1393 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:21:33.072200 systemd[1]: Created slice kubepods-besteffort-podb3d39715_20e4_475d_9d63_e395267b5325.slice.
Feb 12 20:21:33.233736 kubelet[1393]: E0212 20:21:33.233708 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:33.284126 kubelet[1393]: I0212 20:21:33.284099 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drb2z\" (UniqueName: \"kubernetes.io/projected/b3d39715-20e4-475d-9d63-e395267b5325-kube-api-access-drb2z\") pod \"test-pod-1\" (UID: \"b3d39715-20e4-475d-9d63-e395267b5325\") " pod="default/test-pod-1"
Feb 12 20:21:33.284241 kubelet[1393]: I0212 20:21:33.284133 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77fb74a7-b39a-49a0-b618-8bc3d637341d\" (UniqueName: \"kubernetes.io/nfs/b3d39715-20e4-475d-9d63-e395267b5325-pvc-77fb74a7-b39a-49a0-b618-8bc3d637341d\") pod \"test-pod-1\" (UID: \"b3d39715-20e4-475d-9d63-e395267b5325\") " pod="default/test-pod-1"
Feb 12 20:21:33.402862 kernel: FS-Cache: Loaded
Feb 12 20:21:33.436226 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 20:21:33.436310 kernel: RPC: Registered udp transport module.
Feb 12 20:21:33.436339 kernel: RPC: Registered tcp transport module.
Feb 12 20:21:33.436362 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 20:21:33.473862 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 20:21:33.646088 kernel: NFS: Registering the id_resolver key type
Feb 12 20:21:33.646234 kernel: Key type id_resolver registered
Feb 12 20:21:33.646261 kernel: Key type id_legacy registered
Feb 12 20:21:33.664805 nfsidmap[2739]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 12 20:21:33.667172 nfsidmap[2742]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 12 20:21:33.974727 env[1125]: time="2024-02-12T20:21:33.974654738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b3d39715-20e4-475d-9d63-e395267b5325,Namespace:default,Attempt:0,}"
Feb 12 20:21:34.007145 systemd-networkd[1023]: lxcec93bf2306d9: Link UP
Feb 12 20:21:34.017873 kernel: eth0: renamed from tmp89e31
Feb 12 20:21:34.025424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:21:34.025518 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcec93bf2306d9: link becomes ready
Feb 12 20:21:34.025243 systemd-networkd[1023]: lxcec93bf2306d9: Gained carrier
Feb 12 20:21:34.234551 kubelet[1393]: E0212 20:21:34.234453 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:34.281954 env[1125]: time="2024-02-12T20:21:34.281879005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:21:34.281954 env[1125]: time="2024-02-12T20:21:34.281914261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:21:34.281954 env[1125]: time="2024-02-12T20:21:34.281923568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:21:34.282180 env[1125]: time="2024-02-12T20:21:34.282062501Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89e31b8bb6614223913d9c114d54ebc4a6dd89eb2316b288a2fd73abec048658 pid=2778 runtime=io.containerd.runc.v2
Feb 12 20:21:34.291179 systemd[1]: Started cri-containerd-89e31b8bb6614223913d9c114d54ebc4a6dd89eb2316b288a2fd73abec048658.scope.
Feb 12 20:21:34.305318 systemd-resolved[1075]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 20:21:34.323886 env[1125]: time="2024-02-12T20:21:34.323829739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b3d39715-20e4-475d-9d63-e395267b5325,Namespace:default,Attempt:0,} returns sandbox id \"89e31b8bb6614223913d9c114d54ebc4a6dd89eb2316b288a2fd73abec048658\""
Feb 12 20:21:34.325523 env[1125]: time="2024-02-12T20:21:34.325504416Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:21:35.055204 env[1125]: time="2024-02-12T20:21:35.055143284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:35.111995 env[1125]: time="2024-02-12T20:21:35.111919605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:35.158491 env[1125]: time="2024-02-12T20:21:35.158444705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:35.198861 env[1125]: time="2024-02-12T20:21:35.198812500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:21:35.199422 env[1125]: time="2024-02-12T20:21:35.199394016Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 20:21:35.201006 env[1125]: time="2024-02-12T20:21:35.200972130Z" level=info msg="CreateContainer within sandbox \"89e31b8bb6614223913d9c114d54ebc4a6dd89eb2316b288a2fd73abec048658\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 20:21:35.210973 systemd-networkd[1023]: lxcec93bf2306d9: Gained IPv6LL
Feb 12 20:21:35.235380 kubelet[1393]: E0212 20:21:35.235339 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:35.365744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623924023.mount: Deactivated successfully.
Feb 12 20:21:35.367983 env[1125]: time="2024-02-12T20:21:35.367939428Z" level=info msg="CreateContainer within sandbox \"89e31b8bb6614223913d9c114d54ebc4a6dd89eb2316b288a2fd73abec048658\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a378b2ff5543c00d2b769159d5e3610b1a451c891b9c5e728caefd36bec05624\""
Feb 12 20:21:35.368451 env[1125]: time="2024-02-12T20:21:35.368412100Z" level=info msg="StartContainer for \"a378b2ff5543c00d2b769159d5e3610b1a451c891b9c5e728caefd36bec05624\""
Feb 12 20:21:35.385429 systemd[1]: Started cri-containerd-a378b2ff5543c00d2b769159d5e3610b1a451c891b9c5e728caefd36bec05624.scope.
Feb 12 20:21:35.409393 env[1125]: time="2024-02-12T20:21:35.409354217Z" level=info msg="StartContainer for \"a378b2ff5543c00d2b769159d5e3610b1a451c891b9c5e728caefd36bec05624\" returns successfully" Feb 12 20:21:36.236366 kubelet[1393]: E0212 20:21:36.236335 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:36.456449 kubelet[1393]: I0212 20:21:36.456409 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.581790278 podCreationTimestamp="2024-02-12 20:21:16 +0000 UTC" firstStartedPulling="2024-02-12 20:21:34.325060479 +0000 UTC m=+54.385913499" lastFinishedPulling="2024-02-12 20:21:35.199636593 +0000 UTC m=+55.260489623" observedRunningTime="2024-02-12 20:21:36.456080614 +0000 UTC m=+56.516933644" watchObservedRunningTime="2024-02-12 20:21:36.456366402 +0000 UTC m=+56.517219432" Feb 12 20:21:37.236977 kubelet[1393]: E0212 20:21:37.236929 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:38.237324 kubelet[1393]: E0212 20:21:38.237287 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:38.761009 env[1125]: time="2024-02-12T20:21:38.760947538Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:21:38.766700 env[1125]: time="2024-02-12T20:21:38.766657530Z" level=info msg="StopContainer for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" with timeout 1 (s)" Feb 12 20:21:38.767010 env[1125]: time="2024-02-12T20:21:38.766978335Z" level=info msg="Stop container \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" 
with signal terminated" Feb 12 20:21:38.773634 systemd-networkd[1023]: lxc_health: Link DOWN Feb 12 20:21:38.773641 systemd-networkd[1023]: lxc_health: Lost carrier Feb 12 20:21:38.809513 systemd[1]: cri-containerd-cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2.scope: Deactivated successfully. Feb 12 20:21:38.809900 systemd[1]: cri-containerd-cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2.scope: Consumed 6.705s CPU time. Feb 12 20:21:38.825634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2-rootfs.mount: Deactivated successfully. Feb 12 20:21:38.833651 env[1125]: time="2024-02-12T20:21:38.833607201Z" level=info msg="shim disconnected" id=cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2 Feb 12 20:21:38.833743 env[1125]: time="2024-02-12T20:21:38.833656615Z" level=warning msg="cleaning up after shim disconnected" id=cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2 namespace=k8s.io Feb 12 20:21:38.833743 env[1125]: time="2024-02-12T20:21:38.833669529Z" level=info msg="cleaning up dead shim" Feb 12 20:21:38.839512 env[1125]: time="2024-02-12T20:21:38.839460213Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2908 runtime=io.containerd.runc.v2\n" Feb 12 20:21:38.842858 env[1125]: time="2024-02-12T20:21:38.842823727Z" level=info msg="StopContainer for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" returns successfully" Feb 12 20:21:38.843479 env[1125]: time="2024-02-12T20:21:38.843440399Z" level=info msg="StopPodSandbox for \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\"" Feb 12 20:21:38.843527 env[1125]: time="2024-02-12T20:21:38.843509810Z" level=info msg="Container to stop \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Feb 12 20:21:38.843527 env[1125]: time="2024-02-12T20:21:38.843523256Z" level=info msg="Container to stop \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.843586 env[1125]: time="2024-02-12T20:21:38.843534817Z" level=info msg="Container to stop \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.843586 env[1125]: time="2024-02-12T20:21:38.843545187Z" level=info msg="Container to stop \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.843586 env[1125]: time="2024-02-12T20:21:38.843566697Z" level=info msg="Container to stop \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.845360 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d-shm.mount: Deactivated successfully. Feb 12 20:21:38.847998 systemd[1]: cri-containerd-8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d.scope: Deactivated successfully. Feb 12 20:21:38.860724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d-rootfs.mount: Deactivated successfully. 
Feb 12 20:21:38.863512 env[1125]: time="2024-02-12T20:21:38.863475138Z" level=info msg="shim disconnected" id=8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d Feb 12 20:21:38.864091 env[1125]: time="2024-02-12T20:21:38.864064178Z" level=warning msg="cleaning up after shim disconnected" id=8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d namespace=k8s.io Feb 12 20:21:38.864091 env[1125]: time="2024-02-12T20:21:38.864079918Z" level=info msg="cleaning up dead shim" Feb 12 20:21:38.870142 env[1125]: time="2024-02-12T20:21:38.870111005Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Feb 12 20:21:38.870392 env[1125]: time="2024-02-12T20:21:38.870368751Z" level=info msg="TearDown network for sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" successfully" Feb 12 20:21:38.870392 env[1125]: time="2024-02-12T20:21:38.870388538Z" level=info msg="StopPodSandbox for \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" returns successfully" Feb 12 20:21:39.018379 kubelet[1393]: I0212 20:21:39.017225 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-run\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018379 kubelet[1393]: I0212 20:21:39.017283 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-lib-modules\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018379 kubelet[1393]: I0212 20:21:39.017318 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/ee70eb59-f12d-4c79-a4e9-57b93d767abf-clustermesh-secrets\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018379 kubelet[1393]: I0212 20:21:39.017343 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cni-path\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018379 kubelet[1393]: I0212 20:21:39.017342 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.018379 kubelet[1393]: I0212 20:21:39.017374 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-config-path\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018900 kubelet[1393]: I0212 20:21:39.017432 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.018900 kubelet[1393]: I0212 20:21:39.017475 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.018900 kubelet[1393]: I0212 20:21:39.017459 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-bpf-maps\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018900 kubelet[1393]: I0212 20:21:39.017559 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-cgroup\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018900 kubelet[1393]: I0212 20:21:39.017612 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpnlq\" (UniqueName: \"kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-kube-api-access-dpnlq\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.018900 kubelet[1393]: W0212 20:21:39.017638 1393 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ee70eb59-f12d-4c79-a4e9-57b93d767abf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:21:39.019045 kubelet[1393]: I0212 20:21:39.017635 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hostproc\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.019045 kubelet[1393]: I0212 20:21:39.017704 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-kernel\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.019045 kubelet[1393]: I0212 20:21:39.017724 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hubble-tls\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.019045 kubelet[1393]: I0212 20:21:39.017748 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-etc-cni-netd\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.019045 kubelet[1393]: I0212 20:21:39.017765 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-xtables-lock\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.019045 kubelet[1393]: I0212 20:21:39.017783 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-net\") pod \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\" (UID: \"ee70eb59-f12d-4c79-a4e9-57b93d767abf\") " Feb 12 20:21:39.019196 kubelet[1393]: I0212 20:21:39.017820 1393 
reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-run\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.019196 kubelet[1393]: I0212 20:21:39.017829 1393 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-lib-modules\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.019196 kubelet[1393]: I0212 20:21:39.017838 1393 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-bpf-maps\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.019196 kubelet[1393]: I0212 20:21:39.017888 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.019196 kubelet[1393]: I0212 20:21:39.017909 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.019196 kubelet[1393]: I0212 20:21:39.018586 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hostproc" (OuterVolumeSpecName: "hostproc") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.019346 kubelet[1393]: I0212 20:21:39.018611 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.019346 kubelet[1393]: I0212 20:21:39.018648 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cni-path" (OuterVolumeSpecName: "cni-path") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.019346 kubelet[1393]: I0212 20:21:39.018673 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.019346 kubelet[1393]: I0212 20:21:39.018694 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:39.020689 kubelet[1393]: I0212 20:21:39.020595 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:21:39.020689 kubelet[1393]: I0212 20:21:39.020691 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee70eb59-f12d-4c79-a4e9-57b93d767abf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:21:39.021489 kubelet[1393]: I0212 20:21:39.021457 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-kube-api-access-dpnlq" (OuterVolumeSpecName: "kube-api-access-dpnlq") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "kube-api-access-dpnlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:39.021871 systemd[1]: var-lib-kubelet-pods-ee70eb59\x2df12d\x2d4c79\x2da4e9\x2d57b93d767abf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:21:39.024034 kubelet[1393]: I0212 20:21:39.023991 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ee70eb59-f12d-4c79-a4e9-57b93d767abf" (UID: "ee70eb59-f12d-4c79-a4e9-57b93d767abf"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:39.118495 kubelet[1393]: I0212 20:21:39.118310 1393 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-kernel\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118495 kubelet[1393]: I0212 20:21:39.118470 1393 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hubble-tls\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118495 kubelet[1393]: I0212 20:21:39.118484 1393 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-etc-cni-netd\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118495 kubelet[1393]: I0212 20:21:39.118497 1393 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-xtables-lock\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118495 kubelet[1393]: I0212 20:21:39.118510 1393 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-host-proc-sys-net\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118824 kubelet[1393]: I0212 20:21:39.118522 1393 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee70eb59-f12d-4c79-a4e9-57b93d767abf-clustermesh-secrets\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118824 kubelet[1393]: I0212 20:21:39.118533 1393 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cni-path\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118824 kubelet[1393]: I0212 
20:21:39.118554 1393 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-config-path\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118824 kubelet[1393]: I0212 20:21:39.118563 1393 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-cilium-cgroup\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118824 kubelet[1393]: I0212 20:21:39.118575 1393 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dpnlq\" (UniqueName: \"kubernetes.io/projected/ee70eb59-f12d-4c79-a4e9-57b93d767abf-kube-api-access-dpnlq\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.118824 kubelet[1393]: I0212 20:21:39.118587 1393 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee70eb59-f12d-4c79-a4e9-57b93d767abf-hostproc\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:39.238178 kubelet[1393]: E0212 20:21:39.238090 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:39.374606 kubelet[1393]: I0212 20:21:39.374500 1393 scope.go:115] "RemoveContainer" containerID="cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2" Feb 12 20:21:39.376311 env[1125]: time="2024-02-12T20:21:39.376278382Z" level=info msg="RemoveContainer for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\"" Feb 12 20:21:39.377240 systemd[1]: Removed slice kubepods-burstable-podee70eb59_f12d_4c79_a4e9_57b93d767abf.slice. Feb 12 20:21:39.377311 systemd[1]: kubepods-burstable-podee70eb59_f12d_4c79_a4e9_57b93d767abf.slice: Consumed 6.799s CPU time. 
Feb 12 20:21:39.538575 env[1125]: time="2024-02-12T20:21:39.538502924Z" level=info msg="RemoveContainer for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" returns successfully" Feb 12 20:21:39.538934 kubelet[1393]: I0212 20:21:39.538905 1393 scope.go:115] "RemoveContainer" containerID="1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0" Feb 12 20:21:39.540107 env[1125]: time="2024-02-12T20:21:39.540074404Z" level=info msg="RemoveContainer for \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\"" Feb 12 20:21:39.561312 env[1125]: time="2024-02-12T20:21:39.561259340Z" level=info msg="RemoveContainer for \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\" returns successfully" Feb 12 20:21:39.561594 kubelet[1393]: I0212 20:21:39.561565 1393 scope.go:115] "RemoveContainer" containerID="96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165" Feb 12 20:21:39.562770 env[1125]: time="2024-02-12T20:21:39.562729026Z" level=info msg="RemoveContainer for \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\"" Feb 12 20:21:39.566608 env[1125]: time="2024-02-12T20:21:39.566574386Z" level=info msg="RemoveContainer for \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\" returns successfully" Feb 12 20:21:39.566770 kubelet[1393]: I0212 20:21:39.566739 1393 scope.go:115] "RemoveContainer" containerID="80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13" Feb 12 20:21:39.567859 env[1125]: time="2024-02-12T20:21:39.567818829Z" level=info msg="RemoveContainer for \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\"" Feb 12 20:21:39.570355 env[1125]: time="2024-02-12T20:21:39.570322412Z" level=info msg="RemoveContainer for \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\" returns successfully" Feb 12 20:21:39.570530 kubelet[1393]: I0212 20:21:39.570502 1393 scope.go:115] "RemoveContainer" 
containerID="dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557" Feb 12 20:21:39.571521 env[1125]: time="2024-02-12T20:21:39.571488147Z" level=info msg="RemoveContainer for \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\"" Feb 12 20:21:39.573965 env[1125]: time="2024-02-12T20:21:39.573926629Z" level=info msg="RemoveContainer for \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\" returns successfully" Feb 12 20:21:39.574180 kubelet[1393]: I0212 20:21:39.574141 1393 scope.go:115] "RemoveContainer" containerID="cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2" Feb 12 20:21:39.574481 env[1125]: time="2024-02-12T20:21:39.574413826Z" level=error msg="ContainerStatus for \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\": not found" Feb 12 20:21:39.574662 kubelet[1393]: E0212 20:21:39.574632 1393 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\": not found" containerID="cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2" Feb 12 20:21:39.574711 kubelet[1393]: I0212 20:21:39.574683 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2} err="failed to get container status \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb6c3d9e5cff7fd895d0b4737995345d6b12947f96fb697d02ba53754aba87c2\": not found" Feb 12 20:21:39.574711 kubelet[1393]: I0212 20:21:39.574700 1393 scope.go:115] "RemoveContainer" 
containerID="1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0" Feb 12 20:21:39.574943 env[1125]: time="2024-02-12T20:21:39.574884682Z" level=error msg="ContainerStatus for \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\": not found" Feb 12 20:21:39.575123 kubelet[1393]: E0212 20:21:39.575017 1393 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\": not found" containerID="1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0" Feb 12 20:21:39.575123 kubelet[1393]: I0212 20:21:39.575045 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0} err="failed to get container status \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"1818b55f3d222f963882e544e204dd3d85b103d7f873750e3b855be4bc7727a0\": not found" Feb 12 20:21:39.575123 kubelet[1393]: I0212 20:21:39.575054 1393 scope.go:115] "RemoveContainer" containerID="96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165" Feb 12 20:21:39.575376 env[1125]: time="2024-02-12T20:21:39.575311495Z" level=error msg="ContainerStatus for \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\": not found" Feb 12 20:21:39.575472 kubelet[1393]: E0212 20:21:39.575456 1393 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\": not found" containerID="96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165" Feb 12 20:21:39.575472 kubelet[1393]: I0212 20:21:39.575475 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165} err="failed to get container status \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\": rpc error: code = NotFound desc = an error occurred when try to find container \"96bc708e29fc737db0832cd1325482f4bd0f6817fbc6406e2995d6f9e5490165\": not found" Feb 12 20:21:39.575580 kubelet[1393]: I0212 20:21:39.575491 1393 scope.go:115] "RemoveContainer" containerID="80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13" Feb 12 20:21:39.575708 env[1125]: time="2024-02-12T20:21:39.575660012Z" level=error msg="ContainerStatus for \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\": not found" Feb 12 20:21:39.575805 kubelet[1393]: E0212 20:21:39.575790 1393 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\": not found" containerID="80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13" Feb 12 20:21:39.575865 kubelet[1393]: I0212 20:21:39.575822 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13} err="failed to get container status \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"80d88e825d6d68e5d9d52f3ad8aadbef462d27b992d0cf50144e535078782c13\": not found" Feb 12 20:21:39.575865 kubelet[1393]: I0212 20:21:39.575830 1393 scope.go:115] "RemoveContainer" containerID="dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557" Feb 12 20:21:39.576030 env[1125]: time="2024-02-12T20:21:39.575983962Z" level=error msg="ContainerStatus for \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\": not found" Feb 12 20:21:39.576151 kubelet[1393]: E0212 20:21:39.576131 1393 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\": not found" containerID="dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557" Feb 12 20:21:39.576216 kubelet[1393]: I0212 20:21:39.576155 1393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557} err="failed to get container status \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\": rpc error: code = NotFound desc = an error occurred when try to find container \"dafc411c6833fd80db69c6b2f05b2f88a76c10fd7e653948c237a4447c5cc557\": not found" Feb 12 20:21:39.746469 systemd[1]: var-lib-kubelet-pods-ee70eb59\x2df12d\x2d4c79\x2da4e9\x2d57b93d767abf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpnlq.mount: Deactivated successfully. Feb 12 20:21:39.746604 systemd[1]: var-lib-kubelet-pods-ee70eb59\x2df12d\x2d4c79\x2da4e9\x2d57b93d767abf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 20:21:40.158449 kubelet[1393]: E0212 20:21:40.158313 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:40.163192 env[1125]: time="2024-02-12T20:21:40.163146612Z" level=info msg="StopPodSandbox for \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\"" Feb 12 20:21:40.163489 env[1125]: time="2024-02-12T20:21:40.163244006Z" level=info msg="TearDown network for sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" successfully" Feb 12 20:21:40.163489 env[1125]: time="2024-02-12T20:21:40.163284682Z" level=info msg="StopPodSandbox for \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" returns successfully" Feb 12 20:21:40.164065 env[1125]: time="2024-02-12T20:21:40.163596078Z" level=info msg="RemovePodSandbox for \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\"" Feb 12 20:21:40.164065 env[1125]: time="2024-02-12T20:21:40.163616767Z" level=info msg="Forcibly stopping sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\"" Feb 12 20:21:40.164065 env[1125]: time="2024-02-12T20:21:40.163664256Z" level=info msg="TearDown network for sandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" successfully" Feb 12 20:21:40.166423 env[1125]: time="2024-02-12T20:21:40.166366062Z" level=info msg="RemovePodSandbox \"8ca6837d76160c300491d2b0f6caf8ede3e3fcce140a1669da6228779de13c2d\" returns successfully" Feb 12 20:21:40.219389 kubelet[1393]: E0212 20:21:40.219364 1393 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:21:40.233001 kubelet[1393]: I0212 20:21:40.232986 1393 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ee70eb59-f12d-4c79-a4e9-57b93d767abf 
path="/var/lib/kubelet/pods/ee70eb59-f12d-4c79-a4e9-57b93d767abf/volumes" Feb 12 20:21:40.238500 kubelet[1393]: E0212 20:21:40.238464 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:41.239559 kubelet[1393]: E0212 20:21:41.239490 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:41.295207 kubelet[1393]: I0212 20:21:41.295176 1393 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:21:41.295397 kubelet[1393]: E0212 20:21:41.295243 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee70eb59-f12d-4c79-a4e9-57b93d767abf" containerName="cilium-agent" Feb 12 20:21:41.295397 kubelet[1393]: E0212 20:21:41.295255 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee70eb59-f12d-4c79-a4e9-57b93d767abf" containerName="mount-cgroup" Feb 12 20:21:41.295397 kubelet[1393]: E0212 20:21:41.295267 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee70eb59-f12d-4c79-a4e9-57b93d767abf" containerName="apply-sysctl-overwrites" Feb 12 20:21:41.295397 kubelet[1393]: E0212 20:21:41.295275 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee70eb59-f12d-4c79-a4e9-57b93d767abf" containerName="clean-cilium-state" Feb 12 20:21:41.295397 kubelet[1393]: E0212 20:21:41.295283 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee70eb59-f12d-4c79-a4e9-57b93d767abf" containerName="mount-bpf-fs" Feb 12 20:21:41.295397 kubelet[1393]: I0212 20:21:41.295300 1393 memory_manager.go:346] "RemoveStaleState removing state" podUID="ee70eb59-f12d-4c79-a4e9-57b93d767abf" containerName="cilium-agent" Feb 12 20:21:41.295397 kubelet[1393]: I0212 20:21:41.295368 1393 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:21:41.299858 systemd[1]: Created slice kubepods-besteffort-pod165fb37a_90c9_44c1_8728_b346ed39d88a.slice. 
Feb 12 20:21:41.303670 systemd[1]: Created slice kubepods-burstable-podf0f1f742_c02a_4180_9ac8_124ecae06c54.slice. Feb 12 20:21:41.430183 kubelet[1393]: I0212 20:21:41.430140 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-cgroup\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430183 kubelet[1393]: I0212 20:21:41.430190 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cni-path\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430370 kubelet[1393]: I0212 20:21:41.430215 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-etc-cni-netd\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430370 kubelet[1393]: I0212 20:21:41.430328 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/165fb37a-90c9-44c1-8728-b346ed39d88a-cilium-config-path\") pod \"cilium-operator-574c4bb98d-v4ljc\" (UID: \"165fb37a-90c9-44c1-8728-b346ed39d88a\") " pod="kube-system/cilium-operator-574c4bb98d-v4ljc" Feb 12 20:21:41.430370 kubelet[1393]: I0212 20:21:41.430364 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-kube-api-access-6952s\") pod \"cilium-hp88b\" (UID: 
\"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430449 kubelet[1393]: I0212 20:21:41.430391 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-run\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430449 kubelet[1393]: I0212 20:21:41.430421 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-clustermesh-secrets\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430449 kubelet[1393]: I0212 20:21:41.430446 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-net\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430521 kubelet[1393]: I0212 20:21:41.430470 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-kernel\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430521 kubelet[1393]: I0212 20:21:41.430493 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-hubble-tls\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430600 kubelet[1393]: 
I0212 20:21:41.430548 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-lib-modules\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430600 kubelet[1393]: I0212 20:21:41.430586 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-config-path\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430643 kubelet[1393]: I0212 20:21:41.430613 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmst\" (UniqueName: \"kubernetes.io/projected/165fb37a-90c9-44c1-8728-b346ed39d88a-kube-api-access-ggmst\") pod \"cilium-operator-574c4bb98d-v4ljc\" (UID: \"165fb37a-90c9-44c1-8728-b346ed39d88a\") " pod="kube-system/cilium-operator-574c4bb98d-v4ljc" Feb 12 20:21:41.430667 kubelet[1393]: I0212 20:21:41.430652 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-ipsec-secrets\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430691 kubelet[1393]: I0212 20:21:41.430685 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-bpf-maps\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430727 kubelet[1393]: I0212 20:21:41.430716 1393 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-hostproc\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.430753 kubelet[1393]: I0212 20:21:41.430742 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-xtables-lock\") pod \"cilium-hp88b\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " pod="kube-system/cilium-hp88b" Feb 12 20:21:41.603167 kubelet[1393]: E0212 20:21:41.602363 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:41.603296 env[1125]: time="2024-02-12T20:21:41.602961172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-v4ljc,Uid:165fb37a-90c9-44c1-8728-b346ed39d88a,Namespace:kube-system,Attempt:0,}" Feb 12 20:21:41.612093 kubelet[1393]: E0212 20:21:41.611384 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:41.612563 env[1125]: time="2024-02-12T20:21:41.611830763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hp88b,Uid:f0f1f742-c02a-4180-9ac8-124ecae06c54,Namespace:kube-system,Attempt:0,}" Feb 12 20:21:41.615767 env[1125]: time="2024-02-12T20:21:41.615708361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:21:41.615767 env[1125]: time="2024-02-12T20:21:41.615746171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:21:41.615767 env[1125]: time="2024-02-12T20:21:41.615756181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:21:41.615975 env[1125]: time="2024-02-12T20:21:41.615916503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d429a3b821980b4e48065dbc44c0b3b853553daaddb6faf44aab5573a5834310 pid=2968 runtime=io.containerd.runc.v2 Feb 12 20:21:41.623480 env[1125]: time="2024-02-12T20:21:41.623346355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:21:41.623480 env[1125]: time="2024-02-12T20:21:41.623381521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:21:41.623480 env[1125]: time="2024-02-12T20:21:41.623390799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:21:41.623895 env[1125]: time="2024-02-12T20:21:41.623560478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6 pid=2991 runtime=io.containerd.runc.v2 Feb 12 20:21:41.626413 systemd[1]: Started cri-containerd-d429a3b821980b4e48065dbc44c0b3b853553daaddb6faf44aab5573a5834310.scope. Feb 12 20:21:41.633427 systemd[1]: Started cri-containerd-4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6.scope. 
Feb 12 20:21:41.651043 env[1125]: time="2024-02-12T20:21:41.650989577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hp88b,Uid:f0f1f742-c02a-4180-9ac8-124ecae06c54,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6\"" Feb 12 20:21:41.651604 kubelet[1393]: E0212 20:21:41.651588 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:41.653155 env[1125]: time="2024-02-12T20:21:41.653132689Z" level=info msg="CreateContainer within sandbox \"4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:21:41.661795 env[1125]: time="2024-02-12T20:21:41.661752862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-v4ljc,Uid:165fb37a-90c9-44c1-8728-b346ed39d88a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d429a3b821980b4e48065dbc44c0b3b853553daaddb6faf44aab5573a5834310\"" Feb 12 20:21:41.662275 kubelet[1393]: E0212 20:21:41.662254 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:41.662917 env[1125]: time="2024-02-12T20:21:41.662898298Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:21:41.667325 env[1125]: time="2024-02-12T20:21:41.667299590Z" level=info msg="CreateContainer within sandbox \"4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\"" Feb 12 20:21:41.667644 env[1125]: time="2024-02-12T20:21:41.667621797Z" level=info 
msg="StartContainer for \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\"" Feb 12 20:21:41.680414 systemd[1]: Started cri-containerd-a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a.scope. Feb 12 20:21:41.689817 systemd[1]: cri-containerd-a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a.scope: Deactivated successfully. Feb 12 20:21:41.690051 systemd[1]: Stopped cri-containerd-a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a.scope. Feb 12 20:21:41.703414 env[1125]: time="2024-02-12T20:21:41.703357257Z" level=info msg="shim disconnected" id=a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a Feb 12 20:21:41.703414 env[1125]: time="2024-02-12T20:21:41.703414635Z" level=warning msg="cleaning up after shim disconnected" id=a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a namespace=k8s.io Feb 12 20:21:41.703414 env[1125]: time="2024-02-12T20:21:41.703425305Z" level=info msg="cleaning up dead shim" Feb 12 20:21:41.709031 env[1125]: time="2024-02-12T20:21:41.708987893Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3065 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:21:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:21:41.709336 env[1125]: time="2024-02-12T20:21:41.709233446Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Feb 12 20:21:41.709502 env[1125]: time="2024-02-12T20:21:41.709439032Z" level=error msg="Failed to pipe stdout of container \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\"" error="reading from a closed fifo" Feb 12 20:21:41.711335 env[1125]: time="2024-02-12T20:21:41.711298301Z" level=error msg="Failed 
to pipe stderr of container \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\"" error="reading from a closed fifo" Feb 12 20:21:41.713389 env[1125]: time="2024-02-12T20:21:41.713331327Z" level=error msg="StartContainer for \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:21:41.713633 kubelet[1393]: E0212 20:21:41.713602 1393 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a" Feb 12 20:21:41.713770 kubelet[1393]: E0212 20:21:41.713750 1393 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:21:41.713770 kubelet[1393]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:21:41.713770 kubelet[1393]: rm /hostbin/cilium-mount Feb 12 20:21:41.713895 kubelet[1393]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6952s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-hp88b_kube-system(f0f1f742-c02a-4180-9ac8-124ecae06c54): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:21:41.713895 kubelet[1393]: E0212 20:21:41.713786 1393 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hp88b" podUID=f0f1f742-c02a-4180-9ac8-124ecae06c54 Feb 12 20:21:42.240136 kubelet[1393]: E0212 20:21:42.240087 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:42.381490 env[1125]: time="2024-02-12T20:21:42.381439719Z" level=info msg="StopPodSandbox for \"4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6\"" Feb 12 20:21:42.381650 env[1125]: time="2024-02-12T20:21:42.381506494Z" level=info msg="Container to stop \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:42.385948 systemd[1]: cri-containerd-4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6.scope: Deactivated successfully. Feb 12 20:21:42.402401 env[1125]: time="2024-02-12T20:21:42.402337628Z" level=info msg="shim disconnected" id=4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6 Feb 12 20:21:42.402401 env[1125]: time="2024-02-12T20:21:42.402378896Z" level=warning msg="cleaning up after shim disconnected" id=4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6 namespace=k8s.io Feb 12 20:21:42.402401 env[1125]: time="2024-02-12T20:21:42.402389405Z" level=info msg="cleaning up dead shim" Feb 12 20:21:42.408032 env[1125]: time="2024-02-12T20:21:42.407996114Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3095 runtime=io.containerd.runc.v2\n" Feb 12 20:21:42.408297 env[1125]: time="2024-02-12T20:21:42.408272665Z" level=info msg="TearDown network for sandbox \"4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6\" successfully" Feb 12 20:21:42.408353 env[1125]: time="2024-02-12T20:21:42.408296399Z" level=info msg="StopPodSandbox for 
\"4b8696a15ebf953a08e79c71220aa630f4dc97bb6c06cc438783b307d5109db6\" returns successfully" Feb 12 20:21:42.539640 kubelet[1393]: I0212 20:21:42.539552 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-run\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539640 kubelet[1393]: I0212 20:21:42.539593 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-config-path\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539640 kubelet[1393]: I0212 20:21:42.539609 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-bpf-maps\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539640 kubelet[1393]: I0212 20:21:42.539624 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-cgroup\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539640 kubelet[1393]: I0212 20:21:42.539639 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cni-path\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539657 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-hubble-tls\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539672 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-xtables-lock\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539671 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539687 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-hostproc\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539705 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-etc-cni-netd\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539705 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cni-path" (OuterVolumeSpecName: "cni-path") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539719 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539730 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539723 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-kube-api-access-6952s\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539742 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539754 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-kernel\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539771 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-ipsec-secrets\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539786 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-net\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539804 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-clustermesh-secrets\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.539920 kubelet[1393]: I0212 20:21:42.539818 1393 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-lib-modules\") pod \"f0f1f742-c02a-4180-9ac8-124ecae06c54\" (UID: \"f0f1f742-c02a-4180-9ac8-124ecae06c54\") " Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539853 1393 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-xtables-lock\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539863 1393 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-run\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539871 1393 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-bpf-maps\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539879 1393 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-cgroup\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539887 1393 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-cni-path\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539903 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539919 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-hostproc" (OuterVolumeSpecName: "hostproc") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.540357 kubelet[1393]: I0212 20:21:42.539931 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.540704 kubelet[1393]: I0212 20:21:42.540685 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.542584 kubelet[1393]: I0212 20:21:42.540779 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:42.542697 kubelet[1393]: W0212 20:21:42.541123 1393 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f0f1f742-c02a-4180-9ac8-124ecae06c54/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:21:42.542703 systemd[1]: var-lib-kubelet-pods-f0f1f742\x2dc02a\x2d4180\x2d9ac8\x2d124ecae06c54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6952s.mount: Deactivated successfully. 
Feb 12 20:21:42.544084 systemd[1]: var-lib-kubelet-pods-f0f1f742\x2dc02a\x2d4180\x2d9ac8\x2d124ecae06c54-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:21:42.544147 systemd[1]: var-lib-kubelet-pods-f0f1f742\x2dc02a\x2d4180\x2d9ac8\x2d124ecae06c54-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:21:42.545268 kubelet[1393]: I0212 20:21:42.545244 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:21:42.545518 kubelet[1393]: I0212 20:21:42.545500 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:42.545518 kubelet[1393]: I0212 20:21:42.545506 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:21:42.545556 systemd[1]: var-lib-kubelet-pods-f0f1f742\x2dc02a\x2d4180\x2d9ac8\x2d124ecae06c54-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 20:21:42.545728 kubelet[1393]: I0212 20:21:42.545706 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:21:42.546209 kubelet[1393]: I0212 20:21:42.546180 1393 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-kube-api-access-6952s" (OuterVolumeSpecName: "kube-api-access-6952s") pod "f0f1f742-c02a-4180-9ac8-124ecae06c54" (UID: "f0f1f742-c02a-4180-9ac8-124ecae06c54"). InnerVolumeSpecName "kube-api-access-6952s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:42.640973 kubelet[1393]: I0212 20:21:42.640927 1393 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-net\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.640973 kubelet[1393]: I0212 20:21:42.640957 1393 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-clustermesh-secrets\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.640973 kubelet[1393]: I0212 20:21:42.640968 1393 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-lib-modules\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.640973 kubelet[1393]: I0212 20:21:42.640978 1393 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-config-path\") on node \"10.0.0.38\" 
DevicePath \"\"" Feb 12 20:21:42.641163 kubelet[1393]: I0212 20:21:42.640990 1393 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-hubble-tls\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.641163 kubelet[1393]: I0212 20:21:42.641000 1393 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-hostproc\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.641163 kubelet[1393]: I0212 20:21:42.641010 1393 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-etc-cni-netd\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.641163 kubelet[1393]: I0212 20:21:42.641032 1393 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6952s\" (UniqueName: \"kubernetes.io/projected/f0f1f742-c02a-4180-9ac8-124ecae06c54-kube-api-access-6952s\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.641163 kubelet[1393]: I0212 20:21:42.641042 1393 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0f1f742-c02a-4180-9ac8-124ecae06c54-host-proc-sys-kernel\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:42.641163 kubelet[1393]: I0212 20:21:42.641051 1393 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0f1f742-c02a-4180-9ac8-124ecae06c54-cilium-ipsec-secrets\") on node \"10.0.0.38\" DevicePath \"\"" Feb 12 20:21:43.148252 kubelet[1393]: I0212 20:21:43.148220 1393 setters.go:548] "Node became not ready" node="10.0.0.38" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:21:43.148153764 +0000 UTC m=+63.209006794 LastTransitionTime:2024-02-12 20:21:43.148153764 +0000 UTC m=+63.209006794 Reason:KubeletNotReady 
Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:21:43.240968 kubelet[1393]: E0212 20:21:43.240912 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:43.383833 kubelet[1393]: I0212 20:21:43.383794 1393 scope.go:115] "RemoveContainer" containerID="a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a" Feb 12 20:21:43.385201 env[1125]: time="2024-02-12T20:21:43.385164783Z" level=info msg="RemoveContainer for \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\"" Feb 12 20:21:43.387757 systemd[1]: Removed slice kubepods-burstable-podf0f1f742_c02a_4180_9ac8_124ecae06c54.slice. Feb 12 20:21:43.388357 env[1125]: time="2024-02-12T20:21:43.388327161Z" level=info msg="RemoveContainer for \"a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a\" returns successfully" Feb 12 20:21:43.413114 kubelet[1393]: I0212 20:21:43.412968 1393 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:21:43.413114 kubelet[1393]: E0212 20:21:43.413045 1393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0f1f742-c02a-4180-9ac8-124ecae06c54" containerName="mount-cgroup" Feb 12 20:21:43.413114 kubelet[1393]: I0212 20:21:43.413066 1393 memory_manager.go:346] "RemoveStaleState removing state" podUID="f0f1f742-c02a-4180-9ac8-124ecae06c54" containerName="mount-cgroup" Feb 12 20:21:43.418255 systemd[1]: Created slice kubepods-burstable-poda24311e9_2d79_4955_93b4_5d6be9af7aa5.slice. 
Feb 12 20:21:43.545653 kubelet[1393]: I0212 20:21:43.545589 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-host-proc-sys-net\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545653 kubelet[1393]: I0212 20:21:43.545637 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjh5z\" (UniqueName: \"kubernetes.io/projected/a24311e9-2d79-4955-93b4-5d6be9af7aa5-kube-api-access-zjh5z\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545653 kubelet[1393]: I0212 20:21:43.545657 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-hostproc\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545943 kubelet[1393]: I0212 20:21:43.545695 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-xtables-lock\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545943 kubelet[1393]: I0212 20:21:43.545714 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-cilium-run\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545943 kubelet[1393]: I0212 20:21:43.545746 1393 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-bpf-maps\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545943 kubelet[1393]: I0212 20:21:43.545776 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a24311e9-2d79-4955-93b4-5d6be9af7aa5-cilium-config-path\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.545943 kubelet[1393]: I0212 20:21:43.545796 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a24311e9-2d79-4955-93b4-5d6be9af7aa5-clustermesh-secrets\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546123 kubelet[1393]: I0212 20:21:43.545945 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a24311e9-2d79-4955-93b4-5d6be9af7aa5-cilium-ipsec-secrets\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546123 kubelet[1393]: I0212 20:21:43.545989 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-cilium-cgroup\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546123 kubelet[1393]: I0212 20:21:43.546018 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-cni-path\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546123 kubelet[1393]: I0212 20:21:43.546061 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-lib-modules\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546123 kubelet[1393]: I0212 20:21:43.546091 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-etc-cni-netd\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546123 kubelet[1393]: I0212 20:21:43.546114 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a24311e9-2d79-4955-93b4-5d6be9af7aa5-host-proc-sys-kernel\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.546329 kubelet[1393]: I0212 20:21:43.546131 1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a24311e9-2d79-4955-93b4-5d6be9af7aa5-hubble-tls\") pod \"cilium-cmj4t\" (UID: \"a24311e9-2d79-4955-93b4-5d6be9af7aa5\") " pod="kube-system/cilium-cmj4t" Feb 12 20:21:43.568981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958432868.mount: Deactivated successfully. 
Feb 12 20:21:43.729741 kubelet[1393]: E0212 20:21:43.729695 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:43.730308 env[1125]: time="2024-02-12T20:21:43.730266084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmj4t,Uid:a24311e9-2d79-4955-93b4-5d6be9af7aa5,Namespace:kube-system,Attempt:0,}" Feb 12 20:21:43.742543 env[1125]: time="2024-02-12T20:21:43.742477675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:21:43.742543 env[1125]: time="2024-02-12T20:21:43.742515216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:21:43.742543 env[1125]: time="2024-02-12T20:21:43.742525025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:21:43.742716 env[1125]: time="2024-02-12T20:21:43.742657573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d pid=3127 runtime=io.containerd.runc.v2 Feb 12 20:21:43.755564 systemd[1]: Started cri-containerd-e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d.scope. 
Feb 12 20:21:43.776131 env[1125]: time="2024-02-12T20:21:43.776080036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmj4t,Uid:a24311e9-2d79-4955-93b4-5d6be9af7aa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\"" Feb 12 20:21:43.777088 kubelet[1393]: E0212 20:21:43.777064 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:43.779581 env[1125]: time="2024-02-12T20:21:43.779538823Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:21:43.790874 env[1125]: time="2024-02-12T20:21:43.790807500Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9\"" Feb 12 20:21:43.791355 env[1125]: time="2024-02-12T20:21:43.791329181Z" level=info msg="StartContainer for \"bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9\"" Feb 12 20:21:43.805079 systemd[1]: Started cri-containerd-bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9.scope. Feb 12 20:21:43.826499 env[1125]: time="2024-02-12T20:21:43.826428468Z" level=info msg="StartContainer for \"bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9\" returns successfully" Feb 12 20:21:43.833547 systemd[1]: cri-containerd-bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9.scope: Deactivated successfully. 
Feb 12 20:21:43.906551 env[1125]: time="2024-02-12T20:21:43.906502918Z" level=info msg="shim disconnected" id=bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9 Feb 12 20:21:43.906551 env[1125]: time="2024-02-12T20:21:43.906547813Z" level=warning msg="cleaning up after shim disconnected" id=bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9 namespace=k8s.io Feb 12 20:21:43.906551 env[1125]: time="2024-02-12T20:21:43.906555938Z" level=info msg="cleaning up dead shim" Feb 12 20:21:43.912426 env[1125]: time="2024-02-12T20:21:43.912401333Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3213 runtime=io.containerd.runc.v2\n" Feb 12 20:21:44.233525 kubelet[1393]: I0212 20:21:44.233492 1393 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f0f1f742-c02a-4180-9ac8-124ecae06c54 path="/var/lib/kubelet/pods/f0f1f742-c02a-4180-9ac8-124ecae06c54/volumes" Feb 12 20:21:44.241358 kubelet[1393]: E0212 20:21:44.241329 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:44.254412 env[1125]: time="2024-02-12T20:21:44.254362612Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:21:44.255815 env[1125]: time="2024-02-12T20:21:44.255772764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:21:44.257326 env[1125]: time="2024-02-12T20:21:44.257295999Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:21:44.257761 env[1125]: time="2024-02-12T20:21:44.257728963Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:21:44.259181 env[1125]: time="2024-02-12T20:21:44.259156649Z" level=info msg="CreateContainer within sandbox \"d429a3b821980b4e48065dbc44c0b3b853553daaddb6faf44aab5573a5834310\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:21:44.269881 env[1125]: time="2024-02-12T20:21:44.269827809Z" level=info msg="CreateContainer within sandbox \"d429a3b821980b4e48065dbc44c0b3b853553daaddb6faf44aab5573a5834310\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ee58a8bd642df360b231a16ec04ef6c93ccff03003dfd0292e536f765c01ced5\"" Feb 12 20:21:44.270310 env[1125]: time="2024-02-12T20:21:44.270270692Z" level=info msg="StartContainer for \"ee58a8bd642df360b231a16ec04ef6c93ccff03003dfd0292e536f765c01ced5\"" Feb 12 20:21:44.285566 systemd[1]: Started cri-containerd-ee58a8bd642df360b231a16ec04ef6c93ccff03003dfd0292e536f765c01ced5.scope. 
Feb 12 20:21:44.403538 env[1125]: time="2024-02-12T20:21:44.403456357Z" level=info msg="StartContainer for \"ee58a8bd642df360b231a16ec04ef6c93ccff03003dfd0292e536f765c01ced5\" returns successfully" Feb 12 20:21:44.407461 kubelet[1393]: E0212 20:21:44.407424 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:44.408975 env[1125]: time="2024-02-12T20:21:44.408930553Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:21:44.422079 env[1125]: time="2024-02-12T20:21:44.422018508Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799\"" Feb 12 20:21:44.422559 env[1125]: time="2024-02-12T20:21:44.422528768Z" level=info msg="StartContainer for \"c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799\"" Feb 12 20:21:44.435290 systemd[1]: Started cri-containerd-c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799.scope. Feb 12 20:21:44.457993 env[1125]: time="2024-02-12T20:21:44.457942514Z" level=info msg="StartContainer for \"c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799\" returns successfully" Feb 12 20:21:44.461169 systemd[1]: cri-containerd-c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799.scope: Deactivated successfully. 
Feb 12 20:21:44.479291 env[1125]: time="2024-02-12T20:21:44.479242674Z" level=info msg="shim disconnected" id=c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799 Feb 12 20:21:44.479291 env[1125]: time="2024-02-12T20:21:44.479292297Z" level=warning msg="cleaning up after shim disconnected" id=c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799 namespace=k8s.io Feb 12 20:21:44.479488 env[1125]: time="2024-02-12T20:21:44.479302287Z" level=info msg="cleaning up dead shim" Feb 12 20:21:44.484726 env[1125]: time="2024-02-12T20:21:44.484624257Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3314 runtime=io.containerd.runc.v2\n" Feb 12 20:21:44.808596 kubelet[1393]: W0212 20:21:44.808458 1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f1f742_c02a_4180_9ac8_124ecae06c54.slice/cri-containerd-a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a.scope WatchSource:0}: container "a04560102ad839646a4b61ab3753ddc4f442e8570632ef901999dd36c70b887a" in namespace "k8s.io": not found Feb 12 20:21:45.220584 kubelet[1393]: E0212 20:21:45.220552 1393 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:21:45.242007 kubelet[1393]: E0212 20:21:45.241970 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:45.412232 kubelet[1393]: E0212 20:21:45.412206 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:45.412421 kubelet[1393]: E0212 20:21:45.412312 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:45.414155 env[1125]: time="2024-02-12T20:21:45.414112622Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:21:46.006596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34778883.mount: Deactivated successfully. Feb 12 20:21:46.143169 env[1125]: time="2024-02-12T20:21:46.143105572Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d\"" Feb 12 20:21:46.143642 env[1125]: time="2024-02-12T20:21:46.143621793Z" level=info msg="StartContainer for \"c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d\"" Feb 12 20:21:46.147037 kubelet[1393]: I0212 20:21:46.147009 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-v4ljc" podStartSLOduration=2.551610717 podCreationTimestamp="2024-02-12 20:21:41 +0000 UTC" firstStartedPulling="2024-02-12 20:21:41.662629602 +0000 UTC m=+61.723482632" lastFinishedPulling="2024-02-12 20:21:44.257983533 +0000 UTC m=+64.318836563" observedRunningTime="2024-02-12 20:21:45.992801061 +0000 UTC m=+66.053654111" watchObservedRunningTime="2024-02-12 20:21:46.146964648 +0000 UTC m=+66.207817678" Feb 12 20:21:46.159647 systemd[1]: Started cri-containerd-c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d.scope. Feb 12 20:21:46.181696 systemd[1]: cri-containerd-c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d.scope: Deactivated successfully. 
Feb 12 20:21:46.231826 env[1125]: time="2024-02-12T20:21:46.231739861Z" level=info msg="StartContainer for \"c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d\" returns successfully" Feb 12 20:21:46.242418 kubelet[1393]: E0212 20:21:46.242263 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:21:46.246175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d-rootfs.mount: Deactivated successfully. Feb 12 20:21:46.277891 env[1125]: time="2024-02-12T20:21:46.277779984Z" level=info msg="shim disconnected" id=c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d Feb 12 20:21:46.277891 env[1125]: time="2024-02-12T20:21:46.277838173Z" level=warning msg="cleaning up after shim disconnected" id=c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d namespace=k8s.io Feb 12 20:21:46.277891 env[1125]: time="2024-02-12T20:21:46.277878750Z" level=info msg="cleaning up dead shim" Feb 12 20:21:46.283953 env[1125]: time="2024-02-12T20:21:46.283911122Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3371 runtime=io.containerd.runc.v2\n" Feb 12 20:21:46.415980 kubelet[1393]: E0212 20:21:46.415955 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:46.416211 kubelet[1393]: E0212 20:21:46.416003 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:46.417673 env[1125]: time="2024-02-12T20:21:46.417631719Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for container 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:21:46.597317 env[1125]: time="2024-02-12T20:21:46.597206894Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740\""
Feb 12 20:21:46.597827 env[1125]: time="2024-02-12T20:21:46.597804256Z" level=info msg="StartContainer for \"98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740\""
Feb 12 20:21:46.609793 systemd[1]: Started cri-containerd-98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740.scope.
Feb 12 20:21:46.629542 systemd[1]: cri-containerd-98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740.scope: Deactivated successfully.
Feb 12 20:21:46.632070 env[1125]: time="2024-02-12T20:21:46.632030408Z" level=info msg="StartContainer for \"98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740\" returns successfully"
Feb 12 20:21:46.652141 env[1125]: time="2024-02-12T20:21:46.652069305Z" level=info msg="shim disconnected" id=98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740
Feb 12 20:21:46.652141 env[1125]: time="2024-02-12T20:21:46.652123326Z" level=warning msg="cleaning up after shim disconnected" id=98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740 namespace=k8s.io
Feb 12 20:21:46.652141 env[1125]: time="2024-02-12T20:21:46.652134797Z" level=info msg="cleaning up dead shim"
Feb 12 20:21:46.658236 env[1125]: time="2024-02-12T20:21:46.658190243Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3425 runtime=io.containerd.runc.v2\n"
Feb 12 20:21:47.242409 kubelet[1393]: E0212 20:21:47.242353 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:47.419303 kubelet[1393]: E0212 20:21:47.419279 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:47.421072 env[1125]: time="2024-02-12T20:21:47.421034619Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:21:47.670487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418357585.mount: Deactivated successfully.
Feb 12 20:21:47.897864 env[1125]: time="2024-02-12T20:21:47.897799127Z" level=info msg="CreateContainer within sandbox \"e2998849d6ef4c40c7538193d446d8bc7aa9c53493ed4830963c59b23f68b89d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7147a80f162a635b9735ec4b5726d5561508997fbbfc1fec8818d5210d1e9ed\""
Feb 12 20:21:47.898292 env[1125]: time="2024-02-12T20:21:47.898267898Z" level=info msg="StartContainer for \"d7147a80f162a635b9735ec4b5726d5561508997fbbfc1fec8818d5210d1e9ed\""
Feb 12 20:21:47.912498 systemd[1]: Started cri-containerd-d7147a80f162a635b9735ec4b5726d5561508997fbbfc1fec8818d5210d1e9ed.scope.
Feb 12 20:21:47.916537 kubelet[1393]: W0212 20:21:47.916466 1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24311e9_2d79_4955_93b4_5d6be9af7aa5.slice/cri-containerd-bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9.scope WatchSource:0}: task bb895a9f4b90bfd994dbcfb098f50d5ed77970b10036c892342fbe19082d92b9 not found: not found
Feb 12 20:21:48.003509 env[1125]: time="2024-02-12T20:21:48.003439981Z" level=info msg="StartContainer for \"d7147a80f162a635b9735ec4b5726d5561508997fbbfc1fec8818d5210d1e9ed\" returns successfully"
Feb 12 20:21:48.016748 systemd[1]: run-containerd-runc-k8s.io-d7147a80f162a635b9735ec4b5726d5561508997fbbfc1fec8818d5210d1e9ed-runc.BgrblE.mount: Deactivated successfully.
Feb 12 20:21:48.211871 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:21:48.243098 kubelet[1393]: E0212 20:21:48.243030 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:48.423685 kubelet[1393]: E0212 20:21:48.423571 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:48.434253 kubelet[1393]: I0212 20:21:48.434217 1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cmj4t" podStartSLOduration=5.434188611 podCreationTimestamp="2024-02-12 20:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:21:48.433931558 +0000 UTC m=+68.494784608" watchObservedRunningTime="2024-02-12 20:21:48.434188611 +0000 UTC m=+68.495041641"
Feb 12 20:21:49.243442 kubelet[1393]: E0212 20:21:49.243376 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:49.731336 kubelet[1393]: E0212 20:21:49.731286 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:50.243684 kubelet[1393]: E0212 20:21:50.243649 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:50.716357 systemd-networkd[1023]: lxc_health: Link UP
Feb 12 20:21:50.718024 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:21:50.717610 systemd-networkd[1023]: lxc_health: Gained carrier
Feb 12 20:21:51.026419 kubelet[1393]: W0212 20:21:51.024060 1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24311e9_2d79_4955_93b4_5d6be9af7aa5.slice/cri-containerd-c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799.scope WatchSource:0}: task c84b9df93fbeb0d8a62137c77a086ecb6c99a839ff1f0ba258448a80125c0799 not found: not found
Feb 12 20:21:51.244352 kubelet[1393]: E0212 20:21:51.244273 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:51.732015 kubelet[1393]: E0212 20:21:51.731984 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:52.244984 kubelet[1393]: E0212 20:21:52.244955 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:52.429409 kubelet[1393]: E0212 20:21:52.429379 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:52.554993 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Feb 12 20:21:53.245139 kubelet[1393]: E0212 20:21:53.245092 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:53.430488 kubelet[1393]: E0212 20:21:53.430462 1393 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:54.133039 kubelet[1393]: W0212 20:21:54.132993 1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24311e9_2d79_4955_93b4_5d6be9af7aa5.slice/cri-containerd-c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d.scope WatchSource:0}: task c068ea58255deb7eb1ffdde78483708880374961839d5d6044be74273bba2a1d not found: not found
Feb 12 20:21:54.246015 kubelet[1393]: E0212 20:21:54.245985 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:54.482798 systemd[1]: run-containerd-runc-k8s.io-d7147a80f162a635b9735ec4b5726d5561508997fbbfc1fec8818d5210d1e9ed-runc.Jy1ffX.mount: Deactivated successfully.
Feb 12 20:21:55.246517 kubelet[1393]: E0212 20:21:55.246479 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:56.247321 kubelet[1393]: E0212 20:21:56.247282 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:57.238607 kubelet[1393]: W0212 20:21:57.238566 1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda24311e9_2d79_4955_93b4_5d6be9af7aa5.slice/cri-containerd-98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740.scope WatchSource:0}: task 98091d93e39f8f626aa0fccb41d943f1107bed95c12739ca375a134455033740 not found: not found
Feb 12 20:21:57.247621 kubelet[1393]: E0212 20:21:57.247596 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:58.247982 kubelet[1393]: E0212 20:21:58.247942 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:21:59.248888 kubelet[1393]: E0212 20:21:59.248836 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:22:00.158637 kubelet[1393]: E0212 20:22:00.158575 1393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:22:00.249798 kubelet[1393]: E0212 20:22:00.249692 1393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"