Feb 9 18:49:42.785625 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 18:49:42.785644 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:49:42.785651 kernel: BIOS-provided physical RAM map:
Feb 9 18:49:42.785657 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 18:49:42.785662 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 18:49:42.785667 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 18:49:42.785674 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 9 18:49:42.785679 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 9 18:49:42.785686 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 9 18:49:42.785691 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 18:49:42.785697 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 9 18:49:42.785702 kernel: NX (Execute Disable) protection: active
Feb 9 18:49:42.785707 kernel: SMBIOS 2.8 present.
Feb 9 18:49:42.785713 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 9 18:49:42.785721 kernel: Hypervisor detected: KVM
Feb 9 18:49:42.785727 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 18:49:42.785733 kernel: kvm-clock: cpu 0, msr 25faa001, primary cpu clock
Feb 9 18:49:42.785739 kernel: kvm-clock: using sched offset of 2125222191 cycles
Feb 9 18:49:42.785745 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 18:49:42.785752 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 18:49:42.785758 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 18:49:42.785764 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 18:49:42.785770 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 9 18:49:42.785777 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 18:49:42.785783 kernel: Using GB pages for direct mapping
Feb 9 18:49:42.785789 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:49:42.785795 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 9 18:49:42.785801 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:49:42.785807 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:49:42.785813 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:49:42.785819 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 9 18:49:42.785825 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:49:42.785832 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:49:42.785838 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:49:42.785844 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 9 18:49:42.785861 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 9 18:49:42.785883 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 9 18:49:42.785900 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 9 18:49:42.785908 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 9 18:49:42.785915 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 9 18:49:42.785925 kernel: No NUMA configuration found
Feb 9 18:49:42.785931 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 9 18:49:42.785944 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 9 18:49:42.785953 kernel: Zone ranges:
Feb 9 18:49:42.785961 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 18:49:42.785969 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 9 18:49:42.785977 kernel: Normal empty
Feb 9 18:49:42.785984 kernel: Movable zone start for each node
Feb 9 18:49:42.785990 kernel: Early memory node ranges
Feb 9 18:49:42.785996 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 18:49:42.786002 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 9 18:49:42.786009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 9 18:49:42.786015 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 18:49:42.786021 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 18:49:42.786028 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 9 18:49:42.786035 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 9 18:49:42.786041 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 18:49:42.786048 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 18:49:42.786054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 18:49:42.786061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 18:49:42.786067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 18:49:42.786073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 18:49:42.786080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 18:49:42.786093 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 18:49:42.786100 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 18:49:42.786107 kernel: TSC deadline timer available
Feb 9 18:49:42.786113 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 18:49:42.786119 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 18:49:42.786125 kernel: kvm-guest: setup PV sched yield
Feb 9 18:49:42.786132 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 9 18:49:42.786138 kernel: Booting paravirtualized kernel on KVM
Feb 9 18:49:42.786145 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 18:49:42.786152 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 18:49:42.786159 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 18:49:42.786166 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 18:49:42.786172 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 18:49:42.786178 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 18:49:42.786184 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 9 18:49:42.786191 kernel: kvm-guest: PV spinlocks enabled
Feb 9 18:49:42.786198 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 18:49:42.786217 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 9 18:49:42.786224 kernel: Policy zone: DMA32
Feb 9 18:49:42.786231 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:49:42.786240 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:49:42.786246 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:49:42.786253 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:49:42.786259 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:49:42.786266 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 9 18:49:42.786272 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:49:42.786279 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 18:49:42.786285 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 18:49:42.786293 kernel: rcu: Hierarchical RCU implementation.
Feb 9 18:49:42.786300 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:49:42.786306 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:49:42.786313 kernel: Rude variant of Tasks RCU enabled.
Feb 9 18:49:42.786319 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:49:42.786326 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:49:42.786332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:49:42.786339 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 18:49:42.786345 kernel: random: crng init done
Feb 9 18:49:42.786352 kernel: Console: colour VGA+ 80x25
Feb 9 18:49:42.786359 kernel: printk: console [ttyS0] enabled
Feb 9 18:49:42.786365 kernel: ACPI: Core revision 20210730
Feb 9 18:49:42.786372 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 18:49:42.786378 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 18:49:42.786384 kernel: x2apic enabled
Feb 9 18:49:42.786391 kernel: Switched APIC routing to physical x2apic.
Feb 9 18:49:42.786397 kernel: kvm-guest: setup PV IPIs
Feb 9 18:49:42.786403 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 18:49:42.786411 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 18:49:42.786417 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 18:49:42.786424 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 18:49:42.786430 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 18:49:42.786436 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 18:49:42.786443 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 18:49:42.786449 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 18:49:42.786456 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 18:49:42.786462 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 18:49:42.786474 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 18:49:42.786481 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 18:49:42.786487 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 18:49:42.786495 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 18:49:42.786502 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 18:49:42.786509 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 18:49:42.786515 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 18:49:42.786522 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 18:49:42.786529 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 18:49:42.786537 kernel: Freeing SMP alternatives memory: 32K
Feb 9 18:49:42.786544 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:49:42.786550 kernel: LSM: Security Framework initializing
Feb 9 18:49:42.786557 kernel: SELinux: Initializing.
Feb 9 18:49:42.786564 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:49:42.786571 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:49:42.786578 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 18:49:42.786585 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 18:49:42.786592 kernel: ... version:                0
Feb 9 18:49:42.786599 kernel: ... bit width:              48
Feb 9 18:49:42.786605 kernel: ... generic registers:      6
Feb 9 18:49:42.786612 kernel: ... value mask:             0000ffffffffffff
Feb 9 18:49:42.786619 kernel: ... max period:             00007fffffffffff
Feb 9 18:49:42.786625 kernel: ... fixed-purpose events:   0
Feb 9 18:49:42.786632 kernel: ... event mask:             000000000000003f
Feb 9 18:49:42.786639 kernel: signal: max sigframe size: 1776
Feb 9 18:49:42.786647 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:49:42.786654 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:49:42.786670 kernel: x86: Booting SMP configuration:
Feb 9 18:49:42.786691 kernel: .... node #0, CPUs:      #1
Feb 9 18:49:42.786700 kernel: kvm-clock: cpu 1, msr 25faa041, secondary cpu clock
Feb 9 18:49:42.786708 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 18:49:42.786714 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 9 18:49:42.786726 kernel: #2
Feb 9 18:49:42.786735 kernel: kvm-clock: cpu 2, msr 25faa081, secondary cpu clock
Feb 9 18:49:42.786744 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 18:49:42.786755 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 9 18:49:42.786761 kernel: #3
Feb 9 18:49:42.786768 kernel: kvm-clock: cpu 3, msr 25faa0c1, secondary cpu clock
Feb 9 18:49:42.786774 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 18:49:42.786781 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 9 18:49:42.786788 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:49:42.786794 kernel: smpboot: Max logical packages: 1
Feb 9 18:49:42.786801 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 18:49:42.786808 kernel: devtmpfs: initialized
Feb 9 18:49:42.786815 kernel: x86/mm: Memory block size: 128MB
Feb 9 18:49:42.786822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:49:42.786829 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:49:42.786836 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:49:42.786843 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:49:42.786849 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:49:42.786856 kernel: audit: type=2000 audit(1707504582.878:1): state=initialized audit_enabled=0 res=1
Feb 9 18:49:42.786863 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:49:42.786869 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 18:49:42.786877 kernel: cpuidle: using governor menu
Feb 9 18:49:42.786884 kernel: ACPI: bus type PCI registered
Feb 9 18:49:42.786890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:49:42.786897 kernel: dca service started, version 1.12.1
Feb 9 18:49:42.786904 kernel: PCI: Using configuration type 1 for base access
Feb 9 18:49:42.786910 kernel: PCI: Using configuration type 1 for extended access
Feb 9 18:49:42.786917 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 18:49:42.786924 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:49:42.786931 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:49:42.786939 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:49:42.786945 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:49:42.786952 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:49:42.786959 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:49:42.786965 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:49:42.786972 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:49:42.786979 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:49:42.786985 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:49:42.786992 kernel: ACPI: Interpreter enabled
Feb 9 18:49:42.787000 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 18:49:42.787006 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 18:49:42.787013 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 18:49:42.787020 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 18:49:42.787027 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:49:42.787143 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:49:42.787154 kernel: acpiphp: Slot [3] registered
Feb 9 18:49:42.787161 kernel: acpiphp: Slot [4] registered
Feb 9 18:49:42.787170 kernel: acpiphp: Slot [5] registered
Feb 9 18:49:42.787176 kernel: acpiphp: Slot [6] registered
Feb 9 18:49:42.787183 kernel: acpiphp: Slot [7] registered
Feb 9 18:49:42.787189 kernel: acpiphp: Slot [8] registered
Feb 9 18:49:42.787196 kernel: acpiphp: Slot [9] registered
Feb 9 18:49:42.787214 kernel: acpiphp: Slot [10] registered
Feb 9 18:49:42.787223 kernel: acpiphp: Slot [11] registered
Feb 9 18:49:42.787230 kernel: acpiphp: Slot [12] registered
Feb 9 18:49:42.787238 kernel: acpiphp: Slot [13] registered
Feb 9 18:49:42.787246 kernel: acpiphp: Slot [14] registered
Feb 9 18:49:42.787257 kernel: acpiphp: Slot [15] registered
Feb 9 18:49:42.787266 kernel: acpiphp: Slot [16] registered
Feb 9 18:49:42.787274 kernel: acpiphp: Slot [17] registered
Feb 9 18:49:42.787282 kernel: acpiphp: Slot [18] registered
Feb 9 18:49:42.787291 kernel: acpiphp: Slot [19] registered
Feb 9 18:49:42.787300 kernel: acpiphp: Slot [20] registered
Feb 9 18:49:42.787309 kernel: acpiphp: Slot [21] registered
Feb 9 18:49:42.787318 kernel: acpiphp: Slot [22] registered
Feb 9 18:49:42.787326 kernel: acpiphp: Slot [23] registered
Feb 9 18:49:42.787337 kernel: acpiphp: Slot [24] registered
Feb 9 18:49:42.787347 kernel: acpiphp: Slot [25] registered
Feb 9 18:49:42.787356 kernel: acpiphp: Slot [26] registered
Feb 9 18:49:42.787365 kernel: acpiphp: Slot [27] registered
Feb 9 18:49:42.787375 kernel: acpiphp: Slot [28] registered
Feb 9 18:49:42.787385 kernel: acpiphp: Slot [29] registered
Feb 9 18:49:42.787394 kernel: acpiphp: Slot [30] registered
Feb 9 18:49:42.787403 kernel: acpiphp: Slot [31] registered
Feb 9 18:49:42.787412 kernel: PCI host bridge to bus 0000:00
Feb 9 18:49:42.787522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 18:49:42.787619 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 18:49:42.787715 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 18:49:42.787802 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 18:49:42.787890 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 9 18:49:42.787980 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:49:42.788109 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 18:49:42.788241 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 18:49:42.788362 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 18:49:42.788464 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 18:49:42.788569 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 18:49:42.788670 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 18:49:42.792744 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 18:49:42.792849 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 18:49:42.792960 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 18:49:42.793059 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 9 18:49:42.793168 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 9 18:49:42.793288 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 18:49:42.793388 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 9 18:49:42.793485 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 9 18:49:42.793587 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 9 18:49:42.793682 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 18:49:42.793791 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:49:42.793889 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 18:49:42.793990 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 9 18:49:42.794098 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 9 18:49:42.794220 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 18:49:42.794327 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 18:49:42.794427 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 9 18:49:42.794524 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 9 18:49:42.794631 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 18:49:42.794727 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 18:49:42.794825 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 9 18:49:42.794921 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 9 18:49:42.795022 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 9 18:49:42.795036 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 18:49:42.795045 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 18:49:42.795055 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 18:49:42.795065 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 18:49:42.795074 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 18:49:42.795093 kernel: iommu: Default domain type: Translated
Feb 9 18:49:42.795104 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 18:49:42.795263 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 18:49:42.795373 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 18:49:42.795470 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 18:49:42.795483 kernel: vgaarb: loaded
Feb 9 18:49:42.795493 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:49:42.795502 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:49:42.795512 kernel: PTP clock support registered
Feb 9 18:49:42.795522 kernel: PCI: Using ACPI for IRQ routing
Feb 9 18:49:42.795531 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 18:49:42.795544 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 18:49:42.795554 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 9 18:49:42.795563 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 18:49:42.795573 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 18:49:42.795583 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 18:49:42.795593 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:49:42.795603 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:49:42.795613 kernel: pnp: PnP ACPI init
Feb 9 18:49:42.795719 kernel: pnp 00:02: [dma 2]
Feb 9 18:49:42.795737 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 18:49:42.795748 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 18:49:42.795758 kernel: NET: Registered PF_INET protocol family
Feb 9 18:49:42.795768 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:49:42.795778 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:49:42.795788 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:49:42.795798 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:49:42.795808 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:49:42.795820 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:49:42.795830 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:49:42.795840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:49:42.795850 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:49:42.795859 kernel: NET: Registered PF_XDP protocol family
Feb 9 18:49:42.795947 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 18:49:42.796033 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 18:49:42.796126 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 18:49:42.796233 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 18:49:42.796325 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 9 18:49:42.796424 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 18:49:42.796519 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 18:49:42.796617 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 18:49:42.796631 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:49:42.796642 kernel: Initialise system trusted keyrings
Feb 9 18:49:42.796652 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:49:42.796661 kernel: Key type asymmetric registered
Feb 9 18:49:42.796674 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:49:42.796683 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:49:42.796692 kernel: io scheduler mq-deadline registered
Feb 9 18:49:42.796702 kernel: io scheduler kyber registered
Feb 9 18:49:42.796712 kernel: io scheduler bfq registered
Feb 9 18:49:42.796721 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 18:49:42.796732 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 18:49:42.796742 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 18:49:42.796752 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 18:49:42.796764 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:49:42.796774 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 18:49:42.796784 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 18:49:42.796794 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 18:49:42.796804 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 18:49:42.796814 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 18:49:42.796912 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 18:49:42.797003 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 18:49:42.797101 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T18:49:42 UTC (1707504582)
Feb 9 18:49:42.797191 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 18:49:42.797216 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:49:42.797226 kernel: Segment Routing with IPv6
Feb 9 18:49:42.797236 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:49:42.797246 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:49:42.797256 kernel: Key type dns_resolver registered
Feb 9 18:49:42.797266 kernel: IPI shorthand broadcast: enabled
Feb 9 18:49:42.797275 kernel: sched_clock: Marking stable (361289087, 69620048)->(436024035, -5114900)
Feb 9 18:49:42.797287 kernel: registered taskstats version 1
Feb 9 18:49:42.797297 kernel: Loading compiled-in X.509 certificates
Feb 9 18:49:42.797307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 18:49:42.797316 kernel: Key type .fscrypt registered
Feb 9 18:49:42.797326 kernel: Key type fscrypt-provisioning registered
Feb 9 18:49:42.797336 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:49:42.797346 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:49:42.797356 kernel: ima: No architecture policies found
Feb 9 18:49:42.797368 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 18:49:42.797378 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 18:49:42.797388 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 18:49:42.797398 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 18:49:42.797408 kernel: Run /init as init process
Feb 9 18:49:42.797417 kernel: with arguments:
Feb 9 18:49:42.797427 kernel: /init
Feb 9 18:49:42.797437 kernel: with environment:
Feb 9 18:49:42.797457 kernel: HOME=/
Feb 9 18:49:42.797468 kernel: TERM=linux
Feb 9 18:49:42.797479 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:49:42.797492 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:49:42.797505 systemd[1]: Detected virtualization kvm.
Feb 9 18:49:42.797516 systemd[1]: Detected architecture x86-64.
Feb 9 18:49:42.797527 systemd[1]: Running in initrd.
Feb 9 18:49:42.797538 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:49:42.797548 systemd[1]: Hostname set to .
Feb 9 18:49:42.797562 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:49:42.797572 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:49:42.797583 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:49:42.797594 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:49:42.797604 systemd[1]: Reached target paths.target.
Feb 9 18:49:42.797616 systemd[1]: Reached target slices.target.
Feb 9 18:49:42.797627 systemd[1]: Reached target swap.target.
Feb 9 18:49:42.797637 systemd[1]: Reached target timers.target.
Feb 9 18:49:42.797650 systemd[1]: Listening on iscsid.socket.
Feb 9 18:49:42.797661 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:49:42.797671 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:49:42.797682 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:49:42.797692 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:49:42.797703 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:49:42.797714 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:49:42.797725 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:49:42.797738 systemd[1]: Reached target sockets.target.
Feb 9 18:49:42.797749 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:49:42.797760 systemd[1]: Finished network-cleanup.service.
Feb 9 18:49:42.797771 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:49:42.797782 systemd[1]: Starting systemd-journald.service...
Feb 9 18:49:42.797793 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:49:42.797806 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:49:42.797817 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:49:42.797828 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:49:42.797839 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:49:42.797849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:49:42.797862 systemd-journald[198]: Journal started
Feb 9 18:49:42.797914 systemd-journald[198]: Runtime Journal (/run/log/journal/1e2fdcab318947ee8d1b9219f62d11ab) is 6.0M, max 48.5M, 42.5M free.
Feb 9 18:49:42.784790 systemd-modules-load[199]: Inserted module 'overlay'
Feb 9 18:49:42.809361 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:49:42.809382 systemd[1]: Started systemd-journald.service.
Feb 9 18:49:42.810174 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:49:42.815981 kernel: Bridge firewalling registered
Feb 9 18:49:42.815998 kernel: audit: type=1130 audit(1707504582.809:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.816010 kernel: audit: type=1130 audit(1707504582.812:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.812749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:49:42.815975 systemd-modules-load[199]: Inserted module 'br_netfilter'
Feb 9 18:49:42.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.817320 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:49:42.821219 kernel: audit: type=1130 audit(1707504582.816:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.825052 systemd-resolved[200]: Positive Trust Anchors:
Feb 9 18:49:42.825063 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:49:42.825096 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:49:42.827542 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb 9 18:49:42.828373 systemd[1]: Started systemd-resolved.service.
Feb 9 18:49:42.831436 kernel: audit: type=1130 audit(1707504582.828:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.828628 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:49:42.834709 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:49:42.838703 kernel: audit: type=1130 audit(1707504582.835:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:42.835843 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:49:42.842950 dracut-cmdline[215]: dracut-dracut-053 Feb 9 18:49:42.844398 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:49:42.852227 kernel: SCSI subsystem initialized Feb 9 18:49:42.862219 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:49:42.862242 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:49:42.863713 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:49:42.866672 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 9 18:49:42.868151 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:49:42.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:42.869509 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:49:42.873096 kernel: audit: type=1130 audit(1707504582.868:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:42.876725 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:49:42.879913 kernel: audit: type=1130 audit(1707504582.876:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:49:42.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:42.897222 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:49:42.908224 kernel: iscsi: registered transport (tcp) Feb 9 18:49:42.928229 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:49:42.928253 kernel: QLogic iSCSI HBA Driver Feb 9 18:49:42.953233 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:49:42.957450 kernel: audit: type=1130 audit(1707504582.953:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:42.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:42.954678 systemd[1]: Starting dracut-pre-udev.service... Feb 9 18:49:42.998273 kernel: raid6: avx2x4 gen() 30647 MB/s Feb 9 18:49:43.015231 kernel: raid6: avx2x4 xor() 8333 MB/s Feb 9 18:49:43.032234 kernel: raid6: avx2x2 gen() 32518 MB/s Feb 9 18:49:43.049231 kernel: raid6: avx2x2 xor() 19178 MB/s Feb 9 18:49:43.066225 kernel: raid6: avx2x1 gen() 26562 MB/s Feb 9 18:49:43.083233 kernel: raid6: avx2x1 xor() 15325 MB/s Feb 9 18:49:43.100231 kernel: raid6: sse2x4 gen() 14833 MB/s Feb 9 18:49:43.117223 kernel: raid6: sse2x4 xor() 7513 MB/s Feb 9 18:49:43.134228 kernel: raid6: sse2x2 gen() 16366 MB/s Feb 9 18:49:43.153227 kernel: raid6: sse2x2 xor() 9812 MB/s Feb 9 18:49:43.170220 kernel: raid6: sse2x1 gen() 12421 MB/s Feb 9 18:49:43.187688 kernel: raid6: sse2x1 xor() 7788 MB/s Feb 9 18:49:43.187703 kernel: raid6: using algorithm avx2x2 gen() 32518 MB/s Feb 9 18:49:43.187722 kernel: raid6: .... 
xor() 19178 MB/s, rmw enabled Feb 9 18:49:43.187731 kernel: raid6: using avx2x2 recovery algorithm Feb 9 18:49:43.201225 kernel: xor: automatically using best checksumming function avx Feb 9 18:49:43.292227 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 18:49:43.299375 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:49:43.302404 kernel: audit: type=1130 audit(1707504583.299:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:43.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:43.302000 audit: BPF prog-id=7 op=LOAD Feb 9 18:49:43.302000 audit: BPF prog-id=8 op=LOAD Feb 9 18:49:43.302676 systemd[1]: Starting systemd-udevd.service... Feb 9 18:49:43.314003 systemd-udevd[401]: Using default interface naming scheme 'v252'. Feb 9 18:49:43.317733 systemd[1]: Started systemd-udevd.service. Feb 9 18:49:43.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:43.318821 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:49:43.326882 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Feb 9 18:49:43.346675 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:49:43.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:43.348511 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:49:43.380601 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 18:49:43.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:43.412251 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:49:43.412312 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:49:43.416334 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:49:43.416360 kernel: GPT:9289727 != 19775487 Feb 9 18:49:43.416369 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:49:43.416378 kernel: GPT:9289727 != 19775487 Feb 9 18:49:43.416387 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:49:43.416395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:49:43.421227 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 18:49:43.421256 kernel: AES CTR mode by8 optimization enabled Feb 9 18:49:43.438754 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:49:43.469467 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) Feb 9 18:49:43.469490 kernel: libata version 3.00 loaded. Feb 9 18:49:43.469500 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 18:49:43.469624 kernel: scsi host0: ata_piix Feb 9 18:49:43.469713 kernel: scsi host1: ata_piix Feb 9 18:49:43.469808 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 18:49:43.469818 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 18:49:43.470129 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:49:43.479361 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:49:43.483559 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:49:43.488421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 18:49:43.490253 systemd[1]: Starting disk-uuid.service... Feb 9 18:49:43.497237 disk-uuid[537]: Primary Header is updated. Feb 9 18:49:43.497237 disk-uuid[537]: Secondary Entries is updated. Feb 9 18:49:43.497237 disk-uuid[537]: Secondary Header is updated. Feb 9 18:49:43.500236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:49:43.503219 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:49:43.612222 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 18:49:43.612271 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 18:49:43.639226 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 18:49:43.639375 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 18:49:43.656225 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 18:49:44.518227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:49:44.518725 disk-uuid[539]: The operation has completed successfully. Feb 9 18:49:44.542427 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:49:44.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.542531 systemd[1]: Finished disk-uuid.service. Feb 9 18:49:44.548798 systemd[1]: Starting verity-setup.service... Feb 9 18:49:44.561227 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 18:49:44.578342 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:49:44.580129 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:49:44.581936 systemd[1]: Finished verity-setup.service. 
Feb 9 18:49:44.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.637222 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:49:44.637622 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:49:44.637831 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:49:44.639577 systemd[1]: Starting ignition-setup.service... Feb 9 18:49:44.640328 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:49:44.649625 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:49:44.649684 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:49:44.649697 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:49:44.656003 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:49:44.662794 systemd[1]: Finished ignition-setup.service. Feb 9 18:49:44.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.663703 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:49:44.696018 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:49:44.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.697000 audit: BPF prog-id=9 op=LOAD Feb 9 18:49:44.697775 ignition[647]: Ignition 2.14.0 Feb 9 18:49:44.697789 ignition[647]: Stage: fetch-offline Feb 9 18:49:44.697915 systemd[1]: Starting systemd-networkd.service... 
Feb 9 18:49:44.697844 ignition[647]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:49:44.697855 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:49:44.699370 ignition[647]: parsed url from cmdline: "" Feb 9 18:49:44.699375 ignition[647]: no config URL provided Feb 9 18:49:44.699382 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:49:44.699942 ignition[647]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:49:44.699965 ignition[647]: op(1): [started] loading QEMU firmware config module Feb 9 18:49:44.699971 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 18:49:44.707285 ignition[647]: op(1): [finished] loading QEMU firmware config module Feb 9 18:49:44.717722 ignition[647]: parsing config with SHA512: bf9996dded37bea58ce6fafeac32e2f9c37bbe906a5a39d542c84493a93583337f4bcf8ed1958744cf761bee6676ea40f4266d2f21a1a366f0a225f363d767cc Feb 9 18:49:44.717782 systemd-networkd[717]: lo: Link UP Feb 9 18:49:44.717786 systemd-networkd[717]: lo: Gained carrier Feb 9 18:49:44.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.718384 systemd-networkd[717]: Enumeration completed Feb 9 18:49:44.718475 systemd[1]: Started systemd-networkd.service. Feb 9 18:49:44.718761 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:49:44.719545 systemd[1]: Reached target network.target. Feb 9 18:49:44.720404 systemd-networkd[717]: eth0: Link UP Feb 9 18:49:44.720407 systemd-networkd[717]: eth0: Gained carrier Feb 9 18:49:44.721582 systemd[1]: Starting iscsiuio.service... Feb 9 18:49:44.726997 systemd[1]: Started iscsiuio.service. Feb 9 18:49:44.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 18:49:44.728153 systemd[1]: Starting iscsid.service... Feb 9 18:49:44.730615 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:49:44.730615 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 18:49:44.730615 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:49:44.730615 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:49:44.730615 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:49:44.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.739515 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:49:44.731597 systemd[1]: Started iscsid.service. Feb 9 18:49:44.734284 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:49:44.736397 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:49:44.748226 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:49:44.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.752454 ignition[647]: fetch-offline: fetch-offline passed Feb 9 18:49:44.749654 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:49:44.752507 ignition[647]: Ignition finished successfully Feb 9 18:49:44.751536 unknown[647]: fetched base config from "system" Feb 9 18:49:44.751542 unknown[647]: fetched user config from "qemu" Feb 9 18:49:44.753731 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:49:44.756627 systemd[1]: Reached target remote-fs.target. Feb 9 18:49:44.758324 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:49:44.759843 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:49:44.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.761616 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 18:49:44.763684 systemd[1]: Starting ignition-kargs.service... Feb 9 18:49:44.767164 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:49:44.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.772335 ignition[735]: Ignition 2.14.0 Feb 9 18:49:44.772343 ignition[735]: Stage: kargs Feb 9 18:49:44.772429 ignition[735]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:49:44.772437 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:49:44.773305 ignition[735]: kargs: kargs passed Feb 9 18:49:44.773337 ignition[735]: Ignition finished successfully Feb 9 18:49:44.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.775090 systemd[1]: Finished ignition-kargs.service. Feb 9 18:49:44.776298 systemd[1]: Starting ignition-disks.service... 
Feb 9 18:49:44.784480 ignition[745]: Ignition 2.14.0 Feb 9 18:49:44.784490 ignition[745]: Stage: disks Feb 9 18:49:44.784593 ignition[745]: no configs at "/usr/lib/ignition/base.d" Feb 9 18:49:44.784604 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:49:44.785728 ignition[745]: disks: disks passed Feb 9 18:49:44.785767 ignition[745]: Ignition finished successfully Feb 9 18:49:44.788139 systemd[1]: Finished ignition-disks.service. Feb 9 18:49:44.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.788330 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:49:44.789730 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:49:44.790788 systemd[1]: Reached target local-fs.target. Feb 9 18:49:44.791300 systemd[1]: Reached target sysinit.target. Feb 9 18:49:44.791509 systemd[1]: Reached target basic.target. Feb 9 18:49:44.793922 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:49:44.802865 systemd-fsck[753]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 18:49:44.807856 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:49:44.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.808673 systemd[1]: Mounting sysroot.mount... Feb 9 18:49:44.815236 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:49:44.815467 systemd[1]: Mounted sysroot.mount. Feb 9 18:49:44.816438 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:49:44.818233 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:49:44.819795 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Feb 9 18:49:44.820832 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:49:44.820859 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:49:44.823652 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:49:44.825158 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:49:44.828844 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:49:44.832064 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:49:44.835016 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:49:44.837715 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:49:44.857018 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:49:44.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:44.857716 systemd[1]: Starting ignition-mount.service... Feb 9 18:49:44.859076 systemd[1]: Starting sysroot-boot.service... Feb 9 18:49:44.862047 bash[804]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 18:49:44.868098 ignition[805]: INFO : Ignition 2.14.0 Feb 9 18:49:44.868862 ignition[805]: INFO : Stage: mount Feb 9 18:49:44.869505 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:49:44.870300 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:49:44.871995 ignition[805]: INFO : mount: mount passed Feb 9 18:49:44.872638 ignition[805]: INFO : Ignition finished successfully Feb 9 18:49:44.873430 systemd[1]: Finished sysroot-boot.service. Feb 9 18:49:44.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:49:44.874801 systemd[1]: Finished ignition-mount.service. Feb 9 18:49:44.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:45.589136 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:49:45.594333 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Feb 9 18:49:45.594358 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:49:45.594367 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:49:45.595391 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:49:45.597811 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:49:45.599570 systemd[1]: Starting ignition-files.service... Feb 9 18:49:45.612330 ignition[834]: INFO : Ignition 2.14.0 Feb 9 18:49:45.612330 ignition[834]: INFO : Stage: files Feb 9 18:49:45.613509 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:49:45.613509 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:49:45.615676 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:49:45.616556 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:49:45.616556 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:49:45.618825 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:49:45.619773 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:49:45.621001 unknown[834]: wrote ssh authorized keys file for user: core Feb 9 18:49:45.621729 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:49:45.622837 ignition[834]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 18:49:45.624386 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 9 18:49:45.978174 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:49:46.112041 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 9 18:49:46.112041 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 18:49:46.115416 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 18:49:46.115416 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 9 18:49:46.306353 systemd-networkd[717]: eth0: Gained IPv6LL Feb 9 18:49:46.409784 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:49:46.485017 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 9 18:49:46.487156 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 18:49:46.487156 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 
9 18:49:46.487156 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Feb 9 18:49:46.552743 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:49:46.788166 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Feb 9 18:49:46.790308 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:49:46.790308 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:49:46.790308 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 9 18:49:46.835349 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:49:47.498630 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 9 18:49:47.501341 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:49:47.501341 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:49:47.501341 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:49:47.501341 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:49:47.501341 ignition[834]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:49:47.501341 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:49:47.501341 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(c): [started] processing unit "prepare-critools.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(c): [finished] processing unit "prepare-critools.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 9 18:49:47.501341 ignition[834]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:49:47.532884 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:49:47.532908 kernel: audit: type=1130 
audit(1707504587.525:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.524074 systemd[1]: Finished ignition-files.service. Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:49:47.534231 ignition[834]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:49:47.534231 ignition[834]: INFO : 
files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:49:47.534231 ignition[834]: INFO : files: files passed Feb 9 18:49:47.534231 ignition[834]: INFO : Ignition finished successfully Feb 9 18:49:47.564854 kernel: audit: type=1130 audit(1707504587.534:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.564880 kernel: audit: type=1131 audit(1707504587.534:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.564890 kernel: audit: type=1130 audit(1707504587.540:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.564901 kernel: audit: type=1130 audit(1707504587.557:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.564910 kernel: audit: type=1131 audit(1707504587.557:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:49:47.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.526827 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:49:47.530099 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:49:47.567447 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:49:47.530812 systemd[1]: Starting ignition-quench.service... Feb 9 18:49:47.569345 initrd-setup-root-after-ignition[862]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:49:47.533124 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:49:47.533199 systemd[1]: Finished ignition-quench.service. Feb 9 18:49:47.534419 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:49:47.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.540719 systemd[1]: Reached target ignition-complete.target. 
Feb 9 18:49:47.576613 kernel: audit: type=1130 audit(1707504587.573:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.545286 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:49:47.556193 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:49:47.556269 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:49:47.557515 systemd[1]: Reached target initrd-fs.target. Feb 9 18:49:47.562850 systemd[1]: Reached target initrd.target. Feb 9 18:49:47.562939 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:49:47.563588 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:49:47.572794 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:49:47.574115 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:49:47.582698 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:49:47.600689 kernel: audit: type=1131 audit(1707504587.584:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.600716 kernel: audit: type=1131 audit(1707504587.589:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.600726 kernel: audit: type=1131 audit(1707504587.591:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:49:47.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.583707 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:49:47.584430 systemd[1]: Stopped target timers.target. Feb 9 18:49:47.584638 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:49:47.584720 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:49:47.584953 systemd[1]: Stopped target initrd.target. Feb 9 18:49:47.587654 systemd[1]: Stopped target basic.target. Feb 9 18:49:47.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.587789 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:49:47.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.587923 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:49:47.588072 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:49:47.588220 systemd[1]: Stopped target remote-fs.target. Feb 9 18:49:47.588351 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:49:47.588495 systemd[1]: Stopped target sysinit.target. 
Feb 9 18:49:47.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.588630 systemd[1]: Stopped target local-fs.target. Feb 9 18:49:47.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.617818 ignition[875]: INFO : Ignition 2.14.0 Feb 9 18:49:47.617818 ignition[875]: INFO : Stage: umount Feb 9 18:49:47.617818 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:49:47.617818 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:49:47.617818 ignition[875]: INFO : umount: umount passed Feb 9 18:49:47.617818 ignition[875]: INFO : Ignition finished successfully Feb 9 18:49:47.588770 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:49:47.588890 systemd[1]: Stopped target swap.target. Feb 9 18:49:47.588989 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:49:47.589067 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:49:47.589310 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:49:47.591689 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:49:47.591767 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:49:47.591957 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:49:47.592042 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:49:47.594502 systemd[1]: Stopped target paths.target. Feb 9 18:49:47.594762 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 9 18:49:47.601297 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:49:47.602348 systemd[1]: Stopped target slices.target. Feb 9 18:49:47.603454 systemd[1]: Stopped target sockets.target. Feb 9 18:49:47.604733 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:49:47.604800 systemd[1]: Closed iscsid.socket. Feb 9 18:49:47.605843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:49:47.605977 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:49:47.607127 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:49:47.607235 systemd[1]: Stopped ignition-files.service. Feb 9 18:49:47.609001 systemd[1]: Stopping ignition-mount.service... Feb 9 18:49:47.610247 systemd[1]: Stopping iscsiuio.service... Feb 9 18:49:47.611256 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:49:47.611387 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:49:47.614100 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:49:47.614661 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:49:47.614841 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:49:47.616026 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:49:47.616121 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:49:47.625895 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:49:47.626698 systemd[1]: Stopped iscsiuio.service. Feb 9 18:49:47.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.643531 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:49:47.644875 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:49:47.645744 systemd[1]: Stopped ignition-mount.service. 
Feb 9 18:49:47.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.647442 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:49:47.648288 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:49:47.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.649995 systemd[1]: Stopped target network.target. Feb 9 18:49:47.651426 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:49:47.651460 systemd[1]: Closed iscsiuio.socket. Feb 9 18:49:47.653300 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:49:47.653343 systemd[1]: Stopped ignition-disks.service. Feb 9 18:49:47.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.655373 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:49:47.655412 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:49:47.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.657531 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:49:47.657573 systemd[1]: Stopped ignition-setup.service. Feb 9 18:49:47.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.659611 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 9 18:49:47.659650 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:49:47.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.661885 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:49:47.663343 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:49:47.664909 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:49:47.665714 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:49:47.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.669244 systemd-networkd[717]: eth0: DHCPv6 lease lost Feb 9 18:49:47.670408 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:49:47.670524 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:49:47.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.671626 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:49:47.671652 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:49:47.676071 systemd[1]: Stopping network-cleanup.service... Feb 9 18:49:47.675000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:49:47.676719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:49:47.677502 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 9 18:49:47.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.679788 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:49:47.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.679830 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:49:47.681618 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:49:47.682136 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:49:47.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.684222 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:49:47.686322 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:49:47.687566 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:49:47.688348 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:49:47.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.691874 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:49:47.692000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:49:47.692820 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:49:47.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:49:47.694390 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:49:47.695193 systemd[1]: Stopped network-cleanup.service. Feb 9 18:49:47.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.696692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:49:47.697848 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:49:47.699437 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:49:47.699468 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:49:47.701591 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:49:47.702398 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:49:47.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.703627 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:49:47.704358 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:49:47.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.705762 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:49:47.706640 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:49:47.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.708539 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:49:47.709877 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 9 18:49:47.709915 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:49:47.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.712938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:49:47.713918 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:49:47.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:47.715574 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:49:47.717579 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:49:47.729511 systemd[1]: Switching root. Feb 9 18:49:47.750120 iscsid[724]: iscsid shutting down. Feb 9 18:49:47.750864 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 9 18:49:47.750915 systemd-journald[198]: Journal stopped Feb 9 18:49:50.781118 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:49:50.781180 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 18:49:50.781198 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:49:50.781249 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:49:50.781261 kernel: SELinux: policy capability open_perms=1 Feb 9 18:49:50.781277 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:49:50.781290 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:49:50.781302 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:49:50.781314 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:49:50.781326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:49:50.781338 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:49:50.781356 systemd[1]: Successfully loaded SELinux policy in 36.651ms. Feb 9 18:49:50.781380 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.496ms. Feb 9 18:49:50.781391 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:49:50.781403 systemd[1]: Detected virtualization kvm. Feb 9 18:49:50.781417 systemd[1]: Detected architecture x86-64. Feb 9 18:49:50.781435 systemd[1]: Detected first boot. Feb 9 18:49:50.781447 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:49:50.781458 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:49:50.781467 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:49:50.781477 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 18:49:50.781488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:49:50.781499 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:49:50.781511 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:49:50.781521 systemd[1]: Stopped iscsid.service. Feb 9 18:49:50.781535 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:49:50.781549 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:49:50.781560 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:49:50.781573 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:49:50.781587 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:49:50.781600 systemd[1]: Created slice system-getty.slice. Feb 9 18:49:50.781614 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:49:50.781630 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:49:50.781645 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:49:50.781660 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:49:50.781674 systemd[1]: Created slice user.slice. Feb 9 18:49:50.781689 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:49:50.781705 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:49:50.781720 systemd[1]: Set up automount boot.automount. Feb 9 18:49:50.781734 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:49:50.781748 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:49:50.781763 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:49:50.781780 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:49:50.781793 systemd[1]: Reached target integritysetup.target. 
Feb 9 18:49:50.781806 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:49:50.781820 systemd[1]: Reached target remote-fs.target. Feb 9 18:49:50.781834 systemd[1]: Reached target slices.target. Feb 9 18:49:50.781847 systemd[1]: Reached target swap.target. Feb 9 18:49:50.781860 systemd[1]: Reached target torcx.target. Feb 9 18:49:50.781876 systemd[1]: Reached target veritysetup.target. Feb 9 18:49:50.781899 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:49:50.781913 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:49:50.781927 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:49:50.781941 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:49:50.781956 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:49:50.781970 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:49:50.781983 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:49:50.781998 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:49:50.782011 systemd[1]: Mounting media.mount... Feb 9 18:49:50.782027 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 18:49:50.782041 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:49:50.782054 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:49:50.782067 systemd[1]: Mounting tmp.mount... Feb 9 18:49:50.782080 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:49:50.782094 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:49:50.782107 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:49:50.782121 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:49:50.782134 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:49:50.782149 systemd[1]: Starting modprobe@drm.service... Feb 9 18:49:50.782163 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:49:50.782177 systemd[1]: Starting modprobe@fuse.service... 
Feb 9 18:49:50.782190 systemd[1]: Starting modprobe@loop.service... Feb 9 18:49:50.782218 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:49:50.782232 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:49:50.782246 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:49:50.782261 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:49:50.782278 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:49:50.782291 kernel: loop: module loaded Feb 9 18:49:50.782304 systemd[1]: Stopped systemd-journald.service. Feb 9 18:49:50.782316 kernel: fuse: init (API version 7.34) Feb 9 18:49:50.782328 systemd[1]: Starting systemd-journald.service... Feb 9 18:49:50.782337 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:49:50.782347 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:49:50.782357 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:49:50.782367 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:49:50.782377 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:49:50.782388 systemd[1]: Stopped verity-setup.service. Feb 9 18:49:50.782398 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 18:49:50.782408 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:49:50.782418 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:49:50.782427 systemd[1]: Mounted media.mount. Feb 9 18:49:50.782437 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:49:50.782450 systemd-journald[980]: Journal started Feb 9 18:49:50.782491 systemd-journald[980]: Runtime Journal (/run/log/journal/1e2fdcab318947ee8d1b9219f62d11ab) is 6.0M, max 48.5M, 42.5M free. 
Feb 9 18:49:47.808000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:49:48.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:49:48.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:49:48.507000 audit: BPF prog-id=10 op=LOAD Feb 9 18:49:48.507000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:49:48.508000 audit: BPF prog-id=11 op=LOAD Feb 9 18:49:48.508000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:49:48.539000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:49:48.539000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:49:48.539000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:49:48.541000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:49:48.541000 audit[909]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079b9 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:49:48.541000 audit: CWD cwd="/"
Feb 9 18:49:48.541000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:48.541000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:48.541000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:49:50.660000 audit: BPF prog-id=12 op=LOAD
Feb 9 18:49:50.660000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 18:49:50.660000 audit: BPF prog-id=13 op=LOAD
Feb 9 18:49:50.660000 audit: BPF prog-id=14 op=LOAD
Feb 9 18:49:50.660000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 18:49:50.660000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 18:49:50.661000 audit: BPF prog-id=15 op=LOAD
Feb 9 18:49:50.661000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 18:49:50.661000 audit: BPF prog-id=16 op=LOAD
Feb 9 18:49:50.661000 audit: BPF prog-id=17 op=LOAD
Feb 9 18:49:50.661000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 18:49:50.661000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 18:49:50.662000 audit: BPF prog-id=18 op=LOAD
Feb 9 18:49:50.662000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 18:49:50.662000 audit: BPF prog-id=19 op=LOAD
Feb 9 18:49:50.662000 audit: BPF prog-id=20 op=LOAD
Feb 9 18:49:50.662000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 18:49:50.662000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 18:49:50.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.668000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 18:49:50.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.747000 audit: BPF prog-id=21 op=LOAD
Feb 9 18:49:50.747000 audit: BPF prog-id=22 op=LOAD
Feb 9 18:49:50.747000 audit: BPF prog-id=23 op=LOAD
Feb 9 18:49:50.747000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 18:49:50.747000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 18:49:50.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.783658 systemd[1]: Started systemd-journald.service.
Feb 9 18:49:50.779000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 18:49:50.779000 audit[980]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff9f078770 a2=4000 a3=7fff9f07880c items=0 ppid=1 pid=980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:49:50.779000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 18:49:48.538651 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:49:50.659126 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 18:49:48.538850 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:49:50.659136 systemd[1]: Unnecessary job was removed for dev-vda6.device.
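(Editor's aside, not part of the log: the `proctitle=` values in the audit records above are hex-encoded argv strings with NUL separators. A minimal sketch, assuming Python 3, decodes the torcx-generator value copied verbatim from the log; note the last argument is truncated in the log itself, so it decodes to a truncated path.)

```python
# Decode an audit PROCTITLE field: hex-encoded argv bytes, NUL-separated.
# The value below is copied verbatim (and already truncated) from the log above.
hex_proctitle = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"

argv = bytes.fromhex(hex_proctitle).decode("utf-8").split("\x00")
print(argv[0])  # /usr/lib/systemd/system-generators/torcx-generator
print(argv[1])  # /run/systemd/generator
```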
Feb 9 18:49:48.538865 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:49:50.662859 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 18:49:48.538890 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 18:49:50.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:48.538899 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 18:49:48.538924 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 18:49:50.784396 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 18:49:48.538935 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 18:49:48.539132 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 18:49:48.539161 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:49:48.539172 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:49:48.539487 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 18:49:48.539515 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 18:49:48.539535 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 18:49:48.539552 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 18:49:50.785187 systemd[1]: Mounted tmp.mount.
Feb 9 18:49:48.539572 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 18:49:48.539588 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 18:49:50.386369 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:49:50.386632 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:49:50.386724 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:49:50.386883 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:49:50.386941 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 18:49:50.386995 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-02-09T18:49:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 18:49:50.786155 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 18:49:50.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.787162 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:49:50.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.787992 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 18:49:50.788135 systemd[1]: Finished modprobe@configfs.service.
Feb 9 18:49:50.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.788970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 18:49:50.789070 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 18:49:50.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.789883 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 18:49:50.790036 systemd[1]: Finished modprobe@drm.service.
Feb 9 18:49:50.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.790894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 18:49:50.791040 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 18:49:50.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.791851 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 18:49:50.792015 systemd[1]: Finished modprobe@fuse.service.
Feb 9 18:49:50.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.792837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 18:49:50.793004 systemd[1]: Finished modprobe@loop.service.
Feb 9 18:49:50.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.793851 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:49:50.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.794700 systemd[1]: Finished systemd-network-generator.service.
Feb 9 18:49:50.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.795533 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 18:49:50.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.796557 systemd[1]: Reached target network-pre.target.
Feb 9 18:49:50.798244 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 18:49:50.799787 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 18:49:50.800451 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 18:49:50.801606 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 18:49:50.803219 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 18:49:50.803866 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 18:49:50.804742 systemd[1]: Starting systemd-random-seed.service...
Feb 9 18:49:50.805398 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 18:49:50.806253 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:49:50.807747 systemd[1]: Starting systemd-sysusers.service...
Feb 9 18:49:50.808774 systemd-journald[980]: Time spent on flushing to /var/log/journal/1e2fdcab318947ee8d1b9219f62d11ab is 18.383ms for 1109 entries.
Feb 9 18:49:50.808774 systemd-journald[980]: System Journal (/var/log/journal/1e2fdcab318947ee8d1b9219f62d11ab) is 8.0M, max 195.6M, 187.6M free.
Feb 9 18:49:51.098772 systemd-journald[980]: Received client request to flush runtime journal.
Feb 9 18:49:50.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:50.810632 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:49:50.811425 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 18:49:51.099533 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 18:49:50.812100 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 18:49:50.813622 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 18:49:50.867898 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:49:50.874861 systemd[1]: Finished systemd-sysusers.service.
Feb 9 18:49:50.954494 systemd[1]: Finished systemd-random-seed.service.
Feb 9 18:49:50.955592 systemd[1]: Reached target first-boot-complete.target.
Feb 9 18:49:51.099767 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 18:49:51.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:51.448629 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 18:49:51.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:51.449000 audit: BPF prog-id=24 op=LOAD
Feb 9 18:49:51.449000 audit: BPF prog-id=25 op=LOAD
Feb 9 18:49:51.449000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 18:49:51.449000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 18:49:51.450667 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:49:51.465154 systemd-udevd[1016]: Using default interface naming scheme 'v252'.
Feb 9 18:49:51.475625 systemd[1]: Started systemd-udevd.service.
Feb 9 18:49:51.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:51.477000 audit: BPF prog-id=26 op=LOAD
Feb 9 18:49:51.477859 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:49:51.486000 audit: BPF prog-id=27 op=LOAD
Feb 9 18:49:51.486000 audit: BPF prog-id=28 op=LOAD
Feb 9 18:49:51.486000 audit: BPF prog-id=29 op=LOAD
Feb 9 18:49:51.487829 systemd[1]: Starting systemd-userdbd.service...
Feb 9 18:49:51.497404 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 18:49:51.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:49:51.516672 systemd[1]: Started systemd-userdbd.service.
Feb 9 18:49:51.534225 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 18:49:51.539236 kernel: ACPI: button: Power Button [PWRF]
Feb 9 18:49:51.542637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
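(Editor's aside, not part of the log: unit names like `dev-disk-by\x2dlabel-OEM.device` use systemd's unit-name escaping, where `/` becomes `-` and a literal `-` becomes `\x2d`; the real tool for this is `systemd-escape`. A rough sketch of the reverse mapping, with a hypothetical helper name, assuming Python 3.9+:)

```python
import re

def unescape_device_unit(name: str) -> str:
    """Roughly invert systemd device-unit escaping (hypothetical helper,
    not the systemd implementation): strip the .device suffix, map the
    '-' separators back to '/', and decode \\xNN escapes such as \\x2d."""
    name = name.removesuffix(".device")
    # Split on '-' first so decoded '-' bytes are not mistaken for separators.
    parts = name.split("-")
    parts = [
        re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), p)
        for p in parts
    ]
    return "/" + "/".join(parts)

print(unescape_device_unit(r"dev-disk-by\x2dlabel-OEM.device"))  # /dev/disk/by-label/OEM
```

So the unit found above corresponds to the block device node /dev/disk/by-label/OEM.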
Feb 9 18:49:51.541000 audit[1024]: AVC avc: denied { confidentiality } for pid=1024 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 18:49:51.541000 audit[1024]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c445624ec0 a1=32194 a2=7fdec80b5bc5 a3=5 items=108 ppid=1016 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:49:51.541000 audit: CWD cwd="/"
Feb 9 18:49:51.541000 audit: PATH item=0 name=(null) inode=1040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=1 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=2 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=3 name=(null) inode=14814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=4 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=5 name=(null) inode=14815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=6 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=7 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=8 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=9 name=(null) inode=14817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=10 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=11 name=(null) inode=14818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=12 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=13 name=(null) inode=14819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=14 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=15 name=(null) inode=14820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=16 name=(null) inode=14816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=17 name=(null) inode=14821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=18 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=19 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=20 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=21 name=(null) inode=14823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=22 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=23 name=(null) inode=14824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=24 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=25 name=(null) inode=14825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=26 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=27 name=(null) inode=14826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=28 name=(null) inode=14822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=29 name=(null) inode=14827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=30 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=31 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=32 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=33 name=(null) inode=14829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=34 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=35 name=(null) inode=14830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=36 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=37 name=(null) inode=14831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=38 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=39 name=(null) inode=14832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=40 name=(null) inode=14828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=41 name=(null) inode=14833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=42 name=(null) inode=14813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=43 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=44 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=45 name=(null) inode=14835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=46 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=47 name=(null) inode=14836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=48 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=49 name=(null) inode=14837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=50 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=51 name=(null) inode=14838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=52 name=(null) inode=14834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=53 name=(null) inode=14839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=54 name=(null) inode=1040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=55 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=56 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=57 name=(null) inode=14841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=58 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=59 name=(null) inode=14842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=60 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=61 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=62 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=63 name=(null) inode=14844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=64 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=65 name=(null) inode=14845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=66 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=67 name=(null) inode=14846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=68 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=69 name=(null) inode=14847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:49:51.541000 audit: PATH item=70 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=71 name=(null) inode=14848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=72 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=73 name=(null) inode=14849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=74 name=(null) inode=14849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=75 name=(null) inode=14850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=76 name=(null) inode=14849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=77 name=(null) inode=14851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=78 name=(null) inode=14849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=79 name=(null) inode=14852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 18:49:51.541000 audit: PATH item=80 name=(null) inode=14849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=81 name=(null) inode=14853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=82 name=(null) inode=14849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=83 name=(null) inode=14854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=84 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=85 name=(null) inode=14855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=86 name=(null) inode=14855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=87 name=(null) inode=14856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=88 name=(null) inode=14855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=89 
name=(null) inode=14857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=90 name=(null) inode=14855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=91 name=(null) inode=14858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=92 name=(null) inode=14855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=93 name=(null) inode=14859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=94 name=(null) inode=14855 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=95 name=(null) inode=14860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=96 name=(null) inode=14840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=97 name=(null) inode=14861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=98 name=(null) inode=14861 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=99 name=(null) inode=14862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=100 name=(null) inode=14861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=101 name=(null) inode=14863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=102 name=(null) inode=14861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=103 name=(null) inode=14864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=104 name=(null) inode=14861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=105 name=(null) inode=14865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=106 name=(null) inode=14861 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PATH item=107 name=(null) inode=14866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:49:51.541000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 18:49:51.555238 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 9 18:49:51.568227 systemd-networkd[1026]: lo: Link UP Feb 9 18:49:51.568239 systemd-networkd[1026]: lo: Gained carrier Feb 9 18:49:51.568680 systemd-networkd[1026]: Enumeration completed Feb 9 18:49:51.568787 systemd[1]: Started systemd-networkd.service. Feb 9 18:49:51.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:51.569810 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:49:51.570751 systemd-networkd[1026]: eth0: Link UP Feb 9 18:49:51.570762 systemd-networkd[1026]: eth0: Gained carrier Feb 9 18:49:51.582231 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 18:49:51.583329 systemd-networkd[1026]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:49:51.596228 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 18:49:51.643240 kernel: kvm: Nested Virtualization enabled Feb 9 18:49:51.643342 kernel: SVM: kvm: Nested Paging enabled Feb 9 18:49:51.643359 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 18:49:51.643373 kernel: SVM: Virtual GIF supported Feb 9 18:49:51.660231 kernel: EDAC MC: Ver: 3.0.0 Feb 9 18:49:51.678647 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:49:51.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:49:51.680491 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:49:51.687823 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:49:51.714219 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:49:51.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:51.715065 systemd[1]: Reached target cryptsetup.target. Feb 9 18:49:51.716729 systemd[1]: Starting lvm2-activation.service... Feb 9 18:49:51.720139 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:49:51.746952 systemd[1]: Finished lvm2-activation.service. Feb 9 18:49:51.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:51.747637 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:49:51.748268 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:49:51.748290 systemd[1]: Reached target local-fs.target. Feb 9 18:49:51.748861 systemd[1]: Reached target machines.target. Feb 9 18:49:51.750324 systemd[1]: Starting ldconfig.service... Feb 9 18:49:51.751081 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:49:51.751116 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:49:51.751926 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:49:51.753381 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
Feb 9 18:49:51.754923 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:49:51.755870 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:49:51.755916 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:49:51.756858 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:49:51.759959 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl) Feb 9 18:49:51.760763 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:49:51.764947 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:49:51.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:51.794523 systemd-fsck[1062]: fsck.fat 4.2 (2021-01-31) Feb 9 18:49:51.794523 systemd-fsck[1062]: /dev/vda1: 789 files, 115339/258078 clusters Feb 9 18:49:51.778962 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:49:51.780506 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:49:51.782584 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:49:51.796621 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:49:51.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:51.799215 systemd[1]: Mounting boot.mount... Feb 9 18:49:52.004505 systemd[1]: Mounted boot.mount. 
Feb 9 18:49:52.014412 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:49:52.014926 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:49:52.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.020553 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:49:52.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.061730 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:49:52.066783 systemd[1]: Finished ldconfig.service. Feb 9 18:49:52.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.082275 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:49:52.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.084072 systemd[1]: Starting audit-rules.service... Feb 9 18:49:52.087000 audit: BPF prog-id=30 op=LOAD Feb 9 18:49:52.085348 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:49:52.086788 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:49:52.091000 audit: BPF prog-id=31 op=LOAD Feb 9 18:49:52.089303 systemd[1]: Starting systemd-resolved.service... Feb 9 18:49:52.092936 systemd[1]: Starting systemd-timesyncd.service... 
Feb 9 18:49:52.094378 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:49:52.095485 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:49:52.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.096451 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:49:52.101000 audit[1076]: SYSTEM_BOOT pid=1076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.102863 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:49:52.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:49:52.105000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:49:52.105000 audit[1085]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffcd7bc390 a2=420 a3=0 items=0 ppid=1065 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:49:52.105000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:49:52.106144 augenrules[1085]: No rules Feb 9 18:49:52.106673 systemd[1]: Finished audit-rules.service. Feb 9 18:49:52.116019 systemd[1]: Finished systemd-journal-catalog-update.service. 
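Editor's note: the audit PROCTITLE record above stores the auditctl command line as hex with NUL separators between arguments. A minimal decoding sketch (the hex string is copied verbatim from the record above; the helper name is my own):

```python
# Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
# The hex string below is copied from the PROCTITLE record in this log.
PROCTITLE_HEX = (
    "2F7362696E2F617564697463746C002D52"
    "002F6574632F61756469742F61756469742E72756C6573"
)

def decode_proctitle(hex_field: str) -> str:
    """Turn the raw hex field back into a readable command line."""
    raw = bytes.fromhex(hex_field)
    return " ".join(part.decode() for part in raw.split(b"\x00"))

print(decode_proctitle(PROCTITLE_HEX))
# -> /sbin/auditctl -R /etc/audit/audit.rules
```

This matches the surrounding records: `comm="auditctl"` loading `/etc/audit/audit.rules`, after which augenrules reports "No rules".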
Feb 9 18:49:52.117903 systemd[1]: Starting systemd-update-done.service... Feb 9 18:49:52.122618 systemd[1]: Finished systemd-update-done.service. Feb 9 18:49:52.142214 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:49:52.143226 systemd[1]: Reached target time-set.target. Feb 9 18:49:52.877833 systemd-timesyncd[1075]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:49:52.878074 systemd-timesyncd[1075]: Initial clock synchronization to Fri 2024-02-09 18:49:52.877774 UTC. Feb 9 18:49:52.880866 systemd-resolved[1069]: Positive Trust Anchors: Feb 9 18:49:52.880878 systemd-resolved[1069]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:49:52.880906 systemd-resolved[1069]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:49:52.886958 systemd-resolved[1069]: Defaulting to hostname 'linux'. Feb 9 18:49:52.888198 systemd[1]: Started systemd-resolved.service. Feb 9 18:49:52.888853 systemd[1]: Reached target network.target. Feb 9 18:49:52.889422 systemd[1]: Reached target nss-lookup.target. Feb 9 18:49:52.890011 systemd[1]: Reached target sysinit.target. Feb 9 18:49:52.890639 systemd[1]: Started motdgen.path. Feb 9 18:49:52.891159 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:49:52.892061 systemd[1]: Started logrotate.timer. Feb 9 18:49:52.892716 systemd[1]: Started mdadm.timer. Feb 9 18:49:52.893212 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 9 18:49:52.893829 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:49:52.893850 systemd[1]: Reached target paths.target. Feb 9 18:49:52.894437 systemd[1]: Reached target timers.target. Feb 9 18:49:52.895283 systemd[1]: Listening on dbus.socket. Feb 9 18:49:52.896864 systemd[1]: Starting docker.socket... Feb 9 18:49:52.899168 systemd[1]: Listening on sshd.socket. Feb 9 18:49:52.899816 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:49:52.900122 systemd[1]: Listening on docker.socket. Feb 9 18:49:52.900730 systemd[1]: Reached target sockets.target. Feb 9 18:49:52.901291 systemd[1]: Reached target basic.target. Feb 9 18:49:52.901871 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:49:52.901890 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:49:52.902555 systemd[1]: Starting containerd.service... Feb 9 18:49:52.903909 systemd[1]: Starting dbus.service... Feb 9 18:49:52.905053 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:49:52.906497 systemd[1]: Starting extend-filesystems.service... Feb 9 18:49:52.907214 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:49:52.907944 systemd[1]: Starting motdgen.service... Feb 9 18:49:52.910316 jq[1096]: false Feb 9 18:49:52.909295 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:49:52.910920 systemd[1]: Starting prepare-critools.service... Feb 9 18:49:52.912181 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:49:52.913597 systemd[1]: Starting sshd-keygen.service... 
Feb 9 18:49:52.916287 systemd[1]: Starting systemd-logind.service... Feb 9 18:49:52.918578 dbus-daemon[1095]: [system] SELinux support is enabled Feb 9 18:49:52.918746 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:49:52.918781 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:49:52.919156 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:49:52.919652 systemd[1]: Starting update-engine.service... Feb 9 18:49:52.920886 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:49:52.922050 systemd[1]: Started dbus.service. Feb 9 18:49:52.923464 jq[1115]: true Feb 9 18:49:52.924977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:49:52.925129 extend-filesystems[1097]: Found sr0 Feb 9 18:49:52.925129 extend-filesystems[1097]: Found vda Feb 9 18:49:52.925129 extend-filesystems[1097]: Found vda1 Feb 9 18:49:52.925129 extend-filesystems[1097]: Found vda2 Feb 9 18:49:52.925129 extend-filesystems[1097]: Found vda3 Feb 9 18:49:52.925129 extend-filesystems[1097]: Found usr Feb 9 18:49:52.925129 extend-filesystems[1097]: Found vda4 Feb 9 18:49:52.925129 extend-filesystems[1097]: Found vda6 Feb 9 18:49:52.925103 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:49:52.929763 extend-filesystems[1097]: Found vda7 Feb 9 18:49:52.929763 extend-filesystems[1097]: Found vda9 Feb 9 18:49:52.929763 extend-filesystems[1097]: Checking size of /dev/vda9 Feb 9 18:49:52.925801 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:49:52.926853 systemd[1]: Finished motdgen.service. Feb 9 18:49:52.932116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
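Editor's note: the extend-filesystems/resize2fs records in this section grow `/dev/vda9` from 553472 to 1864699 4k blocks (online resize of the root filesystem to fill the partition on first boot). The implied sizes are simple arithmetic on the figures from the log:

```python
BLOCK_SIZE = 4096        # the kernel logs "(4k) blocks" for /dev/vda9
OLD_BLOCKS = 553_472     # filesystem size before the resize
NEW_BLOCKS = 1_864_699   # filesystem size after the resize

old_bytes = OLD_BLOCKS * BLOCK_SIZE
new_bytes = NEW_BLOCKS * BLOCK_SIZE

print(f"before: {old_bytes / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")  # ~7.11 GiB
```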
Feb 9 18:49:52.932229 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:49:52.934641 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:49:52.934663 systemd[1]: Reached target system-config.target. Feb 9 18:49:52.945386 tar[1120]: crictl Feb 9 18:49:52.935788 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:49:52.945699 tar[1119]: ./ Feb 9 18:49:52.945699 tar[1119]: ./loopback Feb 9 18:49:52.945829 jq[1121]: true Feb 9 18:49:52.935801 systemd[1]: Reached target user-config.target. Feb 9 18:49:52.954047 env[1122]: time="2024-02-09T18:49:52.953989263Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:49:52.958501 update_engine[1113]: I0209 18:49:52.958146 1113 main.cc:92] Flatcar Update Engine starting Feb 9 18:49:52.962419 update_engine[1113]: I0209 18:49:52.962394 1113 update_check_scheduler.cc:74] Next update check in 2m9s Feb 9 18:49:52.962483 systemd[1]: Started update-engine.service. Feb 9 18:49:52.964561 extend-filesystems[1097]: Resized partition /dev/vda9 Feb 9 18:49:52.965500 extend-filesystems[1152]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:49:52.968056 bash[1148]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:49:52.970485 systemd[1]: Started locksmithd.service. Feb 9 18:49:52.971413 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:49:52.974632 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:49:52.989402 env[1122]: time="2024-02-09T18:49:52.989311592Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:49:52.989502 env[1122]: time="2024-02-09T18:49:52.989455212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:49:52.990605 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:49:52.991261 env[1122]: time="2024-02-09T18:49:52.991207517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:49:52.991261 env[1122]: time="2024-02-09T18:49:52.991256800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009158846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009192800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009220401Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009237524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009328023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009558976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009706954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009721371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009778147Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:49:53.011689 env[1122]: time="2024-02-09T18:49:53.009788877Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:49:53.011883 tar[1119]: ./bandwidth Feb 9 18:49:53.008756 systemd-logind[1109]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 18:49:53.008771 systemd-logind[1109]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 18:49:53.010059 systemd-logind[1109]: New seat seat0. Feb 9 18:49:53.011843 systemd[1]: Started systemd-logind.service. Feb 9 18:49:53.012875 extend-filesystems[1152]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:49:53.012875 extend-filesystems[1152]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:49:53.012875 extend-filesystems[1152]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:49:53.015650 extend-filesystems[1097]: Resized filesystem in /dev/vda9 Feb 9 18:49:53.016338 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:49:53.016470 systemd[1]: Finished extend-filesystems.service. Feb 9 18:49:53.026576 env[1122]: time="2024-02-09T18:49:53.026418588Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 18:49:53.026831 env[1122]: time="2024-02-09T18:49:53.026696489Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.026879793Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.026953080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.026968199Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.026981303Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027043830Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027057386Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027070761Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027086551Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027097571Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027108361Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027203730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027273952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027533318Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:49:53.028477 env[1122]: time="2024-02-09T18:49:53.027568374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027579825Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027639367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027652401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027663472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027673541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027739735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027752940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027762658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027772577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027785852Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027874147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027887773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027899054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:49:53.028777 env[1122]: time="2024-02-09T18:49:53.027909443Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:49:53.029028 env[1122]: time="2024-02-09T18:49:53.027924101Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:49:53.029028 env[1122]: time="2024-02-09T18:49:53.027935102Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:49:53.029028 env[1122]: time="2024-02-09T18:49:53.027954548Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:49:53.029028 env[1122]: time="2024-02-09T18:49:53.027986087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:49:53.029105 env[1122]: time="2024-02-09T18:49:53.028161917Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:49:53.029105 env[1122]: time="2024-02-09T18:49:53.028209476Z" level=info msg="Connect containerd service" Feb 9 18:49:53.029105 env[1122]: time="2024-02-09T18:49:53.028245654Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:49:53.029956 env[1122]: time="2024-02-09T18:49:53.029832359Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:49:53.030039 env[1122]: time="2024-02-09T18:49:53.030008529Z" level=info msg="Start subscribing containerd event" Feb 9 18:49:53.030132 env[1122]: time="2024-02-09T18:49:53.030115360Z" level=info msg="Start recovering state" Feb 9 18:49:53.030254 env[1122]: time="2024-02-09T18:49:53.030237889Z" level=info msg="Start event monitor" Feb 9 18:49:53.030330 env[1122]: time="2024-02-09T18:49:53.030312950Z" level=info msg="Start snapshots syncer" Feb 9 18:49:53.030407 env[1122]: time="2024-02-09T18:49:53.030390004Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:49:53.030476 env[1122]: time="2024-02-09T18:49:53.030459184Z" level=info msg="Start streaming server" Feb 9 18:49:53.030860 env[1122]: time="2024-02-09T18:49:53.030845168Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:49:53.030968 env[1122]: time="2024-02-09T18:49:53.030951818Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:49:53.031139 systemd[1]: Started containerd.service. 
Feb 9 18:49:53.055429 tar[1119]: ./ptp Feb 9 18:49:53.055448 locksmithd[1154]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:49:53.056148 env[1122]: time="2024-02-09T18:49:53.056118856Z" level=info msg="containerd successfully booted in 0.103456s" Feb 9 18:49:53.090186 tar[1119]: ./vlan Feb 9 18:49:53.120623 tar[1119]: ./host-device Feb 9 18:49:53.148916 tar[1119]: ./tuning Feb 9 18:49:53.173888 tar[1119]: ./vrf Feb 9 18:49:53.199957 tar[1119]: ./sbr Feb 9 18:49:53.225564 tar[1119]: ./tap Feb 9 18:49:53.240077 sshd_keygen[1116]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:49:53.257245 tar[1119]: ./dhcp Feb 9 18:49:53.257577 systemd[1]: Finished sshd-keygen.service. Feb 9 18:49:53.259551 systemd[1]: Starting issuegen.service... Feb 9 18:49:53.264551 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:49:53.264702 systemd[1]: Finished issuegen.service. Feb 9 18:49:53.266668 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:49:53.272127 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:49:53.273885 systemd[1]: Started getty@tty1.service. Feb 9 18:49:53.275387 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 18:49:53.276128 systemd[1]: Reached target getty.target. Feb 9 18:49:53.333301 tar[1119]: ./static Feb 9 18:49:53.354285 tar[1119]: ./firewall Feb 9 18:49:53.371606 systemd[1]: Finished prepare-critools.service. Feb 9 18:49:53.376747 systemd-networkd[1026]: eth0: Gained IPv6LL Feb 9 18:49:53.389205 tar[1119]: ./macvlan Feb 9 18:49:53.420507 tar[1119]: ./dummy Feb 9 18:49:53.451298 tar[1119]: ./bridge Feb 9 18:49:53.484962 tar[1119]: ./ipvlan Feb 9 18:49:53.516076 tar[1119]: ./portmap Feb 9 18:49:53.545483 tar[1119]: ./host-local Feb 9 18:49:53.580059 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:49:53.580936 systemd[1]: Reached target multi-user.target. Feb 9 18:49:53.582511 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Feb 9 18:49:53.588774 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:49:53.588885 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:49:53.589641 systemd[1]: Startup finished in 524ms (kernel) + 5.102s (initrd) + 5.084s (userspace) = 10.712s. Feb 9 18:49:53.752541 systemd[1]: Created slice system-sshd.slice. Feb 9 18:49:53.753619 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:41372.service. Feb 9 18:49:53.796133 sshd[1180]: Accepted publickey for core from 10.0.0.1 port 41372 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:49:53.797373 sshd[1180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:49:53.805497 systemd-logind[1109]: New session 1 of user core. Feb 9 18:49:53.806691 systemd[1]: Created slice user-500.slice. Feb 9 18:49:53.807959 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:49:53.815325 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:49:53.816463 systemd[1]: Starting user@500.service... Feb 9 18:49:53.818542 (systemd)[1183]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:49:53.882089 systemd[1183]: Queued start job for default target default.target. Feb 9 18:49:53.882492 systemd[1183]: Reached target paths.target. Feb 9 18:49:53.882513 systemd[1183]: Reached target sockets.target. Feb 9 18:49:53.882525 systemd[1183]: Reached target timers.target. Feb 9 18:49:53.882536 systemd[1183]: Reached target basic.target. Feb 9 18:49:53.882567 systemd[1183]: Reached target default.target. Feb 9 18:49:53.882603 systemd[1183]: Startup finished in 59ms. Feb 9 18:49:53.882653 systemd[1]: Started user@500.service. Feb 9 18:49:53.883486 systemd[1]: Started session-1.scope. Feb 9 18:49:53.933641 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:41382.service. 
Feb 9 18:49:53.974695 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 41382 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:49:53.975702 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:49:53.979067 systemd-logind[1109]: New session 2 of user core. Feb 9 18:49:53.979802 systemd[1]: Started session-2.scope. Feb 9 18:49:54.031122 sshd[1192]: pam_unix(sshd:session): session closed for user core Feb 9 18:49:54.034036 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:41382.service: Deactivated successfully. Feb 9 18:49:54.034506 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:49:54.034908 systemd-logind[1109]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:49:54.035899 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:41384.service. Feb 9 18:49:54.036567 systemd-logind[1109]: Removed session 2. Feb 9 18:49:54.074116 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 41384 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:49:54.075060 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:49:54.078162 systemd-logind[1109]: New session 3 of user core. Feb 9 18:49:54.078872 systemd[1]: Started session-3.scope. Feb 9 18:49:54.127474 sshd[1198]: pam_unix(sshd:session): session closed for user core Feb 9 18:49:54.130236 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:41384.service: Deactivated successfully. Feb 9 18:49:54.130750 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:49:54.131295 systemd-logind[1109]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:49:54.132423 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:41394.service. Feb 9 18:49:54.133099 systemd-logind[1109]: Removed session 3. 
Feb 9 18:49:54.171268 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 41394 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:49:54.172198 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:49:54.175396 systemd-logind[1109]: New session 4 of user core. Feb 9 18:49:54.176070 systemd[1]: Started session-4.scope. Feb 9 18:49:54.227192 sshd[1204]: pam_unix(sshd:session): session closed for user core Feb 9 18:49:54.229572 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:41394.service: Deactivated successfully. Feb 9 18:49:54.230176 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:49:54.230693 systemd-logind[1109]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:49:54.231627 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:41406.service. Feb 9 18:49:54.232287 systemd-logind[1109]: Removed session 4. Feb 9 18:49:54.269713 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:49:54.270614 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:49:54.273536 systemd-logind[1109]: New session 5 of user core. Feb 9 18:49:54.274260 systemd[1]: Started session-5.scope. Feb 9 18:49:54.327674 sudo[1213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:49:54.327835 sudo[1213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:49:54.830957 systemd[1]: Reloading. 
Feb 9 18:49:54.883538 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2024-02-09T18:49:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:49:54.883871 /usr/lib/systemd/system-generators/torcx-generator[1245]: time="2024-02-09T18:49:54Z" level=info msg="torcx already run" Feb 9 18:49:54.941950 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:49:54.941965 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:49:54.958499 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:49:55.025795 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:49:55.490086 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:49:55.490575 systemd[1]: Reached target network-online.target. Feb 9 18:49:55.492008 systemd[1]: Started kubelet.service. Feb 9 18:49:55.502235 systemd[1]: Starting coreos-metadata.service... Feb 9 18:49:55.508355 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 18:49:55.508492 systemd[1]: Finished coreos-metadata.service. 
Feb 9 18:49:55.532161 kubelet[1284]: E0209 18:49:55.532106 1284 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 18:49:55.534114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:49:55.534267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:49:55.676452 systemd[1]: Stopped kubelet.service. Feb 9 18:49:55.692161 systemd[1]: Reloading. Feb 9 18:49:55.756598 /usr/lib/systemd/system-generators/torcx-generator[1351]: time="2024-02-09T18:49:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:49:55.756625 /usr/lib/systemd/system-generators/torcx-generator[1351]: time="2024-02-09T18:49:55Z" level=info msg="torcx already run" Feb 9 18:49:55.812244 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:49:55.812261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:49:55.828454 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:49:55.898849 systemd[1]: Started kubelet.service. 
Feb 9 18:49:55.932517 kubelet[1392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:49:55.932517 kubelet[1392]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:49:55.932517 kubelet[1392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:49:55.932861 kubelet[1392]: I0209 18:49:55.932589 1392 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:49:56.245219 kubelet[1392]: I0209 18:49:56.245187 1392 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 18:49:56.245219 kubelet[1392]: I0209 18:49:56.245212 1392 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:49:56.245460 kubelet[1392]: I0209 18:49:56.245410 1392 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 18:49:56.246866 kubelet[1392]: I0209 18:49:56.246840 1392 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:49:56.252476 kubelet[1392]: I0209 18:49:56.252455 1392 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:49:56.252674 kubelet[1392]: I0209 18:49:56.252655 1392 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:49:56.252822 kubelet[1392]: I0209 18:49:56.252804 1392 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 18:49:56.252948 kubelet[1392]: I0209 18:49:56.252829 1392 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 18:49:56.252948 kubelet[1392]: I0209 18:49:56.252839 1392 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 18:49:56.252948 kubelet[1392]: I0209 
18:49:56.252939 1392 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:49:56.253036 kubelet[1392]: I0209 18:49:56.253022 1392 kubelet.go:393] "Attempting to sync node with API server" Feb 9 18:49:56.253072 kubelet[1392]: I0209 18:49:56.253038 1392 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:49:56.253072 kubelet[1392]: I0209 18:49:56.253059 1392 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:49:56.253131 kubelet[1392]: I0209 18:49:56.253074 1392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:49:56.253192 kubelet[1392]: E0209 18:49:56.253178 1392 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:49:56.253227 kubelet[1392]: E0209 18:49:56.253193 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:49:56.253824 kubelet[1392]: I0209 18:49:56.253809 1392 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:49:56.254047 kubelet[1392]: W0209 18:49:56.254032 1392 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 18:49:56.254404 kubelet[1392]: I0209 18:49:56.254392 1392 server.go:1232] "Started kubelet" Feb 9 18:49:56.254670 kubelet[1392]: I0209 18:49:56.254651 1392 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:49:56.254763 kubelet[1392]: I0209 18:49:56.254746 1392 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:49:56.255339 kubelet[1392]: I0209 18:49:56.254967 1392 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 18:49:56.255339 kubelet[1392]: E0209 18:49:56.255177 1392 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:49:56.255339 kubelet[1392]: E0209 18:49:56.255198 1392 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:49:56.255441 kubelet[1392]: I0209 18:49:56.255434 1392 server.go:462] "Adding debug handlers to kubelet server" Feb 9 18:49:56.257092 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:49:56.257221 kubelet[1392]: I0209 18:49:56.257205 1392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:49:56.257383 kubelet[1392]: I0209 18:49:56.257367 1392 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 18:49:56.258048 kubelet[1392]: E0209 18:49:56.258021 1392 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.31\" not found" Feb 9 18:49:56.258548 kubelet[1392]: I0209 18:49:56.258527 1392 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:49:56.258609 kubelet[1392]: I0209 18:49:56.258600 1392 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 18:49:56.260291 kubelet[1392]: E0209 18:49:56.260198 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b246623016c495", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 254377109, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 254377109, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:49:56.260594 kubelet[1392]: W0209 18:49:56.260562 1392 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:49:56.260639 kubelet[1392]: E0209 18:49:56.260611 1392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:49:56.265668 kubelet[1392]: E0209 18:49:56.265642 1392 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 18:49:56.266675 kubelet[1392]: W0209 18:49:56.266662 1392 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:49:56.266773 kubelet[1392]: E0209 18:49:56.266760 1392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:49:56.266857 kubelet[1392]: E0209 18:49:56.266648 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b2466230232221", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 255187489, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 255187489, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.267288 kubelet[1392]: W0209 18:49:56.267264 1392 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:49:56.267333 kubelet[1392]: E0209 18:49:56.267308 1392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:49:56.279576 kubelet[1392]: I0209 18:49:56.279536 1392 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:49:56.279576 kubelet[1392]: I0209 18:49:56.279554 1392 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:49:56.279576 kubelet[1392]: I0209 18:49:56.279569 1392 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:49:56.280619 kubelet[1392]: E0209 18:49:56.280429 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f1993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 
49, 56, 279040403, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279040403, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:49:56.281254 kubelet[1392]: E0209 18:49:56.281193 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f4771", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279052145, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279052145, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.282050 kubelet[1392]: E0209 18:49:56.281996 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f51f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279054840, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279054840, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:49:56.282285 kubelet[1392]: I0209 18:49:56.282268 1392 policy_none.go:49] "None policy: Start" Feb 9 18:49:56.282895 kubelet[1392]: I0209 18:49:56.282881 1392 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:49:56.283004 kubelet[1392]: I0209 18:49:56.282993 1392 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:49:56.288481 systemd[1]: Created slice kubepods.slice. Feb 9 18:49:56.292215 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 18:49:56.294577 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:49:56.299578 kubelet[1392]: I0209 18:49:56.299534 1392 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:49:56.299752 kubelet[1392]: I0209 18:49:56.299736 1392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:49:56.300452 kubelet[1392]: E0209 18:49:56.300421 1392 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.31\" not found" Feb 9 18:49:56.302488 kubelet[1392]: E0209 18:49:56.302402 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b2466232e0b767", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 301166439, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 301166439, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 9 18:49:56.332076 kubelet[1392]: I0209 18:49:56.332029 1392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 18:49:56.332762 kubelet[1392]: I0209 18:49:56.332742 1392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 18:49:56.332762 kubelet[1392]: I0209 18:49:56.332761 1392 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 18:49:56.332931 kubelet[1392]: I0209 18:49:56.332777 1392 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 18:49:56.332931 kubelet[1392]: E0209 18:49:56.332811 1392 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:49:56.334264 kubelet[1392]: W0209 18:49:56.334242 1392 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:49:56.334264 kubelet[1392]: E0209 18:49:56.334266 1392 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:49:56.359027 kubelet[1392]: I0209 18:49:56.358998 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.31" Feb 9 18:49:56.359998 kubelet[1392]: E0209 18:49:56.359972 1392 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.31" Feb 9 18:49:56.360529 kubelet[1392]: E0209 18:49:56.360433 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f1993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279040403, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 358962828, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f1993" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.361381 kubelet[1392]: E0209 18:49:56.361320 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f4771", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279052145, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 358972065, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f4771" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.362087 kubelet[1392]: E0209 18:49:56.362044 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f51f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279054840, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 358974580, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f51f8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.467900 kubelet[1392]: E0209 18:49:56.467841 1392 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 18:49:56.560753 kubelet[1392]: I0209 18:49:56.560723 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.31" Feb 9 18:49:56.562020 kubelet[1392]: E0209 18:49:56.561999 1392 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.31" Feb 9 18:49:56.562098 kubelet[1392]: E0209 18:49:56.561995 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f1993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279040403, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 560649881, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f1993" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:49:56.562822 kubelet[1392]: E0209 18:49:56.562767 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f4771", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279052145, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 560662204, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f4771" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.563473 kubelet[1392]: E0209 18:49:56.563377 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f51f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279054840, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 560697841, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f51f8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.869381 kubelet[1392]: E0209 18:49:56.869260 1392 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 18:49:56.963288 kubelet[1392]: I0209 18:49:56.963256 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.31" Feb 9 18:49:56.964426 kubelet[1392]: E0209 18:49:56.964384 1392 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.31" Feb 9 18:49:56.964426 kubelet[1392]: E0209 18:49:56.964356 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f1993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279040403, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 963201615, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f1993" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:49:56.965163 kubelet[1392]: E0209 18:49:56.965108 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f4771", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279052145, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 963212145, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f4771" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:56.965758 kubelet[1392]: E0209 18:49:56.965709 1392 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.31.17b24662318f51f8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.31", UID:"10.0.0.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 279054840, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 49, 56, 963215541, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.31"}': 'events "10.0.0.31.17b24662318f51f8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:49:57.247037 kubelet[1392]: I0209 18:49:57.246893 1392 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 18:49:57.254174 kubelet[1392]: E0209 18:49:57.254119 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:49:57.623080 kubelet[1392]: E0209 18:49:57.623034 1392 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.31" not found Feb 9 18:49:57.673505 kubelet[1392]: E0209 18:49:57.673463 1392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.31\" not found" node="10.0.0.31" Feb 9 18:49:57.765516 kubelet[1392]: I0209 18:49:57.765483 1392 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.31" Feb 9 18:49:57.769143 kubelet[1392]: I0209 18:49:57.769121 1392 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.31" Feb 9 18:49:57.961800 kubelet[1392]: I0209 18:49:57.961675 1392 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 18:49:57.962326 env[1122]: time="2024-02-09T18:49:57.962268155Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:49:57.962696 kubelet[1392]: I0209 18:49:57.962500 1392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 18:49:57.969603 sudo[1213]: pam_unix(sudo:session): session closed for user root Feb 9 18:49:57.971377 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 9 18:49:57.974437 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:41406.service: Deactivated successfully. Feb 9 18:49:57.975075 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:49:57.975678 systemd-logind[1109]: Session 5 logged out. 
Waiting for processes to exit. Feb 9 18:49:57.976609 systemd-logind[1109]: Removed session 5. Feb 9 18:49:58.254756 kubelet[1392]: E0209 18:49:58.254557 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:49:58.254756 kubelet[1392]: I0209 18:49:58.254620 1392 apiserver.go:52] "Watching apiserver" Feb 9 18:49:58.257353 kubelet[1392]: I0209 18:49:58.257317 1392 topology_manager.go:215] "Topology Admit Handler" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" podNamespace="kube-system" podName="cilium-4q5ql" Feb 9 18:49:58.258237 kubelet[1392]: I0209 18:49:58.257463 1392 topology_manager.go:215] "Topology Admit Handler" podUID="103366fa-6508-4530-8d29-857307a94c84" podNamespace="kube-system" podName="kube-proxy-nlmzh" Feb 9 18:49:58.259768 kubelet[1392]: I0209 18:49:58.259317 1392 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:49:58.264481 systemd[1]: Created slice kubepods-besteffort-pod103366fa_6508_4530_8d29_857307a94c84.slice. 
Feb 9 18:49:58.271666 kubelet[1392]: I0209 18:49:58.271614 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-kernel\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.271666 kubelet[1392]: I0209 18:49:58.271664 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-hubble-tls\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.271822 kubelet[1392]: I0209 18:49:58.271698 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrmn\" (UniqueName: \"kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-kube-api-access-jsrmn\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.271822 kubelet[1392]: I0209 18:49:58.271725 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/103366fa-6508-4530-8d29-857307a94c84-kube-proxy\") pod \"kube-proxy-nlmzh\" (UID: \"103366fa-6508-4530-8d29-857307a94c84\") " pod="kube-system/kube-proxy-nlmzh" Feb 9 18:49:58.271822 kubelet[1392]: I0209 18:49:58.271750 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/103366fa-6508-4530-8d29-857307a94c84-xtables-lock\") pod \"kube-proxy-nlmzh\" (UID: \"103366fa-6508-4530-8d29-857307a94c84\") " pod="kube-system/kube-proxy-nlmzh" Feb 9 18:49:58.271822 kubelet[1392]: I0209 18:49:58.271775 1392 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/103366fa-6508-4530-8d29-857307a94c84-lib-modules\") pod \"kube-proxy-nlmzh\" (UID: \"103366fa-6508-4530-8d29-857307a94c84\") " pod="kube-system/kube-proxy-nlmzh" Feb 9 18:49:58.271822 kubelet[1392]: I0209 18:49:58.271819 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cni-path\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.271952 kubelet[1392]: I0209 18:49:58.271888 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-xtables-lock\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.271952 kubelet[1392]: I0209 18:49:58.271937 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-etc-cni-netd\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272142 kubelet[1392]: I0209 18:49:58.272119 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16358afd-f262-4a8e-8b8d-154c71a46ed4-clustermesh-secrets\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272177 kubelet[1392]: I0209 18:49:58.272158 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-net\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272201 kubelet[1392]: I0209 18:49:58.272178 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-hostproc\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272223 kubelet[1392]: I0209 18:49:58.272203 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-cgroup\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272245 kubelet[1392]: I0209 18:49:58.272235 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-config-path\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272289 kubelet[1392]: I0209 18:49:58.272277 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-run\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272333 kubelet[1392]: I0209 18:49:58.272309 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-bpf-maps\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " 
pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272366 kubelet[1392]: I0209 18:49:58.272358 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-lib-modules\") pod \"cilium-4q5ql\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " pod="kube-system/cilium-4q5ql" Feb 9 18:49:58.272427 kubelet[1392]: I0209 18:49:58.272406 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zklvl\" (UniqueName: \"kubernetes.io/projected/103366fa-6508-4530-8d29-857307a94c84-kube-api-access-zklvl\") pod \"kube-proxy-nlmzh\" (UID: \"103366fa-6508-4530-8d29-857307a94c84\") " pod="kube-system/kube-proxy-nlmzh" Feb 9 18:49:58.277595 systemd[1]: Created slice kubepods-burstable-pod16358afd_f262_4a8e_8b8d_154c71a46ed4.slice. Feb 9 18:49:58.575775 kubelet[1392]: E0209 18:49:58.575748 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:49:58.576462 env[1122]: time="2024-02-09T18:49:58.576423135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nlmzh,Uid:103366fa-6508-4530-8d29-857307a94c84,Namespace:kube-system,Attempt:0,}" Feb 9 18:49:58.587101 kubelet[1392]: E0209 18:49:58.587051 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:49:58.587486 env[1122]: time="2024-02-09T18:49:58.587460140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q5ql,Uid:16358afd-f262-4a8e-8b8d-154c71a46ed4,Namespace:kube-system,Attempt:0,}" Feb 9 18:49:59.254903 kubelet[1392]: E0209 18:49:59.254865 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 18:50:00.255672 kubelet[1392]: E0209 18:50:00.255631 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:01.256748 kubelet[1392]: E0209 18:50:01.256702 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:01.983716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111430852.mount: Deactivated successfully. Feb 9 18:50:01.989031 env[1122]: time="2024-02-09T18:50:01.988963487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:01.991371 env[1122]: time="2024-02-09T18:50:01.991304266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:01.992750 env[1122]: time="2024-02-09T18:50:01.992702318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:01.993914 env[1122]: time="2024-02-09T18:50:01.993890075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:01.995449 env[1122]: time="2024-02-09T18:50:01.995415635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:01.997058 env[1122]: time="2024-02-09T18:50:01.997017709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 18:50:01.999564 env[1122]: time="2024-02-09T18:50:01.999529579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:02.000131 env[1122]: time="2024-02-09T18:50:02.000104377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:02.017212 env[1122]: time="2024-02-09T18:50:02.017141972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:02.017212 env[1122]: time="2024-02-09T18:50:02.017189782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:02.017212 env[1122]: time="2024-02-09T18:50:02.017204429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:02.017487 env[1122]: time="2024-02-09T18:50:02.017429351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c506768f605f62d858a0f9c149bab988b4331aede344804f8241f60bec442735 pid=1445 runtime=io.containerd.runc.v2 Feb 9 18:50:02.023476 env[1122]: time="2024-02-09T18:50:02.023407300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:02.023476 env[1122]: time="2024-02-09T18:50:02.023446934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:02.023733 env[1122]: time="2024-02-09T18:50:02.023692775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:02.024083 env[1122]: time="2024-02-09T18:50:02.024018646Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca pid=1462 runtime=io.containerd.runc.v2 Feb 9 18:50:02.035808 systemd[1]: Started cri-containerd-c506768f605f62d858a0f9c149bab988b4331aede344804f8241f60bec442735.scope. Feb 9 18:50:02.036821 systemd[1]: Started cri-containerd-e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca.scope. Feb 9 18:50:02.060984 env[1122]: time="2024-02-09T18:50:02.060942859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nlmzh,Uid:103366fa-6508-4530-8d29-857307a94c84,Namespace:kube-system,Attempt:0,} returns sandbox id \"c506768f605f62d858a0f9c149bab988b4331aede344804f8241f60bec442735\"" Feb 9 18:50:02.061350 env[1122]: time="2024-02-09T18:50:02.061320016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q5ql,Uid:16358afd-f262-4a8e-8b8d-154c71a46ed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\"" Feb 9 18:50:02.061819 kubelet[1392]: E0209 18:50:02.061801 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:02.062813 kubelet[1392]: E0209 18:50:02.062798 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:02.063283 env[1122]: time="2024-02-09T18:50:02.063257168Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 18:50:02.256936 kubelet[1392]: E0209 18:50:02.256816 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:03.257782 kubelet[1392]: E0209 18:50:03.257752 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:03.513009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224178454.mount: Deactivated successfully. Feb 9 18:50:04.035136 env[1122]: time="2024-02-09T18:50:04.035090826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:04.036518 env[1122]: time="2024-02-09T18:50:04.036497864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:04.037972 env[1122]: time="2024-02-09T18:50:04.037948203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:04.039017 env[1122]: time="2024-02-09T18:50:04.038996799Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:04.039308 env[1122]: time="2024-02-09T18:50:04.039290490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 18:50:04.040083 env[1122]: time="2024-02-09T18:50:04.040051617Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:50:04.040729 env[1122]: time="2024-02-09T18:50:04.040707256Z" level=info msg="CreateContainer within sandbox \"c506768f605f62d858a0f9c149bab988b4331aede344804f8241f60bec442735\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:50:04.052891 env[1122]: time="2024-02-09T18:50:04.052856887Z" level=info msg="CreateContainer within sandbox \"c506768f605f62d858a0f9c149bab988b4331aede344804f8241f60bec442735\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17f98287853d8dc5e1541ceed6e70d1d153a4427ee6fa899f79abacf0397f1f1\"" Feb 9 18:50:04.053227 env[1122]: time="2024-02-09T18:50:04.053208817Z" level=info msg="StartContainer for \"17f98287853d8dc5e1541ceed6e70d1d153a4427ee6fa899f79abacf0397f1f1\"" Feb 9 18:50:04.068826 systemd[1]: Started cri-containerd-17f98287853d8dc5e1541ceed6e70d1d153a4427ee6fa899f79abacf0397f1f1.scope. Feb 9 18:50:04.091406 env[1122]: time="2024-02-09T18:50:04.090564940Z" level=info msg="StartContainer for \"17f98287853d8dc5e1541ceed6e70d1d153a4427ee6fa899f79abacf0397f1f1\" returns successfully" Feb 9 18:50:04.257881 kubelet[1392]: E0209 18:50:04.257860 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:04.346312 kubelet[1392]: E0209 18:50:04.346238 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:04.353108 kubelet[1392]: I0209 18:50:04.353094 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nlmzh" podStartSLOduration=5.375983139 podCreationTimestamp="2024-02-09 18:49:57 +0000 UTC" firstStartedPulling="2024-02-09 18:50:02.062515457 +0000 UTC m=+6.160381975" lastFinishedPulling="2024-02-09 18:50:04.039598417 +0000 UTC 
m=+8.137464935" observedRunningTime="2024-02-09 18:50:04.35285337 +0000 UTC m=+8.450719898" watchObservedRunningTime="2024-02-09 18:50:04.353066099 +0000 UTC m=+8.450932617" Feb 9 18:50:05.258975 kubelet[1392]: E0209 18:50:05.258903 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:05.346826 kubelet[1392]: E0209 18:50:05.346799 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:06.259168 kubelet[1392]: E0209 18:50:06.259124 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:07.260300 kubelet[1392]: E0209 18:50:07.260217 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:08.261071 kubelet[1392]: E0209 18:50:08.261015 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:08.747282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96634674.mount: Deactivated successfully. 
Feb 9 18:50:09.261981 kubelet[1392]: E0209 18:50:09.261924 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:10.262879 kubelet[1392]: E0209 18:50:10.262848 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:11.263882 kubelet[1392]: E0209 18:50:11.263823 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:12.264997 kubelet[1392]: E0209 18:50:12.264959 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:12.761317 env[1122]: time="2024-02-09T18:50:12.761262505Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:12.763254 env[1122]: time="2024-02-09T18:50:12.763195029Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:12.764944 env[1122]: time="2024-02-09T18:50:12.764911096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:12.765483 env[1122]: time="2024-02-09T18:50:12.765447803Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 18:50:12.767237 env[1122]: time="2024-02-09T18:50:12.767178558Z" level=info 
msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:50:12.779140 env[1122]: time="2024-02-09T18:50:12.779067379Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\"" Feb 9 18:50:12.779613 env[1122]: time="2024-02-09T18:50:12.779588406Z" level=info msg="StartContainer for \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\"" Feb 9 18:50:12.793651 systemd[1]: run-containerd-runc-k8s.io-f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991-runc.RrcRPX.mount: Deactivated successfully. Feb 9 18:50:12.796511 systemd[1]: Started cri-containerd-f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991.scope. Feb 9 18:50:12.821843 env[1122]: time="2024-02-09T18:50:12.821789223Z" level=info msg="StartContainer for \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\" returns successfully" Feb 9 18:50:12.826947 systemd[1]: cri-containerd-f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991.scope: Deactivated successfully. 
Feb 9 18:50:13.265979 kubelet[1392]: E0209 18:50:13.265931 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:13.359869 kubelet[1392]: E0209 18:50:13.359849 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:13.529004 env[1122]: time="2024-02-09T18:50:13.528889436Z" level=info msg="shim disconnected" id=f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991 Feb 9 18:50:13.529004 env[1122]: time="2024-02-09T18:50:13.528936484Z" level=warning msg="cleaning up after shim disconnected" id=f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991 namespace=k8s.io Feb 9 18:50:13.529004 env[1122]: time="2024-02-09T18:50:13.528945841Z" level=info msg="cleaning up dead shim" Feb 9 18:50:13.535157 env[1122]: time="2024-02-09T18:50:13.535119016Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1727 runtime=io.containerd.runc.v2\n" Feb 9 18:50:13.774070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991-rootfs.mount: Deactivated successfully. 
Feb 9 18:50:14.266665 kubelet[1392]: E0209 18:50:14.266633 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:14.362505 kubelet[1392]: E0209 18:50:14.362489 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:14.364067 env[1122]: time="2024-02-09T18:50:14.364033180Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:50:14.378360 env[1122]: time="2024-02-09T18:50:14.378292226Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\"" Feb 9 18:50:14.378811 env[1122]: time="2024-02-09T18:50:14.378762978Z" level=info msg="StartContainer for \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\"" Feb 9 18:50:14.394363 systemd[1]: Started cri-containerd-d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c.scope. Feb 9 18:50:14.417252 env[1122]: time="2024-02-09T18:50:14.417204286Z" level=info msg="StartContainer for \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\" returns successfully" Feb 9 18:50:14.424352 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:50:14.424630 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:50:14.424822 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:50:14.426448 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:50:14.427630 systemd[1]: cri-containerd-d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c.scope: Deactivated successfully. 
Feb 9 18:50:14.435381 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:50:14.448318 env[1122]: time="2024-02-09T18:50:14.448269072Z" level=info msg="shim disconnected" id=d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c Feb 9 18:50:14.448318 env[1122]: time="2024-02-09T18:50:14.448314998Z" level=warning msg="cleaning up after shim disconnected" id=d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c namespace=k8s.io Feb 9 18:50:14.448477 env[1122]: time="2024-02-09T18:50:14.448326159Z" level=info msg="cleaning up dead shim" Feb 9 18:50:14.455130 env[1122]: time="2024-02-09T18:50:14.455103517Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1791 runtime=io.containerd.runc.v2\n" Feb 9 18:50:14.774341 systemd[1]: run-containerd-runc-k8s.io-d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c-runc.GBIsb6.mount: Deactivated successfully. Feb 9 18:50:14.774494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c-rootfs.mount: Deactivated successfully. 
Feb 9 18:50:15.266799 kubelet[1392]: E0209 18:50:15.266736 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:15.366310 kubelet[1392]: E0209 18:50:15.366278 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:15.368021 env[1122]: time="2024-02-09T18:50:15.367983476Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:50:15.388092 env[1122]: time="2024-02-09T18:50:15.388026627Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\"" Feb 9 18:50:15.388652 env[1122]: time="2024-02-09T18:50:15.388616012Z" level=info msg="StartContainer for \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\"" Feb 9 18:50:15.405631 systemd[1]: Started cri-containerd-69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d.scope. Feb 9 18:50:15.431894 systemd[1]: cri-containerd-69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d.scope: Deactivated successfully. Feb 9 18:50:15.584736 env[1122]: time="2024-02-09T18:50:15.584692886Z" level=info msg="StartContainer for \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\" returns successfully" Feb 9 18:50:15.773946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d-rootfs.mount: Deactivated successfully. 
Feb 9 18:50:15.778163 env[1122]: time="2024-02-09T18:50:15.778115023Z" level=info msg="shim disconnected" id=69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d Feb 9 18:50:15.778240 env[1122]: time="2024-02-09T18:50:15.778170537Z" level=warning msg="cleaning up after shim disconnected" id=69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d namespace=k8s.io Feb 9 18:50:15.778240 env[1122]: time="2024-02-09T18:50:15.778181698Z" level=info msg="cleaning up dead shim" Feb 9 18:50:15.784608 env[1122]: time="2024-02-09T18:50:15.784547143Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1846 runtime=io.containerd.runc.v2\n" Feb 9 18:50:16.253813 kubelet[1392]: E0209 18:50:16.253773 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:16.267162 kubelet[1392]: E0209 18:50:16.267137 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:16.369023 kubelet[1392]: E0209 18:50:16.368991 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:16.370567 env[1122]: time="2024-02-09T18:50:16.370534299Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:50:16.385890 env[1122]: time="2024-02-09T18:50:16.385836621Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\"" Feb 9 18:50:16.386416 env[1122]: time="2024-02-09T18:50:16.386393365Z" level=info 
msg="StartContainer for \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\"" Feb 9 18:50:16.399810 systemd[1]: Started cri-containerd-ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5.scope. Feb 9 18:50:16.420525 systemd[1]: cri-containerd-ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5.scope: Deactivated successfully. Feb 9 18:50:16.422417 env[1122]: time="2024-02-09T18:50:16.422378627Z" level=info msg="StartContainer for \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\" returns successfully" Feb 9 18:50:16.439700 env[1122]: time="2024-02-09T18:50:16.439646855Z" level=info msg="shim disconnected" id=ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5 Feb 9 18:50:16.439826 env[1122]: time="2024-02-09T18:50:16.439702840Z" level=warning msg="cleaning up after shim disconnected" id=ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5 namespace=k8s.io Feb 9 18:50:16.439826 env[1122]: time="2024-02-09T18:50:16.439716405Z" level=info msg="cleaning up dead shim" Feb 9 18:50:16.446661 env[1122]: time="2024-02-09T18:50:16.446612536Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1900 runtime=io.containerd.runc.v2\n" Feb 9 18:50:16.774051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5-rootfs.mount: Deactivated successfully. 
Feb 9 18:50:17.267500 kubelet[1392]: E0209 18:50:17.267449 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:17.371921 kubelet[1392]: E0209 18:50:17.371896 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:17.373646 env[1122]: time="2024-02-09T18:50:17.373604666Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:50:17.389806 env[1122]: time="2024-02-09T18:50:17.389757953Z" level=info msg="CreateContainer within sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\"" Feb 9 18:50:17.390253 env[1122]: time="2024-02-09T18:50:17.390213918Z" level=info msg="StartContainer for \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\"" Feb 9 18:50:17.405281 systemd[1]: Started cri-containerd-4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b.scope. Feb 9 18:50:17.426918 env[1122]: time="2024-02-09T18:50:17.426862223Z" level=info msg="StartContainer for \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" returns successfully" Feb 9 18:50:17.563959 kubelet[1392]: I0209 18:50:17.563943 1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:50:17.692611 kernel: Initializing XFRM netlink socket Feb 9 18:50:17.774088 systemd[1]: run-containerd-runc-k8s.io-4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b-runc.QjPvEL.mount: Deactivated successfully. 
Feb 9 18:50:18.268522 kubelet[1392]: E0209 18:50:18.268466 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:18.376742 kubelet[1392]: E0209 18:50:18.376704 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:18.390781 kubelet[1392]: I0209 18:50:18.390753 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4q5ql" podStartSLOduration=10.688547485 podCreationTimestamp="2024-02-09 18:49:57 +0000 UTC" firstStartedPulling="2024-02-09 18:50:02.06359423 +0000 UTC m=+6.161460748" lastFinishedPulling="2024-02-09 18:50:12.765762463 +0000 UTC m=+16.863628981" observedRunningTime="2024-02-09 18:50:18.390174534 +0000 UTC m=+22.488041082" watchObservedRunningTime="2024-02-09 18:50:18.390715718 +0000 UTC m=+22.488582237" Feb 9 18:50:19.269651 kubelet[1392]: E0209 18:50:19.269577 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:19.297281 systemd-networkd[1026]: cilium_host: Link UP Feb 9 18:50:19.297421 systemd-networkd[1026]: cilium_net: Link UP Feb 9 18:50:19.297424 systemd-networkd[1026]: cilium_net: Gained carrier Feb 9 18:50:19.297576 systemd-networkd[1026]: cilium_host: Gained carrier Feb 9 18:50:19.310734 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:50:19.300701 systemd-networkd[1026]: cilium_host: Gained IPv6LL Feb 9 18:50:19.368641 systemd-networkd[1026]: cilium_vxlan: Link UP Feb 9 18:50:19.368650 systemd-networkd[1026]: cilium_vxlan: Gained carrier Feb 9 18:50:19.378352 kubelet[1392]: E0209 18:50:19.378326 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 
18:50:19.539613 kernel: NET: Registered PF_ALG protocol family Feb 9 18:50:19.848536 kubelet[1392]: I0209 18:50:19.848418 1392 topology_manager.go:215] "Topology Admit Handler" podUID="255d28b1-de71-4dda-a30d-c41521f644ce" podNamespace="default" podName="nginx-deployment-6d5f899847-h68vx" Feb 9 18:50:19.852376 systemd[1]: Created slice kubepods-besteffort-pod255d28b1_de71_4dda_a30d_c41521f644ce.slice. Feb 9 18:50:19.892433 kubelet[1392]: I0209 18:50:19.892404 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4snvg\" (UniqueName: \"kubernetes.io/projected/255d28b1-de71-4dda-a30d-c41521f644ce-kube-api-access-4snvg\") pod \"nginx-deployment-6d5f899847-h68vx\" (UID: \"255d28b1-de71-4dda-a30d-c41521f644ce\") " pod="default/nginx-deployment-6d5f899847-h68vx" Feb 9 18:50:19.993967 systemd-networkd[1026]: lxc_health: Link UP Feb 9 18:50:20.007611 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:50:20.007875 systemd-networkd[1026]: lxc_health: Gained carrier Feb 9 18:50:20.154637 env[1122]: time="2024-02-09T18:50:20.154513880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-h68vx,Uid:255d28b1-de71-4dda-a30d-c41521f644ce,Namespace:default,Attempt:0,}" Feb 9 18:50:20.182480 systemd-networkd[1026]: lxc1c37ab6e87d4: Link UP Feb 9 18:50:20.190693 kernel: eth0: renamed from tmp16bfc Feb 9 18:50:20.196981 systemd-networkd[1026]: cilium_net: Gained IPv6LL Feb 9 18:50:20.200617 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1c37ab6e87d4: link becomes ready Feb 9 18:50:20.204085 systemd-networkd[1026]: lxc1c37ab6e87d4: Gained carrier Feb 9 18:50:20.269767 kubelet[1392]: E0209 18:50:20.269715 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:20.379970 kubelet[1392]: E0209 18:50:20.379936 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:20.896696 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL Feb 9 18:50:21.270453 kubelet[1392]: E0209 18:50:21.270317 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:21.290214 systemd-networkd[1026]: lxc_health: Gained IPv6LL Feb 9 18:50:21.381703 kubelet[1392]: E0209 18:50:21.381668 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:21.984720 systemd-networkd[1026]: lxc1c37ab6e87d4: Gained IPv6LL Feb 9 18:50:22.270956 kubelet[1392]: E0209 18:50:22.270820 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:23.271637 kubelet[1392]: E0209 18:50:23.271594 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:23.443615 env[1122]: time="2024-02-09T18:50:23.443538360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:23.443615 env[1122]: time="2024-02-09T18:50:23.443579497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:23.443615 env[1122]: time="2024-02-09T18:50:23.443599995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:23.443964 env[1122]: time="2024-02-09T18:50:23.443719189Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16bfc37df9bb6e141396efd605c4bdeb22e08d3865f60d54e9a7e4a21bd8bca5 pid=2453 runtime=io.containerd.runc.v2 Feb 9 18:50:23.456989 systemd[1]: Started cri-containerd-16bfc37df9bb6e141396efd605c4bdeb22e08d3865f60d54e9a7e4a21bd8bca5.scope. Feb 9 18:50:23.468531 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:50:23.495266 env[1122]: time="2024-02-09T18:50:23.495214282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-h68vx,Uid:255d28b1-de71-4dda-a30d-c41521f644ce,Namespace:default,Attempt:0,} returns sandbox id \"16bfc37df9bb6e141396efd605c4bdeb22e08d3865f60d54e9a7e4a21bd8bca5\"" Feb 9 18:50:23.496666 env[1122]: time="2024-02-09T18:50:23.496646247Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:50:24.272703 kubelet[1392]: E0209 18:50:24.272647 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:25.273774 kubelet[1392]: E0209 18:50:25.273723 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:26.274407 kubelet[1392]: E0209 18:50:26.274372 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:26.619247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019382053.mount: Deactivated successfully. 
Feb 9 18:50:26.738104 kubelet[1392]: I0209 18:50:26.738060 1392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:50:26.738797 kubelet[1392]: E0209 18:50:26.738776 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:27.274940 kubelet[1392]: E0209 18:50:27.274887 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:27.391090 kubelet[1392]: E0209 18:50:27.391061 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:27.439562 env[1122]: time="2024-02-09T18:50:27.439495320Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.441303 env[1122]: time="2024-02-09T18:50:27.441241759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.442990 env[1122]: time="2024-02-09T18:50:27.442965684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.444625 env[1122]: time="2024-02-09T18:50:27.444592373Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:27.445094 env[1122]: time="2024-02-09T18:50:27.445063827Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 18:50:27.446527 env[1122]: time="2024-02-09T18:50:27.446498437Z" level=info msg="CreateContainer within sandbox \"16bfc37df9bb6e141396efd605c4bdeb22e08d3865f60d54e9a7e4a21bd8bca5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 18:50:27.457887 env[1122]: time="2024-02-09T18:50:27.457831978Z" level=info msg="CreateContainer within sandbox \"16bfc37df9bb6e141396efd605c4bdeb22e08d3865f60d54e9a7e4a21bd8bca5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e1265a2c426fede00f62c46f929279393ad03e93c1cf0d0a69a12cae4e64014e\"" Feb 9 18:50:27.458506 env[1122]: time="2024-02-09T18:50:27.458435414Z" level=info msg="StartContainer for \"e1265a2c426fede00f62c46f929279393ad03e93c1cf0d0a69a12cae4e64014e\"" Feb 9 18:50:27.474367 systemd[1]: Started cri-containerd-e1265a2c426fede00f62c46f929279393ad03e93c1cf0d0a69a12cae4e64014e.scope. Feb 9 18:50:27.496345 env[1122]: time="2024-02-09T18:50:27.496009498Z" level=info msg="StartContainer for \"e1265a2c426fede00f62c46f929279393ad03e93c1cf0d0a69a12cae4e64014e\" returns successfully" Feb 9 18:50:27.619323 systemd[1]: run-containerd-runc-k8s.io-e1265a2c426fede00f62c46f929279393ad03e93c1cf0d0a69a12cae4e64014e-runc.30tDQU.mount: Deactivated successfully. 
Feb 9 18:50:28.275353 kubelet[1392]: E0209 18:50:28.275310 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:28.518559 kubelet[1392]: I0209 18:50:28.518521 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-h68vx" podStartSLOduration=5.569573381 podCreationTimestamp="2024-02-09 18:50:19 +0000 UTC" firstStartedPulling="2024-02-09 18:50:23.496404874 +0000 UTC m=+27.594271382" lastFinishedPulling="2024-02-09 18:50:27.445311802 +0000 UTC m=+31.543178320" observedRunningTime="2024-02-09 18:50:28.518332627 +0000 UTC m=+32.616199145" watchObservedRunningTime="2024-02-09 18:50:28.518480319 +0000 UTC m=+32.616346848" Feb 9 18:50:29.276438 kubelet[1392]: E0209 18:50:29.276376 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:30.276718 kubelet[1392]: E0209 18:50:30.276672 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:31.275102 kubelet[1392]: I0209 18:50:31.275065 1392 topology_manager.go:215] "Topology Admit Handler" podUID="fc86b718-383a-406a-90b1-770ddc8f1318" podNamespace="default" podName="nfs-server-provisioner-0" Feb 9 18:50:31.276807 kubelet[1392]: E0209 18:50:31.276779 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:31.279086 systemd[1]: Created slice kubepods-besteffort-podfc86b718_383a_406a_90b1_770ddc8f1318.slice. 
Feb 9 18:50:31.345123 kubelet[1392]: I0209 18:50:31.345082 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs9wm\" (UniqueName: \"kubernetes.io/projected/fc86b718-383a-406a-90b1-770ddc8f1318-kube-api-access-bs9wm\") pod \"nfs-server-provisioner-0\" (UID: \"fc86b718-383a-406a-90b1-770ddc8f1318\") " pod="default/nfs-server-provisioner-0" Feb 9 18:50:31.345288 kubelet[1392]: I0209 18:50:31.345141 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fc86b718-383a-406a-90b1-770ddc8f1318-data\") pod \"nfs-server-provisioner-0\" (UID: \"fc86b718-383a-406a-90b1-770ddc8f1318\") " pod="default/nfs-server-provisioner-0" Feb 9 18:50:31.581303 env[1122]: time="2024-02-09T18:50:31.581256279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fc86b718-383a-406a-90b1-770ddc8f1318,Namespace:default,Attempt:0,}" Feb 9 18:50:32.208820 systemd-networkd[1026]: lxc550d711a5f99: Link UP Feb 9 18:50:32.215616 kernel: eth0: renamed from tmp99a42 Feb 9 18:50:32.220187 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:50:32.220245 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc550d711a5f99: link becomes ready Feb 9 18:50:32.220324 systemd-networkd[1026]: lxc550d711a5f99: Gained carrier Feb 9 18:50:32.277744 kubelet[1392]: E0209 18:50:32.277699 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:32.401952 env[1122]: time="2024-02-09T18:50:32.401879990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:32.401952 env[1122]: time="2024-02-09T18:50:32.401924093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:32.402138 env[1122]: time="2024-02-09T18:50:32.401943621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:32.402138 env[1122]: time="2024-02-09T18:50:32.402068389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99a42614641095646743a8f28cd7669f67c0376e736f99a4412fa5d84477dce7 pid=2583 runtime=io.containerd.runc.v2 Feb 9 18:50:32.416344 systemd[1]: Started cri-containerd-99a42614641095646743a8f28cd7669f67c0376e736f99a4412fa5d84477dce7.scope. Feb 9 18:50:32.427102 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:50:32.448494 env[1122]: time="2024-02-09T18:50:32.448451059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fc86b718-383a-406a-90b1-770ddc8f1318,Namespace:default,Attempt:0,} returns sandbox id \"99a42614641095646743a8f28cd7669f67c0376e736f99a4412fa5d84477dce7\"" Feb 9 18:50:32.449645 env[1122]: time="2024-02-09T18:50:32.449625336Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 18:50:32.454755 systemd[1]: run-containerd-runc-k8s.io-99a42614641095646743a8f28cd7669f67c0376e736f99a4412fa5d84477dce7-runc.LcxBSZ.mount: Deactivated successfully. 
Feb 9 18:50:33.278110 kubelet[1392]: E0209 18:50:33.278060 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:34.144749 systemd-networkd[1026]: lxc550d711a5f99: Gained IPv6LL Feb 9 18:50:34.278691 kubelet[1392]: E0209 18:50:34.278643 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:35.279539 kubelet[1392]: E0209 18:50:35.279487 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:36.253976 kubelet[1392]: E0209 18:50:36.253937 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:36.280271 kubelet[1392]: E0209 18:50:36.280216 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:36.577259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848365928.mount: Deactivated successfully. Feb 9 18:50:37.281414 kubelet[1392]: E0209 18:50:37.281359 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:37.884157 update_engine[1113]: I0209 18:50:37.884095 1113 update_attempter.cc:509] Updating boot flags... 
Feb 9 18:50:38.282040 kubelet[1392]: E0209 18:50:38.282001 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:39.282329 kubelet[1392]: E0209 18:50:39.282266 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:39.318782 env[1122]: time="2024-02-09T18:50:39.318725307Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:39.320702 env[1122]: time="2024-02-09T18:50:39.320650824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:39.322318 env[1122]: time="2024-02-09T18:50:39.322291893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:39.323825 env[1122]: time="2024-02-09T18:50:39.323804329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:39.324397 env[1122]: time="2024-02-09T18:50:39.324366173Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 18:50:39.325962 env[1122]: time="2024-02-09T18:50:39.325919796Z" level=info msg="CreateContainer within sandbox \"99a42614641095646743a8f28cd7669f67c0376e736f99a4412fa5d84477dce7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 
18:50:39.337186 env[1122]: time="2024-02-09T18:50:39.337144422Z" level=info msg="CreateContainer within sandbox \"99a42614641095646743a8f28cd7669f67c0376e736f99a4412fa5d84477dce7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"293e0d5a3faa5c0f582df952865976a8a5ed4d55fc92b6180533e335422cbc2b\"" Feb 9 18:50:39.337642 env[1122]: time="2024-02-09T18:50:39.337604213Z" level=info msg="StartContainer for \"293e0d5a3faa5c0f582df952865976a8a5ed4d55fc92b6180533e335422cbc2b\"" Feb 9 18:50:39.352547 systemd[1]: Started cri-containerd-293e0d5a3faa5c0f582df952865976a8a5ed4d55fc92b6180533e335422cbc2b.scope. Feb 9 18:50:39.373125 env[1122]: time="2024-02-09T18:50:39.373080544Z" level=info msg="StartContainer for \"293e0d5a3faa5c0f582df952865976a8a5ed4d55fc92b6180533e335422cbc2b\" returns successfully" Feb 9 18:50:39.420823 kubelet[1392]: I0209 18:50:39.420778 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.5455095490000001 podCreationTimestamp="2024-02-09 18:50:31 +0000 UTC" firstStartedPulling="2024-02-09 18:50:32.44939624 +0000 UTC m=+36.547262748" lastFinishedPulling="2024-02-09 18:50:39.324630945 +0000 UTC m=+43.422497463" observedRunningTime="2024-02-09 18:50:39.420481596 +0000 UTC m=+43.518348114" watchObservedRunningTime="2024-02-09 18:50:39.420744264 +0000 UTC m=+43.518610772" Feb 9 18:50:40.282489 kubelet[1392]: E0209 18:50:40.282424 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:41.282773 kubelet[1392]: E0209 18:50:41.282699 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:42.283359 kubelet[1392]: E0209 18:50:42.283301 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:43.283824 kubelet[1392]: E0209 
18:50:43.283782 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:44.284220 kubelet[1392]: E0209 18:50:44.284171 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:45.284535 kubelet[1392]: E0209 18:50:45.284452 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:46.285571 kubelet[1392]: E0209 18:50:46.285523 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:47.286471 kubelet[1392]: E0209 18:50:47.286409 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:48.287387 kubelet[1392]: E0209 18:50:48.287330 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:48.720477 kubelet[1392]: I0209 18:50:48.720321 1392 topology_manager.go:215] "Topology Admit Handler" podUID="926d980b-6d33-4e8f-bfb8-95c21a860bf7" podNamespace="default" podName="test-pod-1" Feb 9 18:50:48.726845 systemd[1]: Created slice kubepods-besteffort-pod926d980b_6d33_4e8f_bfb8_95c21a860bf7.slice. 
Feb 9 18:50:49.127743 kubelet[1392]: I0209 18:50:49.127694 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b4f1af8-8947-4d9e-af24-a13a6be24626\" (UniqueName: \"kubernetes.io/nfs/926d980b-6d33-4e8f-bfb8-95c21a860bf7-pvc-2b4f1af8-8947-4d9e-af24-a13a6be24626\") pod \"test-pod-1\" (UID: \"926d980b-6d33-4e8f-bfb8-95c21a860bf7\") " pod="default/test-pod-1" Feb 9 18:50:49.127743 kubelet[1392]: I0209 18:50:49.127738 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n5qw\" (UniqueName: \"kubernetes.io/projected/926d980b-6d33-4e8f-bfb8-95c21a860bf7-kube-api-access-6n5qw\") pod \"test-pod-1\" (UID: \"926d980b-6d33-4e8f-bfb8-95c21a860bf7\") " pod="default/test-pod-1" Feb 9 18:50:49.250625 kernel: FS-Cache: Loaded Feb 9 18:50:49.284672 kernel: RPC: Registered named UNIX socket transport module. Feb 9 18:50:49.284791 kernel: RPC: Registered udp transport module. Feb 9 18:50:49.284817 kernel: RPC: Registered tcp transport module. Feb 9 18:50:49.284832 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 18:50:49.287933 kubelet[1392]: E0209 18:50:49.287907 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:49.348623 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 18:50:49.520678 kernel: NFS: Registering the id_resolver key type Feb 9 18:50:49.520833 kernel: Key type id_resolver registered Feb 9 18:50:49.520851 kernel: Key type id_legacy registered Feb 9 18:50:49.540941 nfsidmap[2717]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 18:50:49.544388 nfsidmap[2720]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 18:50:49.629888 env[1122]: time="2024-02-09T18:50:49.629819790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:926d980b-6d33-4e8f-bfb8-95c21a860bf7,Namespace:default,Attempt:0,}" Feb 9 18:50:49.723730 systemd-networkd[1026]: lxc86c9411c9d3a: Link UP Feb 9 18:50:49.732609 kernel: eth0: renamed from tmpf1b39 Feb 9 18:50:49.740937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:50:49.741064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc86c9411c9d3a: link becomes ready Feb 9 18:50:49.740994 systemd-networkd[1026]: lxc86c9411c9d3a: Gained carrier Feb 9 18:50:50.061407 env[1122]: time="2024-02-09T18:50:50.061337033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:50.061574 env[1122]: time="2024-02-09T18:50:50.061380336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:50.061574 env[1122]: time="2024-02-09T18:50:50.061392248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:50.061574 env[1122]: time="2024-02-09T18:50:50.061514328Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1b3979961481a4d4189500d7e1505b10e3f09ed418635ffb6aedd6dcfa52ec4 pid=2756 runtime=io.containerd.runc.v2 Feb 9 18:50:50.071972 systemd[1]: Started cri-containerd-f1b3979961481a4d4189500d7e1505b10e3f09ed418635ffb6aedd6dcfa52ec4.scope. Feb 9 18:50:50.083737 systemd-resolved[1069]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:50:50.105113 env[1122]: time="2024-02-09T18:50:50.105057419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:926d980b-6d33-4e8f-bfb8-95c21a860bf7,Namespace:default,Attempt:0,} returns sandbox id \"f1b3979961481a4d4189500d7e1505b10e3f09ed418635ffb6aedd6dcfa52ec4\"" Feb 9 18:50:50.106439 env[1122]: time="2024-02-09T18:50:50.106388478Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:50:50.288134 kubelet[1392]: E0209 18:50:50.288070 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:50.723978 env[1122]: time="2024-02-09T18:50:50.723932061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:50.725489 env[1122]: time="2024-02-09T18:50:50.725443248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:50.726891 env[1122]: time="2024-02-09T18:50:50.726860470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:50.728246 
env[1122]: time="2024-02-09T18:50:50.728219340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:50:50.728816 env[1122]: time="2024-02-09T18:50:50.728788844Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 18:50:50.730215 env[1122]: time="2024-02-09T18:50:50.730187359Z" level=info msg="CreateContainer within sandbox \"f1b3979961481a4d4189500d7e1505b10e3f09ed418635ffb6aedd6dcfa52ec4\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 18:50:50.743923 env[1122]: time="2024-02-09T18:50:50.743881488Z" level=info msg="CreateContainer within sandbox \"f1b3979961481a4d4189500d7e1505b10e3f09ed418635ffb6aedd6dcfa52ec4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"34e60d1ec392ede11469327c7a9145e53e7bab6670f8eb586c8eb45f58968cf0\"" Feb 9 18:50:50.744355 env[1122]: time="2024-02-09T18:50:50.744320926Z" level=info msg="StartContainer for \"34e60d1ec392ede11469327c7a9145e53e7bab6670f8eb586c8eb45f58968cf0\"" Feb 9 18:50:50.761703 systemd[1]: Started cri-containerd-34e60d1ec392ede11469327c7a9145e53e7bab6670f8eb586c8eb45f58968cf0.scope. 
Feb 9 18:50:50.792935 env[1122]: time="2024-02-09T18:50:50.792860263Z" level=info msg="StartContainer for \"34e60d1ec392ede11469327c7a9145e53e7bab6670f8eb586c8eb45f58968cf0\" returns successfully" Feb 9 18:50:50.848761 systemd-networkd[1026]: lxc86c9411c9d3a: Gained IPv6LL Feb 9 18:50:51.288477 kubelet[1392]: E0209 18:50:51.288406 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:51.438966 kubelet[1392]: I0209 18:50:51.438938 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.815991002 podCreationTimestamp="2024-02-09 18:50:31 +0000 UTC" firstStartedPulling="2024-02-09 18:50:50.106090446 +0000 UTC m=+54.203956964" lastFinishedPulling="2024-02-09 18:50:50.729006043 +0000 UTC m=+54.826872562" observedRunningTime="2024-02-09 18:50:51.438253369 +0000 UTC m=+55.536119887" watchObservedRunningTime="2024-02-09 18:50:51.4389066 +0000 UTC m=+55.536773118" Feb 9 18:50:52.289578 kubelet[1392]: E0209 18:50:52.289515 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:53.290439 kubelet[1392]: E0209 18:50:53.290357 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:54.291163 kubelet[1392]: E0209 18:50:54.291108 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:54.535255 env[1122]: time="2024-02-09T18:50:54.535199854Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:50:54.540010 env[1122]: time="2024-02-09T18:50:54.539975503Z" level=info msg="StopContainer for 
\"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" with timeout 2 (s)" Feb 9 18:50:54.540271 env[1122]: time="2024-02-09T18:50:54.540243969Z" level=info msg="Stop container \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" with signal terminated" Feb 9 18:50:54.546707 systemd-networkd[1026]: lxc_health: Link DOWN Feb 9 18:50:54.546713 systemd-networkd[1026]: lxc_health: Lost carrier Feb 9 18:50:54.583999 systemd[1]: cri-containerd-4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b.scope: Deactivated successfully. Feb 9 18:50:54.584269 systemd[1]: cri-containerd-4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b.scope: Consumed 5.945s CPU time. Feb 9 18:50:54.600150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b-rootfs.mount: Deactivated successfully. Feb 9 18:50:54.754270 env[1122]: time="2024-02-09T18:50:54.754218529Z" level=info msg="shim disconnected" id=4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b Feb 9 18:50:54.754270 env[1122]: time="2024-02-09T18:50:54.754266218Z" level=warning msg="cleaning up after shim disconnected" id=4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b namespace=k8s.io Feb 9 18:50:54.754270 env[1122]: time="2024-02-09T18:50:54.754275055Z" level=info msg="cleaning up dead shim" Feb 9 18:50:54.760510 env[1122]: time="2024-02-09T18:50:54.760477661Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n" Feb 9 18:50:54.863929 env[1122]: time="2024-02-09T18:50:54.863823420Z" level=info msg="StopContainer for \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" returns successfully" Feb 9 18:50:54.864697 env[1122]: time="2024-02-09T18:50:54.864645327Z" level=info msg="StopPodSandbox for 
\"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\"" Feb 9 18:50:54.864881 env[1122]: time="2024-02-09T18:50:54.864716813Z" level=info msg="Container to stop \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:50:54.864881 env[1122]: time="2024-02-09T18:50:54.864731881Z" level=info msg="Container to stop \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:50:54.864881 env[1122]: time="2024-02-09T18:50:54.864742010Z" level=info msg="Container to stop \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:50:54.864881 env[1122]: time="2024-02-09T18:50:54.864752439Z" level=info msg="Container to stop \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:50:54.864881 env[1122]: time="2024-02-09T18:50:54.864761807Z" level=info msg="Container to stop \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:50:54.866204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca-shm.mount: Deactivated successfully. Feb 9 18:50:54.869694 systemd[1]: cri-containerd-e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca.scope: Deactivated successfully. Feb 9 18:50:54.882409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca-rootfs.mount: Deactivated successfully. 
Feb 9 18:50:54.989805 env[1122]: time="2024-02-09T18:50:54.989758623Z" level=info msg="shim disconnected" id=e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca Feb 9 18:50:54.990012 env[1122]: time="2024-02-09T18:50:54.989971002Z" level=warning msg="cleaning up after shim disconnected" id=e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca namespace=k8s.io Feb 9 18:50:54.990012 env[1122]: time="2024-02-09T18:50:54.989992674Z" level=info msg="cleaning up dead shim" Feb 9 18:50:54.996241 env[1122]: time="2024-02-09T18:50:54.996201971Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2921 runtime=io.containerd.runc.v2\n" Feb 9 18:50:54.996534 env[1122]: time="2024-02-09T18:50:54.996511194Z" level=info msg="TearDown network for sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" successfully" Feb 9 18:50:54.996565 env[1122]: time="2024-02-09T18:50:54.996533316Z" level=info msg="StopPodSandbox for \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" returns successfully" Feb 9 18:50:55.161758 kubelet[1392]: I0209 18:50:55.161175 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cni-path\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.161758 kubelet[1392]: I0209 18:50:55.161213 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-kernel\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.161758 kubelet[1392]: I0209 18:50:55.161238 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsrmn\" 
(UniqueName: \"kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-kube-api-access-jsrmn\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.161758 kubelet[1392]: I0209 18:50:55.161257 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16358afd-f262-4a8e-8b8d-154c71a46ed4-clustermesh-secrets\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.161758 kubelet[1392]: I0209 18:50:55.161273 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-net\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.161758 kubelet[1392]: I0209 18:50:55.161283 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cni-path" (OuterVolumeSpecName: "cni-path") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162024 kubelet[1392]: I0209 18:50:55.161292 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-config-path\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162024 kubelet[1392]: I0209 18:50:55.161328 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162024 kubelet[1392]: I0209 18:50:55.161343 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-etc-cni-netd\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162024 kubelet[1392]: I0209 18:50:55.161430 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-hubble-tls\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162024 kubelet[1392]: I0209 18:50:55.161452 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-xtables-lock\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162024 kubelet[1392]: I0209 18:50:55.161470 1392 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-cgroup\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162170 kubelet[1392]: I0209 18:50:55.161487 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-hostproc\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162170 kubelet[1392]: I0209 18:50:55.161501 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-lib-modules\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162170 kubelet[1392]: I0209 18:50:55.161515 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-bpf-maps\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162170 kubelet[1392]: I0209 18:50:55.161534 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-run\") pod \"16358afd-f262-4a8e-8b8d-154c71a46ed4\" (UID: \"16358afd-f262-4a8e-8b8d-154c71a46ed4\") " Feb 9 18:50:55.162170 kubelet[1392]: I0209 18:50:55.161567 1392 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cni-path\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.162170 kubelet[1392]: I0209 18:50:55.161598 1392 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-kernel\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.162309 kubelet[1392]: I0209 18:50:55.161624 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162309 kubelet[1392]: I0209 18:50:55.161352 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162481 kubelet[1392]: I0209 18:50:55.162393 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162481 kubelet[1392]: I0209 18:50:55.162422 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162481 kubelet[1392]: I0209 18:50:55.162420 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-hostproc" (OuterVolumeSpecName: "hostproc") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162481 kubelet[1392]: I0209 18:50:55.162452 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162481 kubelet[1392]: I0209 18:50:55.162436 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.162637 kubelet[1392]: I0209 18:50:55.162470 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:55.163534 kubelet[1392]: I0209 18:50:55.163471 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:50:55.164836 systemd[1]: var-lib-kubelet-pods-16358afd\x2df262\x2d4a8e\x2d8b8d\x2d154c71a46ed4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:50:55.165037 kubelet[1392]: I0209 18:50:55.165014 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:50:55.165372 kubelet[1392]: I0209 18:50:55.165344 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-kube-api-access-jsrmn" (OuterVolumeSpecName: "kube-api-access-jsrmn") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "kube-api-access-jsrmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:50:55.165675 kubelet[1392]: I0209 18:50:55.165658 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16358afd-f262-4a8e-8b8d-154c71a46ed4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16358afd-f262-4a8e-8b8d-154c71a46ed4" (UID: "16358afd-f262-4a8e-8b8d-154c71a46ed4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:50:55.262141 kubelet[1392]: I0209 18:50:55.262123 1392 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-hubble-tls\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262141 kubelet[1392]: I0209 18:50:55.262142 1392 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-xtables-lock\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262161 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-cgroup\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262169 1392 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-hostproc\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262177 1392 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-lib-modules\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262185 1392 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-bpf-maps\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262195 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-run\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262205 1392 reconciler_common.go:300] "Volume detached 
for volume \"kube-api-access-jsrmn\" (UniqueName: \"kubernetes.io/projected/16358afd-f262-4a8e-8b8d-154c71a46ed4-kube-api-access-jsrmn\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262213 1392 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16358afd-f262-4a8e-8b8d-154c71a46ed4-clustermesh-secrets\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262327 kubelet[1392]: I0209 18:50:55.262222 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-host-proc-sys-net\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262499 kubelet[1392]: I0209 18:50:55.262231 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16358afd-f262-4a8e-8b8d-154c71a46ed4-cilium-config-path\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.262499 kubelet[1392]: I0209 18:50:55.262239 1392 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16358afd-f262-4a8e-8b8d-154c71a46ed4-etc-cni-netd\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:55.291416 kubelet[1392]: E0209 18:50:55.291402 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:55.440548 kubelet[1392]: I0209 18:50:55.440451 1392 scope.go:117] "RemoveContainer" containerID="4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b" Feb 9 18:50:55.442678 env[1122]: time="2024-02-09T18:50:55.442495175Z" level=info msg="RemoveContainer for \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\"" Feb 9 18:50:55.443948 systemd[1]: Removed slice kubepods-burstable-pod16358afd_f262_4a8e_8b8d_154c71a46ed4.slice. 
Feb 9 18:50:55.444046 systemd[1]: kubepods-burstable-pod16358afd_f262_4a8e_8b8d_154c71a46ed4.slice: Consumed 6.036s CPU time. Feb 9 18:50:55.446212 env[1122]: time="2024-02-09T18:50:55.446173708Z" level=info msg="RemoveContainer for \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" returns successfully" Feb 9 18:50:55.446454 kubelet[1392]: I0209 18:50:55.446433 1392 scope.go:117] "RemoveContainer" containerID="ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5" Feb 9 18:50:55.448425 env[1122]: time="2024-02-09T18:50:55.448380372Z" level=info msg="RemoveContainer for \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\"" Feb 9 18:50:55.451552 env[1122]: time="2024-02-09T18:50:55.451507377Z" level=info msg="RemoveContainer for \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\" returns successfully" Feb 9 18:50:55.451701 kubelet[1392]: I0209 18:50:55.451684 1392 scope.go:117] "RemoveContainer" containerID="69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d" Feb 9 18:50:55.452705 env[1122]: time="2024-02-09T18:50:55.452676046Z" level=info msg="RemoveContainer for \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\"" Feb 9 18:50:55.455454 env[1122]: time="2024-02-09T18:50:55.455418056Z" level=info msg="RemoveContainer for \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\" returns successfully" Feb 9 18:50:55.455578 kubelet[1392]: I0209 18:50:55.455554 1392 scope.go:117] "RemoveContainer" containerID="d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c" Feb 9 18:50:55.456367 env[1122]: time="2024-02-09T18:50:55.456331034Z" level=info msg="RemoveContainer for \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\"" Feb 9 18:50:55.458974 env[1122]: time="2024-02-09T18:50:55.458949212Z" level=info msg="RemoveContainer for \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\" returns successfully" Feb 9 18:50:55.459134 
kubelet[1392]: I0209 18:50:55.459110 1392 scope.go:117] "RemoveContainer" containerID="f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991" Feb 9 18:50:55.460058 env[1122]: time="2024-02-09T18:50:55.460033532Z" level=info msg="RemoveContainer for \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\"" Feb 9 18:50:55.462801 env[1122]: time="2024-02-09T18:50:55.462768539Z" level=info msg="RemoveContainer for \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\" returns successfully" Feb 9 18:50:55.462952 kubelet[1392]: I0209 18:50:55.462924 1392 scope.go:117] "RemoveContainer" containerID="4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b" Feb 9 18:50:55.463154 env[1122]: time="2024-02-09T18:50:55.463089443Z" level=error msg="ContainerStatus for \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\": not found" Feb 9 18:50:55.463321 kubelet[1392]: E0209 18:50:55.463296 1392 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\": not found" containerID="4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b" Feb 9 18:50:55.463438 kubelet[1392]: I0209 18:50:55.463416 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b"} err="failed to get container status \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4152005ee727e66ebf6fc57d983f86c14110d77efaf9332919852a603e4eed3b\": not found" Feb 9 18:50:55.463474 kubelet[1392]: I0209 18:50:55.463444 1392 
scope.go:117] "RemoveContainer" containerID="ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5" Feb 9 18:50:55.463837 env[1122]: time="2024-02-09T18:50:55.463761508Z" level=error msg="ContainerStatus for \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\": not found" Feb 9 18:50:55.463975 kubelet[1392]: E0209 18:50:55.463959 1392 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\": not found" containerID="ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5" Feb 9 18:50:55.464035 kubelet[1392]: I0209 18:50:55.463988 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5"} err="failed to get container status \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad693fcd548dd7fadd16efefd066273ee917ffdcc5c1cddee7b2aced96719ee5\": not found" Feb 9 18:50:55.464035 kubelet[1392]: I0209 18:50:55.464000 1392 scope.go:117] "RemoveContainer" containerID="69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d" Feb 9 18:50:55.464180 env[1122]: time="2024-02-09T18:50:55.464143517Z" level=error msg="ContainerStatus for \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\": not found" Feb 9 18:50:55.464284 kubelet[1392]: E0209 18:50:55.464270 1392 remote_runtime.go:432] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\": not found" containerID="69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d" Feb 9 18:50:55.464337 kubelet[1392]: I0209 18:50:55.464301 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d"} err="failed to get container status \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\": rpc error: code = NotFound desc = an error occurred when try to find container \"69e077af4e595a9024be6db4aaff48cfb4a1c8b0ae47e862bc24a81fc8c9e93d\": not found" Feb 9 18:50:55.464337 kubelet[1392]: I0209 18:50:55.464323 1392 scope.go:117] "RemoveContainer" containerID="d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c" Feb 9 18:50:55.464540 env[1122]: time="2024-02-09T18:50:55.464480862Z" level=error msg="ContainerStatus for \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\": not found" Feb 9 18:50:55.464642 kubelet[1392]: E0209 18:50:55.464630 1392 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\": not found" containerID="d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c" Feb 9 18:50:55.464691 kubelet[1392]: I0209 18:50:55.464649 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c"} err="failed to get container status \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"d24f8730aa833a81550143bd5cddbe4e337f1571692dc742f1508b325344975c\": not found" Feb 9 18:50:55.464691 kubelet[1392]: I0209 18:50:55.464656 1392 scope.go:117] "RemoveContainer" containerID="f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991" Feb 9 18:50:55.464831 env[1122]: time="2024-02-09T18:50:55.464782740Z" level=error msg="ContainerStatus for \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\": not found" Feb 9 18:50:55.464915 kubelet[1392]: E0209 18:50:55.464896 1392 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\": not found" containerID="f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991" Feb 9 18:50:55.464915 kubelet[1392]: I0209 18:50:55.464911 1392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991"} err="failed to get container status \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\": rpc error: code = NotFound desc = an error occurred when try to find container \"f90b702809ce31384c7c8e4f8a5e36fafa25eead6515d5e36917453dde77e991\": not found" Feb 9 18:50:55.524208 systemd[1]: var-lib-kubelet-pods-16358afd\x2df262\x2d4a8e\x2d8b8d\x2d154c71a46ed4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsrmn.mount: Deactivated successfully. Feb 9 18:50:55.524327 systemd[1]: var-lib-kubelet-pods-16358afd\x2df262\x2d4a8e\x2d8b8d\x2d154c71a46ed4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 18:50:56.253539 kubelet[1392]: E0209 18:50:56.253471 1392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:56.258209 env[1122]: time="2024-02-09T18:50:56.258163164Z" level=info msg="StopPodSandbox for \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\"" Feb 9 18:50:56.258604 env[1122]: time="2024-02-09T18:50:56.258256260Z" level=info msg="TearDown network for sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" successfully" Feb 9 18:50:56.258604 env[1122]: time="2024-02-09T18:50:56.258304139Z" level=info msg="StopPodSandbox for \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" returns successfully" Feb 9 18:50:56.259337 env[1122]: time="2024-02-09T18:50:56.259302789Z" level=info msg="RemovePodSandbox for \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\"" Feb 9 18:50:56.259425 env[1122]: time="2024-02-09T18:50:56.259337213Z" level=info msg="Forcibly stopping sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\"" Feb 9 18:50:56.259425 env[1122]: time="2024-02-09T18:50:56.259406765Z" level=info msg="TearDown network for sandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" successfully" Feb 9 18:50:56.261981 env[1122]: time="2024-02-09T18:50:56.261956652Z" level=info msg="RemovePodSandbox \"e974ad33e263e9e1ffbbcb577bd90d7a49392998414708becda4caea08d41cca\" returns successfully" Feb 9 18:50:56.292340 kubelet[1392]: E0209 18:50:56.292304 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:56.320436 kubelet[1392]: E0209 18:50:56.320376 1392 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:50:56.335070 kubelet[1392]: I0209 18:50:56.335046 1392 
kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" path="/var/lib/kubelet/pods/16358afd-f262-4a8e-8b8d-154c71a46ed4/volumes" Feb 9 18:50:56.938511 kubelet[1392]: I0209 18:50:56.938466 1392 topology_manager.go:215] "Topology Admit Handler" podUID="c9066b0d-b02f-495b-bc2c-65adc70852b4" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-485z9" Feb 9 18:50:56.938710 kubelet[1392]: E0209 18:50:56.938526 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" containerName="apply-sysctl-overwrites" Feb 9 18:50:56.938710 kubelet[1392]: E0209 18:50:56.938538 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" containerName="cilium-agent" Feb 9 18:50:56.938710 kubelet[1392]: E0209 18:50:56.938544 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" containerName="mount-cgroup" Feb 9 18:50:56.938710 kubelet[1392]: E0209 18:50:56.938551 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" containerName="mount-bpf-fs" Feb 9 18:50:56.938710 kubelet[1392]: E0209 18:50:56.938557 1392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" containerName="clean-cilium-state" Feb 9 18:50:56.938710 kubelet[1392]: I0209 18:50:56.938574 1392 memory_manager.go:346] "RemoveStaleState removing state" podUID="16358afd-f262-4a8e-8b8d-154c71a46ed4" containerName="cilium-agent" Feb 9 18:50:56.942897 systemd[1]: Created slice kubepods-besteffort-podc9066b0d_b02f_495b_bc2c_65adc70852b4.slice. 
Feb 9 18:50:56.953936 kubelet[1392]: I0209 18:50:56.953899 1392 topology_manager.go:215] "Topology Admit Handler" podUID="70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" podNamespace="kube-system" podName="cilium-9mdc6" Feb 9 18:50:56.956793 kubelet[1392]: W0209 18:50:56.956772 1392 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.31' and this object Feb 9 18:50:56.956793 kubelet[1392]: E0209 18:50:56.956797 1392 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.31' and this object Feb 9 18:50:56.956957 kubelet[1392]: W0209 18:50:56.956890 1392 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.31' and this object Feb 9 18:50:56.956957 kubelet[1392]: E0209 18:50:56.956909 1392 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.31' and this object Feb 9 18:50:56.957231 kubelet[1392]: W0209 18:50:56.957209 1392 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.31" cannot list resource "secrets" in API group 
"" in the namespace "kube-system": no relationship found between node '10.0.0.31' and this object Feb 9 18:50:56.957309 kubelet[1392]: E0209 18:50:56.957245 1392 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.31' and this object Feb 9 18:50:56.958373 systemd[1]: Created slice kubepods-burstable-pod70c3ebd5_1480_41d1_9cd0_28a3aacaa9f7.slice. Feb 9 18:50:57.070482 kubelet[1392]: I0209 18:50:57.070414 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-cgroup\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070482 kubelet[1392]: I0209 18:50:57.070467 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-xtables-lock\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070482 kubelet[1392]: I0209 18:50:57.070486 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-ipsec-secrets\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070752 kubelet[1392]: I0209 18:50:57.070506 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-kernel\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070752 kubelet[1392]: I0209 18:50:57.070626 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-etc-cni-netd\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070752 kubelet[1392]: I0209 18:50:57.070652 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-config-path\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070752 kubelet[1392]: I0209 18:50:57.070669 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8qjj\" (UniqueName: \"kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-kube-api-access-f8qjj\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070752 kubelet[1392]: I0209 18:50:57.070711 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9066b0d-b02f-495b-bc2c-65adc70852b4-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-485z9\" (UID: \"c9066b0d-b02f-495b-bc2c-65adc70852b4\") " pod="kube-system/cilium-operator-6bc8ccdb58-485z9" Feb 9 18:50:57.070914 kubelet[1392]: I0209 18:50:57.070732 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-run\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070914 kubelet[1392]: I0209 18:50:57.070762 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-bpf-maps\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070914 kubelet[1392]: I0209 18:50:57.070831 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hostproc\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070914 kubelet[1392]: I0209 18:50:57.070880 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-clustermesh-secrets\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.070914 kubelet[1392]: I0209 18:50:57.070908 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-net\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.071060 kubelet[1392]: I0209 18:50:57.070957 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cni-path\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " 
pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.071060 kubelet[1392]: I0209 18:50:57.070994 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-lib-modules\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.071060 kubelet[1392]: I0209 18:50:57.071017 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hubble-tls\") pod \"cilium-9mdc6\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " pod="kube-system/cilium-9mdc6" Feb 9 18:50:57.071060 kubelet[1392]: I0209 18:50:57.071045 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6n4g\" (UniqueName: \"kubernetes.io/projected/c9066b0d-b02f-495b-bc2c-65adc70852b4-kube-api-access-p6n4g\") pod \"cilium-operator-6bc8ccdb58-485z9\" (UID: \"c9066b0d-b02f-495b-bc2c-65adc70852b4\") " pod="kube-system/cilium-operator-6bc8ccdb58-485z9" Feb 9 18:50:57.093492 kubelet[1392]: E0209 18:50:57.093469 1392 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-f8qjj lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-9mdc6" podUID="70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" Feb 9 18:50:57.292633 kubelet[1392]: E0209 18:50:57.292600 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:57.545483 kubelet[1392]: E0209 18:50:57.545389 1392 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:57.545866 env[1122]: time="2024-02-09T18:50:57.545821869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-485z9,Uid:c9066b0d-b02f-495b-bc2c-65adc70852b4,Namespace:kube-system,Attempt:0,}" Feb 9 18:50:57.575004 kubelet[1392]: I0209 18:50:57.574959 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-lib-modules\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575004 kubelet[1392]: I0209 18:50:57.574993 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-kernel\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575004 kubelet[1392]: I0209 18:50:57.575018 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-config-path\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575220 kubelet[1392]: I0209 18:50:57.575035 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-run\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575220 kubelet[1392]: I0209 18:50:57.575069 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-lib-modules" 
(OuterVolumeSpecName: "lib-modules") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.575220 kubelet[1392]: I0209 18:50:57.575092 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.575220 kubelet[1392]: I0209 18:50:57.575178 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.575677 kubelet[1392]: I0209 18:50:57.575647 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-cgroup\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575677 kubelet[1392]: I0209 18:50:57.575679 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-xtables-lock\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575677 kubelet[1392]: I0209 18:50:57.575697 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-etc-cni-netd\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575872 kubelet[1392]: I0209 18:50:57.575713 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-bpf-maps\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575872 kubelet[1392]: I0209 18:50:57.575730 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-net\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575872 kubelet[1392]: I0209 18:50:57.575746 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cni-path\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.575872 kubelet[1392]: I0209 18:50:57.575741 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.575872 kubelet[1392]: I0209 18:50:57.575770 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8qjj\" (UniqueName: \"kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-kube-api-access-f8qjj\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.576023 kubelet[1392]: I0209 18:50:57.575775 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.576023 kubelet[1392]: I0209 18:50:57.575784 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hostproc\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.576023 kubelet[1392]: I0209 18:50:57.575792 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.576023 kubelet[1392]: I0209 18:50:57.575806 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.576023 kubelet[1392]: I0209 18:50:57.575856 1392 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-lib-modules\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576023 kubelet[1392]: I0209 18:50:57.575870 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-kernel\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575879 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-run\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575888 1392 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-host-proc-sys-net\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575896 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-cgroup\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575904 1392 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-etc-cni-netd\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575911 1392 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-bpf-maps\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575934 1392 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hostproc" (OuterVolumeSpecName: "hostproc") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.576163 kubelet[1392]: I0209 18:50:57.575949 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cni-path" (OuterVolumeSpecName: "cni-path") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.576321 kubelet[1392]: I0209 18:50:57.576144 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:50:57.576755 kubelet[1392]: I0209 18:50:57.576726 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:50:57.577981 kubelet[1392]: I0209 18:50:57.577958 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-kube-api-access-f8qjj" (OuterVolumeSpecName: "kube-api-access-f8qjj") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "kube-api-access-f8qjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:50:57.578978 systemd[1]: var-lib-kubelet-pods-70c3ebd5\x2d1480\x2d41d1\x2d9cd0\x2d28a3aacaa9f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df8qjj.mount: Deactivated successfully. Feb 9 18:50:57.642697 env[1122]: time="2024-02-09T18:50:57.642620698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:57.642697 env[1122]: time="2024-02-09T18:50:57.642661395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:57.642697 env[1122]: time="2024-02-09T18:50:57.642672135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:57.642901 env[1122]: time="2024-02-09T18:50:57.642807820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/428cc5a659f61519fe27f7e361a2d656ec125579cf3b934e2f67f24fbdee4455 pid=2948 runtime=io.containerd.runc.v2 Feb 9 18:50:57.654703 systemd[1]: Started cri-containerd-428cc5a659f61519fe27f7e361a2d656ec125579cf3b934e2f67f24fbdee4455.scope. 
Feb 9 18:50:57.676653 kubelet[1392]: I0209 18:50:57.676565 1392 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f8qjj\" (UniqueName: \"kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-kube-api-access-f8qjj\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.676653 kubelet[1392]: I0209 18:50:57.676612 1392 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hostproc\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.676653 kubelet[1392]: I0209 18:50:57.676623 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-config-path\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.676653 kubelet[1392]: I0209 18:50:57.676631 1392 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-xtables-lock\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.676653 kubelet[1392]: I0209 18:50:57.676639 1392 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cni-path\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:57.685443 env[1122]: time="2024-02-09T18:50:57.685398909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-485z9,Uid:c9066b0d-b02f-495b-bc2c-65adc70852b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"428cc5a659f61519fe27f7e361a2d656ec125579cf3b934e2f67f24fbdee4455\"" Feb 9 18:50:57.686077 kubelet[1392]: E0209 18:50:57.686056 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:57.687225 env[1122]: time="2024-02-09T18:50:57.687199305Z" level=info 
msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:50:57.878400 kubelet[1392]: I0209 18:50:57.878293 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-ipsec-secrets\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:57.880959 kubelet[1392]: I0209 18:50:57.880902 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:50:57.978931 kubelet[1392]: I0209 18:50:57.978899 1392 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-cilium-ipsec-secrets\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:58.079366 kubelet[1392]: I0209 18:50:58.079326 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-clustermesh-secrets\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:58.079366 kubelet[1392]: I0209 18:50:58.079363 1392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hubble-tls\") pod \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\" (UID: \"70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7\") " Feb 9 18:50:58.081866 kubelet[1392]: I0209 18:50:58.081830 1392 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:50:58.082013 kubelet[1392]: I0209 18:50:58.081998 1392 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" (UID: "70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:50:58.180542 kubelet[1392]: I0209 18:50:58.180401 1392 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-clustermesh-secrets\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:58.180542 kubelet[1392]: I0209 18:50:58.180447 1392 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7-hubble-tls\") on node \"10.0.0.31\" DevicePath \"\"" Feb 9 18:50:58.293565 kubelet[1392]: E0209 18:50:58.293524 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:58.337848 systemd[1]: Removed slice kubepods-burstable-pod70c3ebd5_1480_41d1_9cd0_28a3aacaa9f7.slice. Feb 9 18:50:58.916617 kubelet[1392]: I0209 18:50:58.916564 1392 topology_manager.go:215] "Topology Admit Handler" podUID="dc77e373-bf07-48df-b2a9-3a51f921900c" podNamespace="kube-system" podName="cilium-gn8cn" Feb 9 18:50:58.921407 systemd[1]: Created slice kubepods-burstable-poddc77e373_bf07_48df_b2a9_3a51f921900c.slice. 
Feb 9 18:50:58.957057 kubelet[1392]: I0209 18:50:58.957015 1392 setters.go:552] "Node became not ready" node="10.0.0.31" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T18:50:58Z","lastTransitionTime":"2024-02-09T18:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 18:50:59.086213 kubelet[1392]: I0209 18:50:59.086181 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-bpf-maps\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086390 kubelet[1392]: I0209 18:50:59.086231 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrf47\" (UniqueName: \"kubernetes.io/projected/dc77e373-bf07-48df-b2a9-3a51f921900c-kube-api-access-rrf47\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086390 kubelet[1392]: I0209 18:50:59.086331 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc77e373-bf07-48df-b2a9-3a51f921900c-cilium-config-path\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086390 kubelet[1392]: I0209 18:50:59.086377 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-cilium-cgroup\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086486 kubelet[1392]: I0209 
18:50:59.086406 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-cni-path\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086486 kubelet[1392]: I0209 18:50:59.086442 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-etc-cni-netd\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086486 kubelet[1392]: I0209 18:50:59.086467 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc77e373-bf07-48df-b2a9-3a51f921900c-cilium-ipsec-secrets\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086576 kubelet[1392]: I0209 18:50:59.086496 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-hostproc\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086576 kubelet[1392]: I0209 18:50:59.086531 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-xtables-lock\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086576 kubelet[1392]: I0209 18:50:59.086556 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/dc77e373-bf07-48df-b2a9-3a51f921900c-clustermesh-secrets\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086683 kubelet[1392]: I0209 18:50:59.086579 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-host-proc-sys-net\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086683 kubelet[1392]: I0209 18:50:59.086637 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc77e373-bf07-48df-b2a9-3a51f921900c-hubble-tls\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086683 kubelet[1392]: I0209 18:50:59.086663 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-cilium-run\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086766 kubelet[1392]: I0209 18:50:59.086686 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-lib-modules\") pod \"cilium-gn8cn\" (UID: \"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.086766 kubelet[1392]: I0209 18:50:59.086710 1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc77e373-bf07-48df-b2a9-3a51f921900c-host-proc-sys-kernel\") pod \"cilium-gn8cn\" (UID: 
\"dc77e373-bf07-48df-b2a9-3a51f921900c\") " pod="kube-system/cilium-gn8cn" Feb 9 18:50:59.227298 kubelet[1392]: E0209 18:50:59.227182 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:59.227995 env[1122]: time="2024-02-09T18:50:59.227940570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gn8cn,Uid:dc77e373-bf07-48df-b2a9-3a51f921900c,Namespace:kube-system,Attempt:0,}" Feb 9 18:50:59.243224 env[1122]: time="2024-02-09T18:50:59.243164006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:50:59.243224 env[1122]: time="2024-02-09T18:50:59.243207247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:50:59.243355 env[1122]: time="2024-02-09T18:50:59.243223698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:50:59.243402 env[1122]: time="2024-02-09T18:50:59.243347571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f pid=2999 runtime=io.containerd.runc.v2 Feb 9 18:50:59.256952 systemd[1]: Started cri-containerd-f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f.scope. 
Feb 9 18:50:59.276438 env[1122]: time="2024-02-09T18:50:59.276399471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gn8cn,Uid:dc77e373-bf07-48df-b2a9-3a51f921900c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\"" Feb 9 18:50:59.277125 kubelet[1392]: E0209 18:50:59.277102 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:50:59.280466 env[1122]: time="2024-02-09T18:50:59.280417146Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:50:59.293714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708795890.mount: Deactivated successfully. Feb 9 18:50:59.298031 kubelet[1392]: E0209 18:50:59.297823 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:50:59.297427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760381452.mount: Deactivated successfully. Feb 9 18:50:59.303152 env[1122]: time="2024-02-09T18:50:59.303100469Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8\"" Feb 9 18:50:59.303954 env[1122]: time="2024-02-09T18:50:59.303927514Z" level=info msg="StartContainer for \"cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8\"" Feb 9 18:50:59.322203 systemd[1]: Started cri-containerd-cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8.scope. 
Feb 9 18:50:59.356262 systemd[1]: cri-containerd-cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8.scope: Deactivated successfully.
Feb 9 18:50:59.391186 env[1122]: time="2024-02-09T18:50:59.391137912Z" level=info msg="StartContainer for \"cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8\" returns successfully"
Feb 9 18:50:59.414647 env[1122]: time="2024-02-09T18:50:59.414597485Z" level=info msg="shim disconnected" id=cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8
Feb 9 18:50:59.414647 env[1122]: time="2024-02-09T18:50:59.414637710Z" level=warning msg="cleaning up after shim disconnected" id=cac555d93115310d5dbf478e247cb9afbd55b5be68ff380ec61866f2b3a124c8 namespace=k8s.io
Feb 9 18:50:59.414647 env[1122]: time="2024-02-09T18:50:59.414645926Z" level=info msg="cleaning up dead shim"
Feb 9 18:50:59.420895 env[1122]: time="2024-02-09T18:50:59.420856594Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3081 runtime=io.containerd.runc.v2\n"
Feb 9 18:50:59.450076 kubelet[1392]: E0209 18:50:59.450042 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:50:59.451393 env[1122]: time="2024-02-09T18:50:59.451365052Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:50:59.461609 env[1122]: time="2024-02-09T18:50:59.461566734Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be\""
Feb 9 18:50:59.461939 env[1122]: time="2024-02-09T18:50:59.461897306Z" level=info msg="StartContainer for \"d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be\""
Feb 9 18:50:59.478648 systemd[1]: Started cri-containerd-d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be.scope.
Feb 9 18:50:59.504946 env[1122]: time="2024-02-09T18:50:59.504896661Z" level=info msg="StartContainer for \"d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be\" returns successfully"
Feb 9 18:50:59.505252 systemd[1]: cri-containerd-d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be.scope: Deactivated successfully.
Feb 9 18:50:59.534505 env[1122]: time="2024-02-09T18:50:59.534457456Z" level=info msg="shim disconnected" id=d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be
Feb 9 18:50:59.534505 env[1122]: time="2024-02-09T18:50:59.534501419Z" level=warning msg="cleaning up after shim disconnected" id=d6decd4fdd158640ea913d917fed7ddbb4c25576c3d1c938befa3b0aca8a80be namespace=k8s.io
Feb 9 18:50:59.534505 env[1122]: time="2024-02-09T18:50:59.534510385Z" level=info msg="cleaning up dead shim"
Feb 9 18:50:59.541776 env[1122]: time="2024-02-09T18:50:59.541752513Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3142 runtime=io.containerd.runc.v2\n"
Feb 9 18:51:00.054976 env[1122]: time="2024-02-09T18:51:00.054923497Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:51:00.056525 env[1122]: time="2024-02-09T18:51:00.056502245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:51:00.057880 env[1122]: time="2024-02-09T18:51:00.057854247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:51:00.058329 env[1122]: time="2024-02-09T18:51:00.058295987Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 18:51:00.059860 env[1122]: time="2024-02-09T18:51:00.059818610Z" level=info msg="CreateContainer within sandbox \"428cc5a659f61519fe27f7e361a2d656ec125579cf3b934e2f67f24fbdee4455\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 18:51:00.069538 env[1122]: time="2024-02-09T18:51:00.069493438Z" level=info msg="CreateContainer within sandbox \"428cc5a659f61519fe27f7e361a2d656ec125579cf3b934e2f67f24fbdee4455\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cd16e36b0842efdffa9aa9e78b101e84b2f0a9f6f50221a64c7fde3210642baa\""
Feb 9 18:51:00.069872 env[1122]: time="2024-02-09T18:51:00.069851301Z" level=info msg="StartContainer for \"cd16e36b0842efdffa9aa9e78b101e84b2f0a9f6f50221a64c7fde3210642baa\""
Feb 9 18:51:00.082000 systemd[1]: Started cri-containerd-cd16e36b0842efdffa9aa9e78b101e84b2f0a9f6f50221a64c7fde3210642baa.scope.
Feb 9 18:51:00.251117 env[1122]: time="2024-02-09T18:51:00.251057005Z" level=info msg="StartContainer for \"cd16e36b0842efdffa9aa9e78b101e84b2f0a9f6f50221a64c7fde3210642baa\" returns successfully"
Feb 9 18:51:00.298394 kubelet[1392]: E0209 18:51:00.298361 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:00.336063 kubelet[1392]: I0209 18:51:00.335972 1392 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7" path="/var/lib/kubelet/pods/70c3ebd5-1480-41d1-9cd0-28a3aacaa9f7/volumes"
Feb 9 18:51:00.452960 kubelet[1392]: E0209 18:51:00.452936 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:00.454480 kubelet[1392]: E0209 18:51:00.454455 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:00.456289 env[1122]: time="2024-02-09T18:51:00.456235739Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:51:00.460195 kubelet[1392]: I0209 18:51:00.460137 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-485z9" podStartSLOduration=2.088467426 podCreationTimestamp="2024-02-09 18:50:56 +0000 UTC" firstStartedPulling="2024-02-09 18:50:57.686941832 +0000 UTC m=+61.784808350" lastFinishedPulling="2024-02-09 18:51:00.05856367 +0000 UTC m=+64.156430188" observedRunningTime="2024-02-09 18:51:00.459720179 +0000 UTC m=+64.557586728" watchObservedRunningTime="2024-02-09 18:51:00.460089264 +0000 UTC m=+64.557955782"
Feb 9 18:51:00.469491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849475374.mount: Deactivated successfully.
Feb 9 18:51:00.472796 env[1122]: time="2024-02-09T18:51:00.472744585Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03\""
Feb 9 18:51:00.473355 env[1122]: time="2024-02-09T18:51:00.473278950Z" level=info msg="StartContainer for \"da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03\""
Feb 9 18:51:00.490484 systemd[1]: Started cri-containerd-da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03.scope.
Feb 9 18:51:00.513665 systemd[1]: cri-containerd-da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03.scope: Deactivated successfully.
Feb 9 18:51:00.525884 env[1122]: time="2024-02-09T18:51:00.525847741Z" level=info msg="StartContainer for \"da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03\" returns successfully"
Feb 9 18:51:00.545916 env[1122]: time="2024-02-09T18:51:00.545855455Z" level=info msg="shim disconnected" id=da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03
Feb 9 18:51:00.545916 env[1122]: time="2024-02-09T18:51:00.545899008Z" level=warning msg="cleaning up after shim disconnected" id=da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03 namespace=k8s.io
Feb 9 18:51:00.545916 env[1122]: time="2024-02-09T18:51:00.545906833Z" level=info msg="cleaning up dead shim"
Feb 9 18:51:00.551782 env[1122]: time="2024-02-09T18:51:00.551723087Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:51:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3240 runtime=io.containerd.runc.v2\n"
Feb 9 18:51:01.282805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da5fc0a3c7bc233dd0c38863b6f72a5c2c0a1bf1c15e572374ad7a0374a9ff03-rootfs.mount: Deactivated successfully.
Feb 9 18:51:01.298495 kubelet[1392]: E0209 18:51:01.298459 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:01.321309 kubelet[1392]: E0209 18:51:01.321288 1392 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:51:01.457567 kubelet[1392]: E0209 18:51:01.457536 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:01.457755 kubelet[1392]: E0209 18:51:01.457639 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:01.459228 env[1122]: time="2024-02-09T18:51:01.459169415Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:51:01.640034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406774429.mount: Deactivated successfully.
Feb 9 18:51:01.775397 env[1122]: time="2024-02-09T18:51:01.775345285Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f\""
Feb 9 18:51:01.775881 env[1122]: time="2024-02-09T18:51:01.775856136Z" level=info msg="StartContainer for \"416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f\""
Feb 9 18:51:01.790803 systemd[1]: Started cri-containerd-416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f.scope.
Feb 9 18:51:01.813623 systemd[1]: cri-containerd-416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f.scope: Deactivated successfully.
Feb 9 18:51:01.904493 env[1122]: time="2024-02-09T18:51:01.904101653Z" level=info msg="StartContainer for \"416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f\" returns successfully"
Feb 9 18:51:01.937645 env[1122]: time="2024-02-09T18:51:01.937596043Z" level=info msg="shim disconnected" id=416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f
Feb 9 18:51:01.937645 env[1122]: time="2024-02-09T18:51:01.937640306Z" level=warning msg="cleaning up after shim disconnected" id=416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f namespace=k8s.io
Feb 9 18:51:01.937645 env[1122]: time="2024-02-09T18:51:01.937648581Z" level=info msg="cleaning up dead shim"
Feb 9 18:51:01.943545 env[1122]: time="2024-02-09T18:51:01.943513346Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:51:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3294 runtime=io.containerd.runc.v2\n"
Feb 9 18:51:02.282326 systemd[1]: run-containerd-runc-k8s.io-416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f-runc.Q9omNT.mount: Deactivated successfully.
Feb 9 18:51:02.282406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416cf286ae120125a94515fd7b105a78684bd4d219a1cc5ee686a9b59f84c77f-rootfs.mount: Deactivated successfully.
Feb 9 18:51:02.299107 kubelet[1392]: E0209 18:51:02.299077 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:02.462522 kubelet[1392]: E0209 18:51:02.462493 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:02.464780 env[1122]: time="2024-02-09T18:51:02.464742244Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:51:02.670987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355683770.mount: Deactivated successfully.
Feb 9 18:51:02.675325 env[1122]: time="2024-02-09T18:51:02.675288149Z" level=info msg="CreateContainer within sandbox \"f907329753dedbee6843394eddad3275ff72aada8179f4bf34741d233a39d61f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0c0978b34b8f572587c4b5199f96bd0e4519769f75142f1d9f32d3800dc05854\""
Feb 9 18:51:02.675871 env[1122]: time="2024-02-09T18:51:02.675827423Z" level=info msg="StartContainer for \"0c0978b34b8f572587c4b5199f96bd0e4519769f75142f1d9f32d3800dc05854\""
Feb 9 18:51:02.688855 systemd[1]: Started cri-containerd-0c0978b34b8f572587c4b5199f96bd0e4519769f75142f1d9f32d3800dc05854.scope.
Feb 9 18:51:02.710840 env[1122]: time="2024-02-09T18:51:02.710795393Z" level=info msg="StartContainer for \"0c0978b34b8f572587c4b5199f96bd0e4519769f75142f1d9f32d3800dc05854\" returns successfully"
Feb 9 18:51:02.930630 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 18:51:03.299459 kubelet[1392]: E0209 18:51:03.299419 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:03.466770 kubelet[1392]: E0209 18:51:03.466741 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:03.676276 kubelet[1392]: I0209 18:51:03.676160 1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gn8cn" podStartSLOduration=5.676116234 podCreationTimestamp="2024-02-09 18:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:51:03.675942908 +0000 UTC m=+67.773809426" watchObservedRunningTime="2024-02-09 18:51:03.676116234 +0000 UTC m=+67.773982752"
Feb 9 18:51:04.299571 kubelet[1392]: E0209 18:51:04.299527 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:05.228960 kubelet[1392]: E0209 18:51:05.228932 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:05.300095 kubelet[1392]: E0209 18:51:05.300054 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:05.341102 systemd-networkd[1026]: lxc_health: Link UP
Feb 9 18:51:05.354175 systemd-networkd[1026]: lxc_health: Gained carrier
Feb 9 18:51:05.354608 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:51:05.591675 systemd[1]: run-containerd-runc-k8s.io-0c0978b34b8f572587c4b5199f96bd0e4519769f75142f1d9f32d3800dc05854-runc.Qbuxih.mount: Deactivated successfully.
Feb 9 18:51:06.300439 kubelet[1392]: E0209 18:51:06.300390 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:07.104733 systemd-networkd[1026]: lxc_health: Gained IPv6LL
Feb 9 18:51:07.228842 kubelet[1392]: E0209 18:51:07.228809 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:07.300882 kubelet[1392]: E0209 18:51:07.300831 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:07.472876 kubelet[1392]: E0209 18:51:07.472770 1392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:51:08.301253 kubelet[1392]: E0209 18:51:08.301206 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:09.302197 kubelet[1392]: E0209 18:51:09.302142 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:10.303256 kubelet[1392]: E0209 18:51:10.303209 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:11.303893 kubelet[1392]: E0209 18:51:11.303836 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:11.863795 systemd[1]: run-containerd-runc-k8s.io-0c0978b34b8f572587c4b5199f96bd0e4519769f75142f1d9f32d3800dc05854-runc.Se1H7X.mount: Deactivated successfully.
Feb 9 18:51:12.304907 kubelet[1392]: E0209 18:51:12.304862 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:51:13.305226 kubelet[1392]: E0209 18:51:13.305181 1392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"