Feb 9 18:58:37.782178 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 18:58:37.782204 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:58:37.782215 kernel: BIOS-provided physical RAM map:
Feb 9 18:58:37.782223 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 18:58:37.782230 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 18:58:37.782238 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 18:58:37.782247 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 9 18:58:37.782256 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 9 18:58:37.782265 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 9 18:58:37.782273 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 18:58:37.782281 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 9 18:58:37.782288 kernel: NX (Execute Disable) protection: active
Feb 9 18:58:37.782296 kernel: SMBIOS 2.8 present.
Feb 9 18:58:37.782304 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 9 18:58:37.782316 kernel: Hypervisor detected: KVM
Feb 9 18:58:37.782324 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 18:58:37.782333 kernel: kvm-clock: cpu 0, msr 44faa001, primary cpu clock
Feb 9 18:58:37.782782 kernel: kvm-clock: using sched offset of 2119776661 cycles
Feb 9 18:58:37.782799 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 18:58:37.782809 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 18:58:37.782818 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 18:58:37.782827 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 18:58:37.782836 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 9 18:58:37.782848 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 18:58:37.782857 kernel: Using GB pages for direct mapping
Feb 9 18:58:37.782865 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:58:37.782874 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 9 18:58:37.784664 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:58:37.784677 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:58:37.784687 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:58:37.784695 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 9 18:58:37.784704 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:58:37.784716 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:58:37.784724 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 18:58:37.784733 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 9 18:58:37.784742 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 9 18:58:37.784751 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 9 18:58:37.784759 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 9 18:58:37.784767 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 9 18:58:37.784776 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 9 18:58:37.784790 kernel: No NUMA configuration found
Feb 9 18:58:37.784799 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 9 18:58:37.784808 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 9 18:58:37.784818 kernel: Zone ranges:
Feb 9 18:58:37.784827 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 18:58:37.784836 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 9 18:58:37.784847 kernel: Normal empty
Feb 9 18:58:37.784856 kernel: Movable zone start for each node
Feb 9 18:58:37.784866 kernel: Early memory node ranges
Feb 9 18:58:37.784874 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 18:58:37.784884 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 9 18:58:37.784893 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 9 18:58:37.784905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 18:58:37.784915 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 18:58:37.784926 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 9 18:58:37.784938 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 9 18:58:37.784947 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 18:58:37.784956 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 18:58:37.784965 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 18:58:37.784974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 18:58:37.784983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 18:58:37.784992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 18:58:37.785001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 18:58:37.785011 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 18:58:37.785021 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 18:58:37.785030 kernel: TSC deadline timer available
Feb 9 18:58:37.785039 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 18:58:37.785048 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 18:58:37.785067 kernel: kvm-guest: setup PV sched yield
Feb 9 18:58:37.785076 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 9 18:58:37.785085 kernel: Booting paravirtualized kernel on KVM
Feb 9 18:58:37.785095 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 18:58:37.785104 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 18:58:37.785115 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 18:58:37.785124 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 18:58:37.785133 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 18:58:37.785142 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 18:58:37.785151 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 9 18:58:37.785160 kernel: kvm-guest: PV spinlocks enabled
Feb 9 18:58:37.785170 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 18:58:37.785179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 9 18:58:37.785188 kernel: Policy zone: DMA32
Feb 9 18:58:37.785199 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:58:37.785210 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:58:37.785220 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:58:37.785229 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:58:37.785238 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:58:37.785248 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 9 18:58:37.785257 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 18:58:37.785267 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 18:58:37.785276 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 18:58:37.785287 kernel: rcu: Hierarchical RCU implementation.
Feb 9 18:58:37.785297 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:58:37.785307 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 18:58:37.785316 kernel: Rude variant of Tasks RCU enabled.
Feb 9 18:58:37.785325 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:58:37.785335 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:58:37.785344 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 18:58:37.785353 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 18:58:37.785362 kernel: random: crng init done
Feb 9 18:58:37.785373 kernel: Console: colour VGA+ 80x25
Feb 9 18:58:37.785382 kernel: printk: console [ttyS0] enabled
Feb 9 18:58:37.785391 kernel: ACPI: Core revision 20210730
Feb 9 18:58:37.785400 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 18:58:37.785409 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 18:58:37.785418 kernel: x2apic enabled
Feb 9 18:58:37.785427 kernel: Switched APIC routing to physical x2apic.
Feb 9 18:58:37.785436 kernel: kvm-guest: setup PV IPIs
Feb 9 18:58:37.785445 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 18:58:37.785455 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 18:58:37.785464 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 18:58:37.785473 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 18:58:37.785482 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 18:58:37.785491 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 18:58:37.785500 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 18:58:37.785509 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 18:58:37.785519 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 18:58:37.785528 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 18:58:37.785545 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 18:58:37.785554 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 18:58:37.785565 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 18:58:37.785575 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 18:58:37.785584 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 18:58:37.785594 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 18:58:37.785603 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 18:58:37.785612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 18:58:37.785622 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 18:58:37.785645 kernel: Freeing SMP alternatives memory: 32K
Feb 9 18:58:37.785655 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:58:37.785664 kernel: LSM: Security Framework initializing
Feb 9 18:58:37.785674 kernel: SELinux: Initializing.
Feb 9 18:58:37.785683 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:58:37.785693 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:58:37.785703 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 18:58:37.785715 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 18:58:37.785725 kernel: ... version: 0
Feb 9 18:58:37.785734 kernel: ... bit width: 48
Feb 9 18:58:37.785744 kernel: ... generic registers: 6
Feb 9 18:58:37.785753 kernel: ... value mask: 0000ffffffffffff
Feb 9 18:58:37.785762 kernel: ... max period: 00007fffffffffff
Feb 9 18:58:37.785772 kernel: ... fixed-purpose events: 0
Feb 9 18:58:37.785781 kernel: ... event mask: 000000000000003f
Feb 9 18:58:37.785791 kernel: signal: max sigframe size: 1776
Feb 9 18:58:37.785803 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:58:37.785813 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:58:37.785822 kernel: x86: Booting SMP configuration:
Feb 9 18:58:37.785832 kernel: .... node #0, CPUs: #1
Feb 9 18:58:37.785841 kernel: kvm-clock: cpu 1, msr 44faa041, secondary cpu clock
Feb 9 18:58:37.785851 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 18:58:37.785860 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 9 18:58:37.785870 kernel: #2
Feb 9 18:58:37.785880 kernel: kvm-clock: cpu 2, msr 44faa081, secondary cpu clock
Feb 9 18:58:37.785889 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 18:58:37.785901 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 9 18:58:37.785911 kernel: #3
Feb 9 18:58:37.785920 kernel: kvm-clock: cpu 3, msr 44faa0c1, secondary cpu clock
Feb 9 18:58:37.785930 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 18:58:37.785939 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 9 18:58:37.785948 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 18:58:37.785958 kernel: smpboot: Max logical packages: 1
Feb 9 18:58:37.785967 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 18:58:37.785976 kernel: devtmpfs: initialized
Feb 9 18:58:37.785988 kernel: x86/mm: Memory block size: 128MB
Feb 9 18:58:37.785997 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:58:37.786007 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 18:58:37.786016 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:58:37.786026 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:58:37.786035 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:58:37.786045 kernel: audit: type=2000 audit(1707505117.627:1): state=initialized audit_enabled=0 res=1
Feb 9 18:58:37.786054 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:58:37.786073 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 18:58:37.786085 kernel: cpuidle: using governor menu
Feb 9 18:58:37.786094 kernel: ACPI: bus type PCI registered
Feb 9 18:58:37.786103 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:58:37.786113 kernel: dca service started, version 1.12.1
Feb 9 18:58:37.786123 kernel: PCI: Using configuration type 1 for base access
Feb 9 18:58:37.786132 kernel: PCI: Using configuration type 1 for extended access
Feb 9 18:58:37.786142 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 18:58:37.786152 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:58:37.786162 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:58:37.786174 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:58:37.786184 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:58:37.786193 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:58:37.786203 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:58:37.786212 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:58:37.786222 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:58:37.786232 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:58:37.786242 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:58:37.786251 kernel: ACPI: Interpreter enabled
Feb 9 18:58:37.786262 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 18:58:37.786271 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 18:58:37.786281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 18:58:37.786291 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 18:58:37.786301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:58:37.786442 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:58:37.786459 kernel: acpiphp: Slot [3] registered
Feb 9 18:58:37.786469 kernel: acpiphp: Slot [4] registered
Feb 9 18:58:37.786481 kernel: acpiphp: Slot [5] registered
Feb 9 18:58:37.786490 kernel: acpiphp: Slot [6] registered
Feb 9 18:58:37.786499 kernel: acpiphp: Slot [7] registered
Feb 9 18:58:37.786508 kernel: acpiphp: Slot [8] registered
Feb 9 18:58:37.786517 kernel: acpiphp: Slot [9] registered
Feb 9 18:58:37.786532 kernel: acpiphp: Slot [10] registered
Feb 9 18:58:37.786541 kernel: acpiphp: Slot [11] registered
Feb 9 18:58:37.786550 kernel: acpiphp: Slot [12] registered
Feb 9 18:58:37.786559 kernel: acpiphp: Slot [13] registered
Feb 9 18:58:37.786568 kernel: acpiphp: Slot [14] registered
Feb 9 18:58:37.786580 kernel: acpiphp: Slot [15] registered
Feb 9 18:58:37.786589 kernel: acpiphp: Slot [16] registered
Feb 9 18:58:37.786599 kernel: acpiphp: Slot [17] registered
Feb 9 18:58:37.786608 kernel: acpiphp: Slot [18] registered
Feb 9 18:58:37.786617 kernel: acpiphp: Slot [19] registered
Feb 9 18:58:37.786663 kernel: acpiphp: Slot [20] registered
Feb 9 18:58:37.786675 kernel: acpiphp: Slot [21] registered
Feb 9 18:58:37.786687 kernel: acpiphp: Slot [22] registered
Feb 9 18:58:37.786697 kernel: acpiphp: Slot [23] registered
Feb 9 18:58:37.786709 kernel: acpiphp: Slot [24] registered
Feb 9 18:58:37.786718 kernel: acpiphp: Slot [25] registered
Feb 9 18:58:37.786728 kernel: acpiphp: Slot [26] registered
Feb 9 18:58:37.786737 kernel: acpiphp: Slot [27] registered
Feb 9 18:58:37.786746 kernel: acpiphp: Slot [28] registered
Feb 9 18:58:37.786755 kernel: acpiphp: Slot [29] registered
Feb 9 18:58:37.786764 kernel: acpiphp: Slot [30] registered
Feb 9 18:58:37.786773 kernel: acpiphp: Slot [31] registered
Feb 9 18:58:37.786782 kernel: PCI host bridge to bus 0000:00
Feb 9 18:58:37.786891 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 18:58:37.786985 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 18:58:37.787076 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 18:58:37.787157 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 18:58:37.787235 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 9 18:58:37.787313 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:58:37.787432 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 18:58:37.787534 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 18:58:37.787645 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 18:58:37.787769 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 18:58:37.787874 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 18:58:37.787967 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 18:58:37.788069 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 18:58:37.788163 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 18:58:37.788268 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 18:58:37.788360 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 9 18:58:37.788450 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 9 18:58:37.788548 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 18:58:37.788656 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 9 18:58:37.788752 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 9 18:58:37.788847 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 9 18:58:37.788939 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 18:58:37.789040 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:58:37.789160 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 18:58:37.789286 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 9 18:58:37.789408 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 9 18:58:37.789546 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 18:58:37.789675 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 18:58:37.789790 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 9 18:58:37.789900 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 9 18:58:37.790036 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 18:58:37.790174 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 18:58:37.790306 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 9 18:58:37.790429 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 9 18:58:37.790568 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 9 18:58:37.790593 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 18:58:37.790604 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 18:58:37.790613 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 18:58:37.790623 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 18:58:37.790653 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 18:58:37.790664 kernel: iommu: Default domain type: Translated
Feb 9 18:58:37.790674 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 18:58:37.790794 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 18:58:37.790934 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 18:58:37.791044 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 18:58:37.791065 kernel: vgaarb: loaded
Feb 9 18:58:37.791076 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:58:37.791086 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 18:58:37.791096 kernel: PTP clock support registered
Feb 9 18:58:37.791105 kernel: PCI: Using ACPI for IRQ routing
Feb 9 18:58:37.791115 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 18:58:37.791127 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 18:58:37.791137 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 9 18:58:37.791146 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 18:58:37.791156 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 18:58:37.791166 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 18:58:37.791175 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:58:37.791186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:58:37.791195 kernel: pnp: PnP ACPI init
Feb 9 18:58:37.791290 kernel: pnp 00:02: [dma 2]
Feb 9 18:58:37.791307 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 18:58:37.791317 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 18:58:37.791327 kernel: NET: Registered PF_INET protocol family
Feb 9 18:58:37.791337 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:58:37.791347 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:58:37.791356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:58:37.791366 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:58:37.791376 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:58:37.791388 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:58:37.791398 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:58:37.791408 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:58:37.791418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:58:37.791427 kernel: NET: Registered PF_XDP protocol family
Feb 9 18:58:37.791513 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 18:58:37.791595 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 18:58:37.791692 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 18:58:37.791815 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 18:58:37.791896 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 9 18:58:37.791975 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 18:58:37.792051 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 18:58:37.792135 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 18:58:37.792145 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:58:37.792152 kernel: Initialise system trusted keyrings
Feb 9 18:58:37.792159 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:58:37.792167 kernel: Key type asymmetric registered
Feb 9 18:58:37.792175 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:58:37.792182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:58:37.792189 kernel: io scheduler mq-deadline registered
Feb 9 18:58:37.792196 kernel: io scheduler kyber registered
Feb 9 18:58:37.792203 kernel: io scheduler bfq registered
Feb 9 18:58:37.792210 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 18:58:37.792218 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 18:58:37.792225 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 18:58:37.792231 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 18:58:37.792239 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:58:37.792246 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 18:58:37.792253 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 18:58:37.792260 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 18:58:37.792267 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 18:58:37.792344 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 18:58:37.792355 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 18:58:37.792420 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 18:58:37.792489 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T18:58:37 UTC (1707505117)
Feb 9 18:58:37.792555 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 18:58:37.792564 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:58:37.792571 kernel: Segment Routing with IPv6
Feb 9 18:58:37.792578 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:58:37.792585 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:58:37.792592 kernel: Key type dns_resolver registered
Feb 9 18:58:37.792598 kernel: IPI shorthand broadcast: enabled
Feb 9 18:58:37.792605 kernel: sched_clock: Marking stable (394379943, 74465833)->(476280967, -7435191)
Feb 9 18:58:37.792614 kernel: registered taskstats version 1
Feb 9 18:58:37.792621 kernel: Loading compiled-in X.509 certificates
Feb 9 18:58:37.792647 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 18:58:37.792654 kernel: Key type .fscrypt registered
Feb 9 18:58:37.792661 kernel: Key type fscrypt-provisioning registered
Feb 9 18:58:37.792668 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:58:37.792675 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:58:37.792682 kernel: ima: No architecture policies found
Feb 9 18:58:37.792691 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 18:58:37.792697 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 18:58:37.792704 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 18:58:37.792711 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 18:58:37.792718 kernel: Run /init as init process
Feb 9 18:58:37.792725 kernel: with arguments:
Feb 9 18:58:37.792732 kernel: /init
Feb 9 18:58:37.792739 kernel: with environment:
Feb 9 18:58:37.792753 kernel: HOME=/
Feb 9 18:58:37.792761 kernel: TERM=linux
Feb 9 18:58:37.792770 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:58:37.792779 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:58:37.792789 systemd[1]: Detected virtualization kvm.
Feb 9 18:58:37.792797 systemd[1]: Detected architecture x86-64.
Feb 9 18:58:37.792804 systemd[1]: Running in initrd.
Feb 9 18:58:37.792811 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:58:37.792819 systemd[1]: Hostname set to <localhost>.
Feb 9 18:58:37.792828 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:58:37.792835 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:58:37.792843 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:58:37.792850 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:58:37.792857 systemd[1]: Reached target paths.target.
Feb 9 18:58:37.792865 systemd[1]: Reached target slices.target.
Feb 9 18:58:37.792872 systemd[1]: Reached target swap.target.
Feb 9 18:58:37.792880 systemd[1]: Reached target timers.target.
Feb 9 18:58:37.792889 systemd[1]: Listening on iscsid.socket.
Feb 9 18:58:37.792896 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:58:37.792904 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:58:37.792912 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:58:37.792920 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:58:37.792929 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:58:37.792938 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:58:37.792948 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:58:37.792956 systemd[1]: Reached target sockets.target.
Feb 9 18:58:37.792963 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:58:37.792970 systemd[1]: Finished network-cleanup.service.
Feb 9 18:58:37.792978 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:58:37.792985 systemd[1]: Starting systemd-journald.service...
Feb 9 18:58:37.792993 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:58:37.793002 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:58:37.793010 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:58:37.793017 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:58:37.793025 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:58:37.793032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:58:37.793040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:58:37.793050 systemd-journald[198]: Journal started
Feb 9 18:58:37.793097 systemd-journald[198]: Runtime Journal (/run/log/journal/ad3a732c452d4c7e923a519064f6d5ab) is 6.0M, max 48.5M, 42.5M free.
Feb 9 18:58:37.782363 systemd-modules-load[199]: Inserted module 'overlay'
Feb 9 18:58:37.813733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:58:37.813752 kernel: Bridge firewalling registered
Feb 9 18:58:37.813761 systemd[1]: Started systemd-journald.service.
Feb 9 18:58:37.813772 kernel: audit: type=1130 audit(1707505117.811:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.797710 systemd-resolved[200]: Positive Trust Anchors:
Feb 9 18:58:37.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.797717 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:58:37.819811 kernel: audit: type=1130 audit(1707505117.815:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.819825 kernel: audit: type=1130 audit(1707505117.817:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.797744 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:58:37.825536 kernel: audit: type=1130 audit(1707505117.821:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.799781 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb 9 18:58:37.809495 systemd-modules-load[199]: Inserted module 'br_netfilter'
Feb 9 18:58:37.815810 systemd[1]: Started systemd-resolved.service.
Feb 9 18:58:37.818607 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:58:37.821813 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:58:37.829023 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:58:37.830642 kernel: SCSI subsystem initialized
Feb 9 18:58:37.841099 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:58:37.841140 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:58:37.841151 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:58:37.841250 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:58:37.845176 kernel: audit: type=1130 audit(1707505117.841:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.844427 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:58:37.845163 systemd-modules-load[199]: Inserted module 'dm_multipath'
Feb 9 18:58:37.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.849645 kernel: audit: type=1130 audit(1707505117.845:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.845596 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:58:37.846878 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:58:37.852638 dracut-cmdline[217]: dracut-dracut-053
Feb 9 18:58:37.854191 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 18:58:37.854531 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:58:37.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.860642 kernel: audit: type=1130 audit(1707505117.857:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.896651 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:58:37.907648 kernel: iscsi: registered transport (tcp)
Feb 9 18:58:37.925962 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:58:37.925986 kernel: QLogic iSCSI HBA Driver
Feb 9 18:58:37.946065 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:58:37.949735 kernel: audit: type=1130 audit(1707505117.946:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:37.947573 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:58:37.989648 kernel: raid6: avx2x4 gen() 30218 MB/s
Feb 9 18:58:38.006646 kernel: raid6: avx2x4 xor() 7298 MB/s
Feb 9 18:58:38.023641 kernel: raid6: avx2x2 gen() 32141 MB/s
Feb 9 18:58:38.040641 kernel: raid6: avx2x2 xor() 19317 MB/s
Feb 9 18:58:38.057645 kernel: raid6: avx2x1 gen() 26705 MB/s
Feb 9 18:58:38.074641 kernel: raid6: avx2x1 xor() 15424 MB/s
Feb 9 18:58:38.091642 kernel: raid6: sse2x4 gen() 14854 MB/s
Feb 9 18:58:38.108644 kernel: raid6: sse2x4 xor() 7056 MB/s
Feb 9 18:58:38.125641 kernel: raid6: sse2x2 gen() 16264 MB/s
Feb 9 18:58:38.142641 kernel: raid6: sse2x2 xor() 9886 MB/s
Feb 9 18:58:38.159641 kernel: raid6: sse2x1 gen() 12486 MB/s
Feb 9 18:58:38.177073 kernel: raid6: sse2x1 xor() 7844 MB/s
Feb 9 18:58:38.177083 kernel: raid6: using algorithm avx2x2 gen() 32141 MB/s
Feb 9 18:58:38.177091 kernel: raid6: .... xor() 19317 MB/s, rmw enabled
Feb 9 18:58:38.177102 kernel: raid6: using avx2x2 recovery algorithm
Feb 9 18:58:38.188645 kernel: xor: automatically using best checksumming function avx
Feb 9 18:58:38.273662 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 18:58:38.279243 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:58:38.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:38.281000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:58:38.281000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:58:38.282519 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:58:38.283801 kernel: audit: type=1130 audit(1707505118.279:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:38.293576 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb 9 18:58:38.297593 systemd[1]: Started systemd-udevd.service.
Feb 9 18:58:38.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:38.299034 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:58:38.308882 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Feb 9 18:58:38.327809 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:58:38.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:38.328495 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:58:38.360281 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:58:38.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:38.390049 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 18:58:38.391961 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 18:58:38.391986 kernel: GPT:9289727 != 19775487
Feb 9 18:58:38.391998 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 18:58:38.392011 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:58:38.394078 kernel: GPT:9289727 != 19775487
Feb 9 18:58:38.394106 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 18:58:38.394119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:58:38.408960 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 18:58:38.409009 kernel: AES CTR mode by8 optimization enabled
Feb 9 18:58:38.421930 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446)
Feb 9 18:58:38.421900 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:58:38.442997 kernel: libata version 3.00 loaded.
Feb 9 18:58:38.443021 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 9 18:58:38.443141 kernel: scsi host0: ata_piix
Feb 9 18:58:38.443234 kernel: scsi host1: ata_piix
Feb 9 18:58:38.443313 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 9 18:58:38.443322 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 9 18:58:38.451195 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:58:38.451883 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 18:58:38.456726 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 18:58:38.460100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:58:38.461344 systemd[1]: Starting disk-uuid.service...
Feb 9 18:58:38.468531 disk-uuid[516]: Primary Header is updated.
Feb 9 18:58:38.468531 disk-uuid[516]: Secondary Entries is updated.
Feb 9 18:58:38.468531 disk-uuid[516]: Secondary Header is updated.
Feb 9 18:58:38.471653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:58:38.474646 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:58:38.591650 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 9 18:58:38.595464 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 9 18:58:38.622646 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 9 18:58:38.622761 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 18:58:38.639659 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 9 18:58:39.474656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 18:58:39.474980 disk-uuid[517]: The operation has completed successfully.
Feb 9 18:58:39.499162 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 18:58:39.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.499245 systemd[1]: Finished disk-uuid.service.
Feb 9 18:58:39.503784 systemd[1]: Starting verity-setup.service...
Feb 9 18:58:39.514652 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 9 18:58:39.531786 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 18:58:39.532937 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 18:58:39.534563 systemd[1]: Finished verity-setup.service.
Feb 9 18:58:39.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.588171 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 18:58:39.589071 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 18:58:39.588312 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 18:58:39.589101 systemd[1]: Starting ignition-setup.service...
Feb 9 18:58:39.590843 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 18:58:39.599121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 18:58:39.599148 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:58:39.599157 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:58:39.605344 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 18:58:39.642155 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 18:58:39.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.643000 audit: BPF prog-id=9 op=LOAD
Feb 9 18:58:39.643620 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:58:39.661337 systemd-networkd[701]: lo: Link UP
Feb 9 18:58:39.661344 systemd-networkd[701]: lo: Gained carrier
Feb 9 18:58:39.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.661707 systemd-networkd[701]: Enumeration completed
Feb 9 18:58:39.661765 systemd[1]: Started systemd-networkd.service.
Feb 9 18:58:39.661874 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:58:39.672817 systemd-networkd[701]: eth0: Link UP
Feb 9 18:58:39.672823 systemd-networkd[701]: eth0: Gained carrier
Feb 9 18:58:39.673465 systemd[1]: Reached target network.target.
Feb 9 18:58:39.674125 systemd[1]: Starting iscsiuio.service...
Feb 9 18:58:39.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.677656 systemd[1]: Started iscsiuio.service.
Feb 9 18:58:39.678669 systemd[1]: Starting iscsid.service...
Feb 9 18:58:39.681438 iscsid[706]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:58:39.681438 iscsid[706]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 18:58:39.681438 iscsid[706]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 18:58:39.681438 iscsid[706]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:58:39.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.688540 iscsid[706]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 18:58:39.688540 iscsid[706]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 18:58:39.682371 systemd[1]: Started iscsid.service.
Feb 9 18:58:39.683041 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:58:39.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.690713 systemd-networkd[701]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:58:39.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.690813 systemd[1]: Finished ignition-setup.service.
Feb 9 18:58:39.692096 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 18:58:39.692809 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:58:39.694804 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:58:39.696123 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:58:39.696883 systemd[1]: Reached target remote-fs.target.
Feb 9 18:58:39.698735 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:58:39.705441 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:58:39.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.724725 ignition[716]: Ignition 2.14.0
Feb 9 18:58:39.724733 ignition[716]: Stage: fetch-offline
Feb 9 18:58:39.724774 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:58:39.724782 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:58:39.724867 ignition[716]: parsed url from cmdline: ""
Feb 9 18:58:39.724870 ignition[716]: no config URL provided
Feb 9 18:58:39.724874 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:58:39.724880 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:58:39.724893 ignition[716]: op(1): [started] loading QEMU firmware config module
Feb 9 18:58:39.724897 ignition[716]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 18:58:39.728404 ignition[716]: op(1): [finished] loading QEMU firmware config module
Feb 9 18:58:39.739718 ignition[716]: parsing config with SHA512: 262b98da580bcf5be0c4c727e85a6e644e9ad23d217a0d240d3cafa1da81d4aab39d9fb525f7ad22dc46d7bcbf4ac4d58a87be1c4efb8853cd45b517020fc667
Feb 9 18:58:39.754031 unknown[716]: fetched base config from "system"
Feb 9 18:58:39.754043 unknown[716]: fetched user config from "qemu"
Feb 9 18:58:39.754424 ignition[716]: fetch-offline: fetch-offline passed
Feb 9 18:58:39.755433 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:58:39.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.754471 ignition[716]: Ignition finished successfully
Feb 9 18:58:39.756573 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 18:58:39.757667 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:58:39.766954 ignition[729]: Ignition 2.14.0
Feb 9 18:58:39.767689 ignition[729]: Stage: kargs
Feb 9 18:58:39.768132 ignition[729]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:58:39.768141 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:58:39.770190 ignition[729]: kargs: kargs passed
Feb 9 18:58:39.770230 ignition[729]: Ignition finished successfully
Feb 9 18:58:39.771551 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:58:39.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.773351 systemd[1]: Starting ignition-disks.service...
Feb 9 18:58:39.779045 ignition[735]: Ignition 2.14.0
Feb 9 18:58:39.779052 ignition[735]: Stage: disks
Feb 9 18:58:39.779123 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:58:39.779131 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:58:39.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.780492 systemd[1]: Finished ignition-disks.service.
Feb 9 18:58:39.779955 ignition[735]: disks: disks passed
Feb 9 18:58:39.781284 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:58:39.779981 ignition[735]: Ignition finished successfully
Feb 9 18:58:39.782526 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:58:39.783102 systemd[1]: Reached target local-fs.target.
Feb 9 18:58:39.784156 systemd[1]: Reached target sysinit.target.
Feb 9 18:58:39.784199 systemd[1]: Reached target basic.target.
Feb 9 18:58:39.784858 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:58:39.793027 systemd-fsck[743]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 18:58:39.797380 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:58:39.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.799032 systemd[1]: Mounting sysroot.mount...
Feb 9 18:58:39.804436 systemd[1]: Mounted sysroot.mount.
Feb 9 18:58:39.806829 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:58:39.804550 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:58:39.805328 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:58:39.805745 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 18:58:39.805776 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:58:39.805794 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:58:39.807562 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:58:39.809000 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:58:39.812763 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:58:39.814996 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:58:39.817354 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:58:39.820438 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:58:39.841348 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:58:39.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.842567 systemd[1]: Starting ignition-mount.service...
Feb 9 18:58:39.843532 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:58:39.846700 bash[794]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 18:58:39.853364 ignition[796]: INFO : Ignition 2.14.0
Feb 9 18:58:39.853364 ignition[796]: INFO : Stage: mount
Feb 9 18:58:39.854498 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:58:39.854498 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:58:39.854498 ignition[796]: INFO : mount: mount passed
Feb 9 18:58:39.854498 ignition[796]: INFO : Ignition finished successfully
Feb 9 18:58:39.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:39.854994 systemd[1]: Finished ignition-mount.service.
Feb 9 18:58:39.859246 systemd[1]: Finished sysroot-boot.service.
Feb 9 18:58:39.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:40.541254 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:58:40.547043 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Feb 9 18:58:40.547070 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 18:58:40.547080 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:58:40.548140 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:58:40.550786 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:58:40.551891 systemd[1]: Starting ignition-files.service...
Feb 9 18:58:40.564672 ignition[824]: INFO : Ignition 2.14.0
Feb 9 18:58:40.564672 ignition[824]: INFO : Stage: files
Feb 9 18:58:40.565834 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:58:40.565834 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:58:40.567411 ignition[824]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 18:58:40.568669 ignition[824]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 18:58:40.568669 ignition[824]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 18:58:40.570848 ignition[824]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 18:58:40.571811 ignition[824]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 18:58:40.571811 ignition[824]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 18:58:40.571426 unknown[824]: wrote ssh authorized keys file for user: core
Feb 9 18:58:40.574319 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 18:58:40.574319 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 18:58:40.941561 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 18:58:41.098047 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 18:58:41.100226 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 18:58:41.100226 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 18:58:41.100226 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 18:58:41.149783 systemd-networkd[701]: eth0: Gained IPv6LL
Feb 9 18:58:41.391689 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 18:58:41.472462 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 18:58:41.472462 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 18:58:41.472462 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:58:41.472462 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 9 18:58:41.540365 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 18:58:41.874671 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 9 18:58:41.874671 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:58:41.878008 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:58:41.878008 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 9 18:58:41.922059 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 18:58:42.440763 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 9 18:58:42.443306 ignition[824]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:58:42.463676 ignition[824]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:58:42.482195 ignition[824]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:58:42.483527 ignition[824]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:58:42.483527 ignition[824]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:58:42.483527 ignition[824]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:58:42.483527 ignition[824]: INFO : files: files passed
Feb 9 18:58:42.483527 ignition[824]: INFO : Ignition finished successfully
Feb 9 18:58:42.496263 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 18:58:42.496289 kernel: audit: type=1130 audit(1707505122.484:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.496301 kernel: audit: type=1130 audit(1707505122.492:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.496311 kernel: audit: type=1130 audit(1707505122.495:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.483613 systemd[1]: Finished ignition-files.service.
Feb 9 18:58:42.502048 kernel: audit: type=1131 audit(1707505122.495:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.486044 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 18:58:42.489846 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 18:58:42.504522 initrd-setup-root-after-ignition[848]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 18:58:42.490675 systemd[1]: Starting ignition-quench.service...
Feb 9 18:58:42.506287 initrd-setup-root-after-ignition[850]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 18:58:42.491694 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 18:58:42.492990 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 18:58:42.493049 systemd[1]: Finished ignition-quench.service.
Feb 9 18:58:42.496351 systemd[1]: Reached target ignition-complete.target.
Feb 9 18:58:42.501495 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 18:58:42.514156 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 18:58:42.519465 kernel: audit: type=1130 audit(1707505122.513:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.519481 kernel: audit: type=1131 audit(1707505122.513:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.514228 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 18:58:42.514355 systemd[1]: Reached target initrd-fs.target.
Feb 9 18:58:42.520064 systemd[1]: Reached target initrd.target.
Feb 9 18:58:42.520672 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 18:58:42.521281 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 18:58:42.530591 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 18:58:42.534150 kernel: audit: type=1130 audit(1707505122.530:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.531794 systemd[1]: Starting initrd-cleanup.service...
Feb 9 18:58:42.539607 systemd[1]: Stopped target nss-lookup.target.
Feb 9 18:58:42.566723 kernel: audit: type=1131 audit(1707505122.539:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.566738 kernel: audit: type=1131 audit(1707505122.539:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.566748 kernel: audit: type=1131 audit(1707505122.543:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.567107 iscsid[706]: iscsid shutting down.
Feb 9 18:58:42.539753 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 18:58:42.568364 ignition[865]: INFO : Ignition 2.14.0
Feb 9 18:58:42.568364 ignition[865]: INFO : Stage: umount
Feb 9 18:58:42.568364 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:58:42.568364 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:58:42.568364 ignition[865]: INFO : umount: umount passed
Feb 9 18:58:42.568364 ignition[865]: INFO : Ignition finished successfully
Feb 9 18:58:42.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.539881 systemd[1]: Stopped target timers.target.
Feb 9 18:58:42.539988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 18:58:42.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.540072 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 18:58:42.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.540289 systemd[1]: Stopped target initrd.target.
Feb 9 18:58:42.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.542527 systemd[1]: Stopped target basic.target.
Feb 9 18:58:42.542647 systemd[1]: Stopped target ignition-complete.target.
Feb 9 18:58:42.542750 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 18:58:42.542858 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 18:58:42.583000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 18:58:42.542987 systemd[1]: Stopped target remote-fs.target.
Feb 9 18:58:42.543097 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 18:58:42.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.543214 systemd[1]: Stopped target sysinit.target.
Feb 9 18:58:42.543325 systemd[1]: Stopped target local-fs.target.
Feb 9 18:58:42.543435 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 18:58:42.543547 systemd[1]: Stopped target swap.target.
Feb 9 18:58:42.543646 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 18:58:42.543729 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 18:58:42.543933 systemd[1]: Stopped target cryptsetup.target.
Feb 9 18:58:42.546125 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 18:58:42.546204 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 18:58:42.546388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 18:58:42.546465 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 18:58:42.548706 systemd[1]: Stopped target paths.target.
Feb 9 18:58:42.548837 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 18:58:42.550666 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 18:58:42.550894 systemd[1]: Stopped target slices.target.
Feb 9 18:58:42.551004 systemd[1]: Stopped target sockets.target.
Feb 9 18:58:42.551116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 18:58:42.551198 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 18:58:42.551358 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 18:58:42.551432 systemd[1]: Stopped ignition-files.service.
Feb 9 18:58:42.552285 systemd[1]: Stopping ignition-mount.service...
Feb 9 18:58:42.552711 systemd[1]: Stopping iscsid.service...
Feb 9 18:58:42.553458 systemd[1]: Stopping sysroot-boot.service...
Feb 9 18:58:42.553667 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 18:58:42.553788 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 18:58:42.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.554122 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 18:58:42.554228 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 18:58:42.556904 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 18:58:42.556986 systemd[1]: Stopped iscsid.service.
Feb 9 18:58:42.557400 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 18:58:42.557452 systemd[1]: Finished initrd-cleanup.service.
Feb 9 18:58:42.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.558324 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 18:58:42.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.558350 systemd[1]: Closed iscsid.socket.
Feb 9 18:58:42.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561213 systemd[1]: Stopping iscsiuio.service...
Feb 9 18:58:42.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.561446 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 18:58:42.561520 systemd[1]: Stopped iscsiuio.service.
Feb 9 18:58:42.561997 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 18:58:42.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:42.562059 systemd[1]: Stopped ignition-mount.service.
Feb 9 18:58:42.562347 systemd[1]: Stopped target network.target.
Feb 9 18:58:42.562415 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 18:58:42.562436 systemd[1]: Closed iscsiuio.socket.
Feb 9 18:58:42.562526 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 18:58:42.562552 systemd[1]: Stopped ignition-disks.service.
Feb 9 18:58:42.562803 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 18:58:42.562827 systemd[1]: Stopped ignition-kargs.service.
Feb 9 18:58:42.562919 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 18:58:42.562950 systemd[1]: Stopped ignition-setup.service.
Feb 9 18:58:42.563106 systemd[1]: Stopping systemd-networkd.service...
Feb 9 18:58:42.563259 systemd[1]: Stopping systemd-resolved.service...
Feb 9 18:58:42.567297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 18:58:42.568814 systemd-networkd[701]: eth0: DHCPv6 lease lost
Feb 9 18:58:42.621000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 18:58:42.568934 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 18:58:42.569008 systemd[1]: Stopped sysroot-boot.service.
Feb 9 18:58:42.570246 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 18:58:42.570317 systemd[1]: Stopped systemd-networkd.service.
Feb 9 18:58:42.572820 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 18:58:42.572858 systemd[1]: Closed systemd-networkd.socket.
Feb 9 18:58:42.573624 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 18:58:42.573702 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 18:58:42.575404 systemd[1]: Stopping network-cleanup.service...
Feb 9 18:58:42.575911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 18:58:42.575955 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 18:58:42.576710 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:58:42.576743 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:58:42.577996 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 18:58:42.578025 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 18:58:42.578154 systemd[1]: Stopping systemd-udevd.service...
Feb 9 18:58:42.578931 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 18:58:42.579284 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 18:58:42.579351 systemd[1]: Stopped systemd-resolved.service.
Feb 9 18:58:42.584604 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 18:58:42.584698 systemd[1]: Stopped network-cleanup.service.
Feb 9 18:58:42.599479 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 18:58:42.599580 systemd[1]: Stopped systemd-udevd.service.
Feb 9 18:58:42.600678 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 18:58:42.600706 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 18:58:42.601445 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 18:58:42.601479 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 18:58:42.602922 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 18:58:42.602963 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 18:58:42.603521 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 18:58:42.603549 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 18:58:42.603853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 18:58:42.603878 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 18:58:42.643753 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Feb 9 18:58:42.604578 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 18:58:42.607001 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 18:58:42.607040 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 18:58:42.608741 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 18:58:42.608773 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 18:58:42.608850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 18:58:42.608877 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 18:58:42.610977 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 18:58:42.611295 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 18:58:42.611355 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 18:58:42.611569 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 18:58:42.612347 systemd[1]: Starting initrd-switch-root.service...
Feb 9 18:58:42.627402 systemd[1]: Switching root.
Feb 9 18:58:42.651145 systemd-journald[198]: Journal stopped
Feb 9 18:58:45.455860 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 18:58:45.455950 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 18:58:45.455962 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 18:58:45.455975 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 18:58:45.455984 kernel: SELinux: policy capability open_perms=1
Feb 9 18:58:45.455996 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 18:58:45.456006 kernel: SELinux: policy capability always_check_network=0
Feb 9 18:58:45.456016 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 18:58:45.456029 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 18:58:45.456039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 18:58:45.456049 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 18:58:45.456064 systemd[1]: Successfully loaded SELinux policy in 35.120ms.
Feb 9 18:58:45.456587 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.184ms.
Feb 9 18:58:45.456602 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:58:45.456615 systemd[1]: Detected virtualization kvm.
Feb 9 18:58:45.456625 systemd[1]: Detected architecture x86-64.
Feb 9 18:58:45.456645 systemd[1]: Detected first boot.
Feb 9 18:58:45.456656 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:58:45.456666 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 18:58:45.456676 systemd[1]: Populated /etc with preset unit settings.
Feb 9 18:58:45.456687 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:58:45.456700 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:58:45.456712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:58:45.456723 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 18:58:45.456733 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 18:58:45.456746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 18:58:45.456756 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 18:58:45.456767 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 18:58:45.456777 systemd[1]: Created slice system-getty.slice.
Feb 9 18:58:45.456789 systemd[1]: Created slice system-modprobe.slice.
Feb 9 18:58:45.456799 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 18:58:45.456809 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 18:58:45.456820 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 18:58:45.456830 systemd[1]: Created slice user.slice.
Feb 9 18:58:45.456840 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:58:45.456850 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 18:58:45.456861 systemd[1]: Set up automount boot.automount.
Feb 9 18:58:45.456879 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 18:58:45.456890 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 18:58:45.456901 systemd[1]: Stopped target initrd-fs.target.
Feb 9 18:58:45.456912 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 18:58:45.456922 systemd[1]: Reached target integritysetup.target.
Feb 9 18:58:45.456932 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:58:45.456946 systemd[1]: Reached target remote-fs.target.
Feb 9 18:58:45.456956 systemd[1]: Reached target slices.target.
Feb 9 18:58:45.456966 systemd[1]: Reached target swap.target.
Feb 9 18:58:45.456976 systemd[1]: Reached target torcx.target.
Feb 9 18:58:45.456988 systemd[1]: Reached target veritysetup.target.
Feb 9 18:58:45.456998 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 18:58:45.457008 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 18:58:45.457018 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:58:45.457029 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:58:45.457039 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:58:45.457050 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 18:58:45.457060 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 18:58:45.457070 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 18:58:45.457081 systemd[1]: Mounting media.mount...
Feb 9 18:58:45.457092 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 18:58:45.457102 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 18:58:45.457113 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 18:58:45.457123 systemd[1]: Mounting tmp.mount...
Feb 9 18:58:45.457134 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 18:58:45.457144 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 18:58:45.457155 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:58:45.457165 systemd[1]: Starting modprobe@configfs.service...
Feb 9 18:58:45.457177 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 18:58:45.457187 systemd[1]: Starting modprobe@drm.service...
Feb 9 18:58:45.457198 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 18:58:45.457208 systemd[1]: Starting modprobe@fuse.service...
Feb 9 18:58:45.457219 systemd[1]: Starting modprobe@loop.service...
Feb 9 18:58:45.457230 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 18:58:45.457772 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 18:58:45.457785 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 18:58:45.457796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 18:58:45.457808 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 18:58:45.457819 systemd[1]: Stopped systemd-journald.service.
Feb 9 18:58:45.457829 kernel: fuse: init (API version 7.34)
Feb 9 18:58:45.457839 kernel: loop: module loaded
Feb 9 18:58:45.457849 systemd[1]: Starting systemd-journald.service...
Feb 9 18:58:45.457859 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:58:45.457876 systemd[1]: Starting systemd-network-generator.service...
Feb 9 18:58:45.457887 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 18:58:45.457897 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:58:45.457909 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 18:58:45.457919 systemd[1]: Stopped verity-setup.service.
Feb 9 18:58:45.457930 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 18:58:45.457940 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 18:58:45.457950 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 18:58:45.457960 systemd[1]: Mounted media.mount.
Feb 9 18:58:45.457970 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 18:58:45.457980 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 18:58:45.457990 systemd[1]: Mounted tmp.mount.
Feb 9 18:58:45.458004 systemd-journald[976]: Journal started
Feb 9 18:58:45.458043 systemd-journald[976]: Runtime Journal (/run/log/journal/ad3a732c452d4c7e923a519064f6d5ab) is 6.0M, max 48.5M, 42.5M free.
Feb 9 18:58:42.696000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 18:58:43.256000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 18:58:43.256000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 18:58:43.256000 audit: BPF prog-id=10 op=LOAD
Feb 9 18:58:43.256000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 18:58:43.256000 audit: BPF prog-id=11 op=LOAD
Feb 9 18:58:43.256000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 18:58:43.282000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 18:58:43.282000 audit[899]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:58:43.282000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:58:43.284000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 18:58:43.284000 audit[899]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079b9 a2=1ed a3=0 items=2 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:58:43.284000 audit: CWD cwd="/"
Feb 9 18:58:43.284000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:43.284000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:43.284000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:58:45.349000 audit: BPF prog-id=12 op=LOAD
Feb 9 18:58:45.349000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 18:58:45.350000 audit: BPF prog-id=13 op=LOAD
Feb 9 18:58:45.350000 audit: BPF prog-id=14 op=LOAD
Feb 9 18:58:45.350000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 18:58:45.350000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 18:58:45.351000 audit: BPF prog-id=15 op=LOAD
Feb 9 18:58:45.351000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 18:58:45.351000 audit: BPF prog-id=16 op=LOAD
Feb 9 18:58:45.351000 audit: BPF prog-id=17 op=LOAD
Feb 9 18:58:45.351000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 18:58:45.351000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 18:58:45.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.364000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 18:58:45.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.434000 audit: BPF prog-id=18 op=LOAD
Feb 9 18:58:45.434000 audit: BPF prog-id=19 op=LOAD
Feb 9 18:58:45.434000 audit: BPF prog-id=20 op=LOAD
Feb 9 18:58:45.434000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 18:58:45.434000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 18:58:45.459906 systemd[1]: Started systemd-journald.service.
Feb 9 18:58:45.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.453000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:58:45.453000 audit[976]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdf3fe23f0 a2=4000 a3=7ffdf3fe248c items=0 ppid=1 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:58:45.453000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:58:45.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 18:58:45.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.349125 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:58:43.282003 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:58:45.349136 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 18:58:43.282178 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:58:45.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.352369 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:58:43.282193 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:58:45.460224 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:58:43.282217 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:58:45.461005 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 9 18:58:43.282225 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:58:45.461177 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:58:45.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:43.282252 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:58:45.461956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:58:43.282262 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:58:45.462136 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:58:43.282440 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:58:45.463035 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:58:43.282469 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:58:45.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:58:45.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.463822 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:58:43.282479 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:58:45.464013 systemd[1]: Finished modprobe@drm.service. Feb 9 18:58:43.282799 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:58:45.464806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:58:43.282833 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:58:45.464977 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:58:43.282853 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:58:45.465882 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Feb 9 18:58:43.282867 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:58:45.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.466039 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:58:43.282892 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:58:43.282910 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:58:45.099567 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:45Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:58:45.466848 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 9 18:58:45.099825 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:45Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:58:45.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.467005 systemd[1]: Finished modprobe@loop.service. Feb 9 18:58:45.099923 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:45Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:58:45.100066 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:45Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:58:45.100111 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:45Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:58:45.100160 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-09T18:58:45Z" 
level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:58:45.468037 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:58:45.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.468886 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:58:45.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.469807 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:58:45.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:58:45.470829 systemd[1]: Reached target network-pre.target. Feb 9 18:58:45.472366 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:58:45.473841 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:58:45.474356 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:58:45.475479 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:58:45.476929 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:58:45.477516 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:58:45.478406 systemd[1]: Starting systemd-random-seed.service... 
Feb 9 18:58:45.479042 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 18:58:45.485468 systemd-journald[976]: Time spent on flushing to /var/log/journal/ad3a732c452d4c7e923a519064f6d5ab is 21.690ms for 1106 entries.
Feb 9 18:58:45.485468 systemd-journald[976]: System Journal (/var/log/journal/ad3a732c452d4c7e923a519064f6d5ab) is 8.0M, max 195.6M, 187.6M free.
Feb 9 18:58:45.521640 systemd-journald[976]: Received client request to flush runtime journal.
Feb 9 18:58:45.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.479791 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:58:45.481077 systemd[1]: Starting systemd-sysusers.service...
Feb 9 18:58:45.483179 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 18:58:45.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.483852 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 18:58:45.491261 systemd[1]: Finished systemd-random-seed.service.
Feb 9 18:58:45.523435 udevadm[1005]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 18:58:45.492215 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:58:45.492871 systemd[1]: Reached target first-boot-complete.target.
Feb 9 18:58:45.499740 systemd[1]: Finished systemd-sysusers.service.
Feb 9 18:58:45.501382 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:58:45.506027 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:58:45.507564 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 18:58:45.516659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:58:45.522284 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 18:58:45.905993 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 18:58:45.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.906000 audit: BPF prog-id=21 op=LOAD
Feb 9 18:58:45.906000 audit: BPF prog-id=22 op=LOAD
Feb 9 18:58:45.906000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 18:58:45.906000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 18:58:45.907821 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:58:45.921650 systemd-udevd[1007]: Using default interface naming scheme 'v252'.
Feb 9 18:58:45.932559 systemd[1]: Started systemd-udevd.service.
Feb 9 18:58:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.933000 audit: BPF prog-id=23 op=LOAD
Feb 9 18:58:45.934845 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:58:45.938000 audit: BPF prog-id=24 op=LOAD
Feb 9 18:58:45.938000 audit: BPF prog-id=25 op=LOAD
Feb 9 18:58:45.938000 audit: BPF prog-id=26 op=LOAD
Feb 9 18:58:45.939524 systemd[1]: Starting systemd-userdbd.service...
Feb 9 18:58:45.961336 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 18:58:45.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:45.964904 systemd[1]: Started systemd-userdbd.service.
Feb 9 18:58:45.985664 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 18:58:45.990645 kernel: ACPI: button: Power Button [PWRF]
Feb 9 18:58:46.009000 audit[1008]: AVC avc: denied { confidentiality } for pid=1008 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 18:58:46.023249 systemd-networkd[1014]: lo: Link UP
Feb 9 18:58:46.023264 systemd-networkd[1014]: lo: Gained carrier
Feb 9 18:58:46.023804 systemd-networkd[1014]: Enumeration completed
Feb 9 18:58:46.023925 systemd[1]: Started systemd-networkd.service.
Feb 9 18:58:46.023944 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:58:46.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 18:58:46.025035 systemd-networkd[1014]: eth0: Link UP Feb 9 18:58:46.025046 systemd-networkd[1014]: eth0: Gained carrier Feb 9 18:58:46.009000 audit[1008]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5627d8d5c0b0 a1=32194 a2=7f0d8d6a8bc5 a3=5 items=108 ppid=1007 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:58:46.009000 audit: CWD cwd="/" Feb 9 18:58:46.009000 audit: PATH item=0 name=(null) inode=2064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=1 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=2 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=3 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=4 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=5 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=6 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=7 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=8 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=9 name=(null) inode=13898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=10 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=11 name=(null) inode=13899 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=12 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=13 name=(null) inode=13900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=14 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=15 name=(null) inode=13901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
18:58:46.009000 audit: PATH item=16 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=17 name=(null) inode=13902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=18 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=19 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=20 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=21 name=(null) inode=13904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=22 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=23 name=(null) inode=13905 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=24 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:58:46.009000 audit: PATH item=25 name=(null) 
inode=13906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=26 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=27 name=(null) inode=13907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=28 name=(null) inode=13903 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=29 name=(null) inode=13908 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=30 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=31 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=32 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=33 name=(null) inode=13910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=34 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=35 name=(null) inode=13911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=36 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=37 name=(null) inode=13912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=38 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=39 name=(null) inode=13913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=40 name=(null) inode=13909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=41 name=(null) inode=13914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=42 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=43 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=44 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=45 name=(null) inode=13916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=46 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=47 name=(null) inode=13917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=48 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=49 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=50 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=51 name=(null) inode=13919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=52 name=(null) inode=13915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=53 name=(null) inode=13920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=54 name=(null) inode=2064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=55 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=56 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=57 name=(null) inode=13922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=58 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=59 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=60 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=61 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=62 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=63 name=(null) inode=13925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=64 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=65 name=(null) inode=13926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=66 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=67 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=68 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=69 name=(null) inode=13928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=70 name=(null) inode=13924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=71 name=(null) inode=13929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=72 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=73 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=74 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=75 name=(null) inode=13931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=76 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=77 name=(null) inode=13932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=78 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=79 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=80 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=81 name=(null) inode=13934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=82 name=(null) inode=13930 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=83 name=(null) inode=13935 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=84 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=85 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=86 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=87 name=(null) inode=13937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=88 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=89 name=(null) inode=13938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=90 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=91 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=92 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=93 name=(null) inode=13940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=94 name=(null) inode=13936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=95 name=(null) inode=13941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=96 name=(null) inode=13921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=97 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=98 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=99 name=(null) inode=13943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=100 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=101 name=(null) inode=13944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=102 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=103 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=104 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=105 name=(null) inode=13946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=106 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PATH item=107 name=(null) inode=13947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:58:46.009000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 18:58:46.033725 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 9 18:58:46.038135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:58:46.038772 systemd-networkd[1014]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:58:46.053656 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 9 18:58:46.058647 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 18:58:46.108741 kernel: kvm: Nested Virtualization enabled
Feb 9 18:58:46.108835 kernel: SVM: kvm: Nested Paging enabled
Feb 9 18:58:46.108867 kernel: SVM: Virtual VMLOAD VMSAVE supported
Feb 9 18:58:46.108906 kernel: SVM: Virtual GIF supported
Feb 9 18:58:46.127668 kernel: EDAC MC: Ver: 3.0.0
Feb 9 18:58:46.150025 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 18:58:46.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.151909 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 18:58:46.159036 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:58:46.184706 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 18:58:46.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.185455 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:58:46.187046 systemd[1]: Starting lvm2-activation.service...
Feb 9 18:58:46.190595 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:58:46.215539 systemd[1]: Finished lvm2-activation.service.
Feb 9 18:58:46.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.216297 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:58:46.216928 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 18:58:46.216951 systemd[1]: Reached target local-fs.target.
Feb 9 18:58:46.217511 systemd[1]: Reached target machines.target.
Feb 9 18:58:46.219071 systemd[1]: Starting ldconfig.service...
Feb 9 18:58:46.219794 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 18:58:46.219839 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:58:46.220625 systemd[1]: Starting systemd-boot-update.service...
Feb 9 18:58:46.222053 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 18:58:46.224338 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 18:58:46.225806 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:58:46.225851 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:58:46.226872 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 18:58:46.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.230600 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl)
Feb 9 18:58:46.231497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 18:58:46.232577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 18:58:46.236334 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 18:58:46.238242 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 18:58:46.239666 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 18:58:46.262526 systemd-fsck[1053]: fsck.fat 4.2 (2021-01-31)
Feb 9 18:58:46.262526 systemd-fsck[1053]: /dev/vda1: 789 files, 115339/258078 clusters
Feb 9 18:58:46.267370 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 18:58:46.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.270087 systemd[1]: Mounting boot.mount...
Feb 9 18:58:46.277104 systemd[1]: Mounted boot.mount.
Feb 9 18:58:46.458547 systemd[1]: Finished systemd-boot-update.service.
Feb 9 18:58:46.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.473615 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 18:58:46.474189 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 18:58:46.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.511294 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 18:58:46.517330 systemd[1]: Finished ldconfig.service.
Feb 9 18:58:46.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.521252 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 18:58:46.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.523613 systemd[1]: Starting audit-rules.service...
Feb 9 18:58:46.525541 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 18:58:46.527071 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 18:58:46.528000 audit: BPF prog-id=27 op=LOAD
Feb 9 18:58:46.529232 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:58:46.531000 audit: BPF prog-id=28 op=LOAD
Feb 9 18:58:46.532477 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 18:58:46.534050 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 18:58:46.535187 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 18:58:46.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.536134 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 18:58:46.539000 audit[1067]: SYSTEM_BOOT pid=1067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.542318 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 18:58:46.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.545320 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 18:58:46.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.547455 systemd[1]: Starting systemd-update-done.service...
Feb 9 18:58:46.554247 systemd[1]: Finished systemd-update-done.service.
Feb 9 18:58:46.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:58:46.556000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 18:58:46.556000 audit[1078]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc35a17c80 a2=420 a3=0 items=0 ppid=1056 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:58:46.556000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 18:58:46.557848 augenrules[1078]: No rules
Feb 9 18:58:46.558345 systemd[1]: Finished audit-rules.service.
Feb 9 18:58:46.585809 systemd[1]: Started systemd-timesyncd.service.
Feb 9 18:58:46.586805 systemd[1]: Reached target time-set.target.
Feb 9 18:58:46.586860 systemd-resolved[1061]: Positive Trust Anchors:
Feb 9 18:58:46.586873 systemd-resolved[1061]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:58:46.586908 systemd-resolved[1061]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:58:47.104710 systemd-timesyncd[1066]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 18:58:47.104969 systemd-timesyncd[1066]: Initial clock synchronization to Fri 2024-02-09 18:58:47.104635 UTC.
Feb 9 18:58:47.110972 systemd-resolved[1061]: Defaulting to hostname 'linux'.
Feb 9 18:58:47.112374 systemd[1]: Started systemd-resolved.service.
Feb 9 18:58:47.113286 systemd[1]: Reached target network.target.
Feb 9 18:58:47.113924 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:58:47.114674 systemd[1]: Reached target sysinit.target.
Feb 9 18:58:47.115436 systemd[1]: Started motdgen.path.
Feb 9 18:58:47.116085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 18:58:47.117214 systemd[1]: Started logrotate.timer.
Feb 9 18:58:47.118019 systemd[1]: Started mdadm.timer.
Feb 9 18:58:47.118659 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 18:58:47.119481 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 18:58:47.119513 systemd[1]: Reached target paths.target.
Feb 9 18:58:47.120233 systemd[1]: Reached target timers.target.
Feb 9 18:58:47.121353 systemd[1]: Listening on dbus.socket.
Feb 9 18:58:47.123075 systemd[1]: Starting docker.socket...
Feb 9 18:58:47.126085 systemd[1]: Listening on sshd.socket.
Feb 9 18:58:47.126753 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:58:47.127123 systemd[1]: Listening on docker.socket.
Feb 9 18:58:47.127772 systemd[1]: Reached target sockets.target.
Feb 9 18:58:47.128378 systemd[1]: Reached target basic.target.
Feb 9 18:58:47.128978 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:58:47.128998 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:58:47.129841 systemd[1]: Starting containerd.service...
Feb 9 18:58:47.131221 systemd[1]: Starting dbus.service...
Feb 9 18:58:47.132519 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 18:58:47.134093 systemd[1]: Starting extend-filesystems.service...
Feb 9 18:58:47.135006 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 18:58:47.135993 systemd[1]: Starting motdgen.service...
Feb 9 18:58:47.137968 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 18:58:47.139575 jq[1088]: false
Feb 9 18:58:47.139640 systemd[1]: Starting prepare-critools.service...
Feb 9 18:58:47.141014 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 18:58:47.142932 systemd[1]: Starting sshd-keygen.service...
Feb 9 18:58:47.145553 systemd[1]: Starting systemd-logind.service...
Feb 9 18:58:47.146350 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:58:47.146388 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 18:58:47.146720 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 18:58:47.148186 systemd[1]: Starting update-engine.service...
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found sr0
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda1
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda2
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda3
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found usr
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda4
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda6
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda7
Feb 9 18:58:47.149401 extend-filesystems[1089]: Found vda9
Feb 9 18:58:47.149401 extend-filesystems[1089]: Checking size of /dev/vda9
Feb 9 18:58:47.149910 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 18:58:47.158179 dbus-daemon[1087]: [system] SELinux support is enabled
Feb 9 18:58:47.152416 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 18:58:47.168503 jq[1108]: true
Feb 9 18:58:47.152586 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 18:58:47.153805 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 18:58:47.168781 tar[1110]: ./
Feb 9 18:58:47.168781 tar[1110]: ./loopback
Feb 9 18:58:47.153939 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 18:58:47.169024 tar[1111]: crictl
Feb 9 18:58:47.158295 systemd[1]: Started dbus.service.
Feb 9 18:58:47.169313 jq[1113]: true
Feb 9 18:58:47.160738 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 18:58:47.160762 systemd[1]: Reached target system-config.target.
Feb 9 18:58:47.161680 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 18:58:47.161691 systemd[1]: Reached target user-config.target.
Feb 9 18:58:47.166162 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 18:58:47.166304 systemd[1]: Finished motdgen.service.
Feb 9 18:58:47.197511 update_engine[1103]: I0209 18:58:47.193631 1103 main.cc:92] Flatcar Update Engine starting
Feb 9 18:58:47.197761 extend-filesystems[1089]: Resized partition /dev/vda9
Feb 9 18:58:47.199656 extend-filesystems[1140]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 18:58:47.204030 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 18:58:47.200047 systemd[1]: Started update-engine.service.
Feb 9 18:58:47.204141 update_engine[1103]: I0209 18:58:47.200973 1103 update_check_scheduler.cc:74] Next update check in 9m20s
Feb 9 18:58:47.203314 systemd[1]: Started locksmithd.service.
Feb 9 18:58:47.217639 systemd-logind[1100]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 18:58:47.217866 systemd-logind[1100]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 18:58:47.218775 systemd-logind[1100]: New seat seat0.
Feb 9 18:58:47.223074 env[1114]: time="2024-02-09T18:58:47.221475574Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 18:58:47.225067 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 18:58:47.227396 systemd[1]: Started systemd-logind.service.
Feb 9 18:58:47.247423 env[1114]: time="2024-02-09T18:58:47.242475138Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 18:58:47.251469 extend-filesystems[1140]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 18:58:47.251469 extend-filesystems[1140]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 18:58:47.251469 extend-filesystems[1140]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 18:58:47.256021 extend-filesystems[1089]: Resized filesystem in /dev/vda9
Feb 9 18:58:47.256754 tar[1110]: ./bandwidth
Feb 9 18:58:47.251643 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 18:58:47.256864 bash[1139]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 18:58:47.251819 systemd[1]: Finished extend-filesystems.service.
Feb 9 18:58:47.256761 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 18:58:47.260958 locksmithd[1142]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 18:58:47.261303 env[1114]: time="2024-02-09T18:58:47.261269788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:58:47.264193 env[1114]: time="2024-02-09T18:58:47.264170918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:58:47.264281 env[1114]: time="2024-02-09T18:58:47.264260847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:58:47.264529 env[1114]: time="2024-02-09T18:58:47.264510465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:58:47.264610 env[1114]: time="2024-02-09T18:58:47.264592579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 18:58:47.264690 env[1114]: time="2024-02-09T18:58:47.264670675Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 18:58:47.264759 env[1114]: time="2024-02-09T18:58:47.264741658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 18:58:47.264888 env[1114]: time="2024-02-09T18:58:47.264871031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:58:47.265163 env[1114]: time="2024-02-09T18:58:47.265146818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:58:47.265338 env[1114]: time="2024-02-09T18:58:47.265320303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:58:47.265418 env[1114]: time="2024-02-09T18:58:47.265397919Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 18:58:47.265533 env[1114]: time="2024-02-09T18:58:47.265514728Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 18:58:47.265604 env[1114]: time="2024-02-09T18:58:47.265587143Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 18:58:47.272374 env[1114]: time="2024-02-09T18:58:47.272326820Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 18:58:47.272426 env[1114]: time="2024-02-09T18:58:47.272382735Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 18:58:47.272426 env[1114]: time="2024-02-09T18:58:47.272400278Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 18:58:47.272468 env[1114]: time="2024-02-09T18:58:47.272438900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 18:58:47.272468 env[1114]: time="2024-02-09T18:58:47.272457806Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 18:58:47.272506 env[1114]: time="2024-02-09T18:58:47.272475198Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 18:58:47.272506 env[1114]: time="2024-02-09T18:58:47.272491359Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..."
type=io.containerd.service.v1 Feb 9 18:58:47.272548 env[1114]: time="2024-02-09T18:58:47.272507649Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:58:47.272548 env[1114]: time="2024-02-09T18:58:47.272523168Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:58:47.272548 env[1114]: time="2024-02-09T18:58:47.272538237Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:58:47.272604 env[1114]: time="2024-02-09T18:58:47.272552574Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:58:47.272604 env[1114]: time="2024-02-09T18:58:47.272568874Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:58:47.272733 env[1114]: time="2024-02-09T18:58:47.272711181Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:58:47.272809 env[1114]: time="2024-02-09T18:58:47.272787624Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:58:47.273055 env[1114]: time="2024-02-09T18:58:47.273023056Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:58:47.273097 env[1114]: time="2024-02-09T18:58:47.273070845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273097 env[1114]: time="2024-02-09T18:58:47.273091454Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:58:47.273157 env[1114]: time="2024-02-09T18:58:47.273135797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 18:58:47.273157 env[1114]: time="2024-02-09T18:58:47.273151657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273229 env[1114]: time="2024-02-09T18:58:47.273163068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273229 env[1114]: time="2024-02-09T18:58:47.273173488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273229 env[1114]: time="2024-02-09T18:58:47.273184108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273229 env[1114]: time="2024-02-09T18:58:47.273194708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273229 env[1114]: time="2024-02-09T18:58:47.273205718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273229 env[1114]: time="2024-02-09T18:58:47.273216108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273363 env[1114]: time="2024-02-09T18:58:47.273230224Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:58:47.273363 env[1114]: time="2024-02-09T18:58:47.273332145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273363 env[1114]: time="2024-02-09T18:58:47.273345370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273363 env[1114]: time="2024-02-09T18:58:47.273355980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 18:58:47.273443 env[1114]: time="2024-02-09T18:58:47.273365378Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:58:47.273443 env[1114]: time="2024-02-09T18:58:47.273379163Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:58:47.273443 env[1114]: time="2024-02-09T18:58:47.273388892Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:58:47.273443 env[1114]: time="2024-02-09T18:58:47.273406074Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:58:47.273443 env[1114]: time="2024-02-09T18:58:47.273439206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:58:47.273662 env[1114]: time="2024-02-09T18:58:47.273610086Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:58:47.273662 env[1114]: time="2024-02-09T18:58:47.273662675Z" level=info msg="Connect containerd service" Feb 9 18:58:47.274445 env[1114]: time="2024-02-09T18:58:47.273691860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:58:47.274445 env[1114]: time="2024-02-09T18:58:47.274394557Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274592839Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274578081Z" level=info msg="Start subscribing containerd event" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274627013Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274634637Z" level=info msg="Start recovering state" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274687507Z" level=info msg="Start event monitor" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274696704Z" level=info msg="Start snapshots syncer" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274704859Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274711762Z" level=info msg="Start streaming server" Feb 9 18:58:47.277048 env[1114]: time="2024-02-09T18:58:47.274883284Z" level=info msg="containerd successfully booted in 0.065262s" Feb 9 18:58:47.274733 systemd[1]: Started containerd.service. Feb 9 18:58:47.297708 tar[1110]: ./ptp Feb 9 18:58:47.327828 tar[1110]: ./vlan Feb 9 18:58:47.356903 tar[1110]: ./host-device Feb 9 18:58:47.385809 tar[1110]: ./tuning Feb 9 18:58:47.412827 tar[1110]: ./vrf Feb 9 18:58:47.441426 tar[1110]: ./sbr Feb 9 18:58:47.469278 tar[1110]: ./tap Feb 9 18:58:47.499755 tar[1110]: ./dhcp Feb 9 18:58:47.555619 sshd_keygen[1118]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:58:47.575757 systemd[1]: Finished sshd-keygen.service. Feb 9 18:58:47.578022 systemd[1]: Starting issuegen.service... Feb 9 18:58:47.579220 tar[1110]: ./static Feb 9 18:58:47.583188 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:58:47.583393 systemd[1]: Finished issuegen.service. Feb 9 18:58:47.585363 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:58:47.591476 systemd[1]: Finished systemd-user-sessions.service. 
Feb 9 18:58:47.593268 systemd[1]: Started getty@tty1.service. Feb 9 18:58:47.594853 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 18:58:47.595665 systemd[1]: Reached target getty.target. Feb 9 18:58:47.604824 tar[1110]: ./firewall Feb 9 18:58:47.636777 systemd[1]: Finished prepare-critools.service. Feb 9 18:58:47.640519 tar[1110]: ./macvlan Feb 9 18:58:47.669517 tar[1110]: ./dummy Feb 9 18:58:47.683156 systemd-networkd[1014]: eth0: Gained IPv6LL Feb 9 18:58:47.697959 tar[1110]: ./bridge Feb 9 18:58:47.729100 tar[1110]: ./ipvlan Feb 9 18:58:47.757765 tar[1110]: ./portmap Feb 9 18:58:47.784911 tar[1110]: ./host-local Feb 9 18:58:47.816658 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:58:47.817527 systemd[1]: Reached target multi-user.target. Feb 9 18:58:47.819020 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:58:47.825096 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:58:47.825204 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:58:47.825944 systemd[1]: Startup finished in 558ms (kernel) + 4.990s (initrd) + 4.648s (userspace) = 10.197s. Feb 9 18:58:48.473162 systemd[1]: Created slice system-sshd.slice. Feb 9 18:58:48.474309 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:34828.service. Feb 9 18:58:48.505405 sshd[1172]: Accepted publickey for core from 10.0.0.1 port 34828 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:58:48.506726 sshd[1172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:58:48.514135 systemd-logind[1100]: New session 1 of user core. Feb 9 18:58:48.515047 systemd[1]: Created slice user-500.slice. Feb 9 18:58:48.516143 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:58:48.525922 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:58:48.527466 systemd[1]: Starting user@500.service... 
Feb 9 18:58:48.530253 (systemd)[1175]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:58:48.594349 systemd[1175]: Queued start job for default target default.target. Feb 9 18:58:48.594768 systemd[1175]: Reached target paths.target. Feb 9 18:58:48.594786 systemd[1175]: Reached target sockets.target. Feb 9 18:58:48.594797 systemd[1175]: Reached target timers.target. Feb 9 18:58:48.594807 systemd[1175]: Reached target basic.target. Feb 9 18:58:48.594841 systemd[1175]: Reached target default.target. Feb 9 18:58:48.594867 systemd[1175]: Startup finished in 59ms. Feb 9 18:58:48.594930 systemd[1]: Started user@500.service. Feb 9 18:58:48.595802 systemd[1]: Started session-1.scope. Feb 9 18:58:48.647585 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:50534.service. Feb 9 18:58:48.678180 sshd[1184]: Accepted publickey for core from 10.0.0.1 port 50534 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:58:48.679321 sshd[1184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:58:48.682469 systemd-logind[1100]: New session 2 of user core. Feb 9 18:58:48.683319 systemd[1]: Started session-2.scope. Feb 9 18:58:48.735021 sshd[1184]: pam_unix(sshd:session): session closed for user core Feb 9 18:58:48.737336 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:50534.service: Deactivated successfully. Feb 9 18:58:48.737884 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:58:48.738386 systemd-logind[1100]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:58:48.739473 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:50536.service. Feb 9 18:58:48.740117 systemd-logind[1100]: Removed session 2. 
Feb 9 18:58:48.767239 sshd[1190]: Accepted publickey for core from 10.0.0.1 port 50536 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:58:48.768133 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:58:48.771128 systemd-logind[1100]: New session 3 of user core. Feb 9 18:58:48.771832 systemd[1]: Started session-3.scope. Feb 9 18:58:48.818837 sshd[1190]: pam_unix(sshd:session): session closed for user core Feb 9 18:58:48.820970 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:50536.service: Deactivated successfully. Feb 9 18:58:48.821451 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:58:48.821855 systemd-logind[1100]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:58:48.822613 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:50552.service. Feb 9 18:58:48.823114 systemd-logind[1100]: Removed session 3. Feb 9 18:58:48.852368 sshd[1196]: Accepted publickey for core from 10.0.0.1 port 50552 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:58:48.853174 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:58:48.855928 systemd-logind[1100]: New session 4 of user core. Feb 9 18:58:48.856627 systemd[1]: Started session-4.scope. Feb 9 18:58:48.907703 sshd[1196]: pam_unix(sshd:session): session closed for user core Feb 9 18:58:48.910517 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:50552.service: Deactivated successfully. Feb 9 18:58:48.911139 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:58:48.911670 systemd-logind[1100]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:58:48.912749 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:50556.service. Feb 9 18:58:48.913403 systemd-logind[1100]: Removed session 4. 
Feb 9 18:58:48.941242 sshd[1203]: Accepted publickey for core from 10.0.0.1 port 50556 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc Feb 9 18:58:48.942292 sshd[1203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:58:48.945463 systemd-logind[1100]: New session 5 of user core. Feb 9 18:58:48.946437 systemd[1]: Started session-5.scope. Feb 9 18:58:48.998973 sudo[1206]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:58:48.999140 sudo[1206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:58:49.498879 systemd[1]: Reloading. Feb 9 18:58:49.558694 /usr/lib/systemd/system-generators/torcx-generator[1235]: time="2024-02-09T18:58:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:58:49.558720 /usr/lib/systemd/system-generators/torcx-generator[1235]: time="2024-02-09T18:58:49Z" level=info msg="torcx already run" Feb 9 18:58:49.613435 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:58:49.613452 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:58:49.631716 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:58:49.697342 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:58:50.564499 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:58:50.564927 systemd[1]: Reached target network-online.target. 
Feb 9 18:58:50.566151 systemd[1]: Started kubelet.service. Feb 9 18:58:50.575123 systemd[1]: Starting coreos-metadata.service... Feb 9 18:58:50.580433 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 18:58:50.580573 systemd[1]: Finished coreos-metadata.service. Feb 9 18:58:50.612775 kubelet[1276]: E0209 18:58:50.612716 1276 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:58:50.614538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:58:50.614644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:58:50.728979 systemd[1]: Stopped kubelet.service. Feb 9 18:58:50.740519 systemd[1]: Reloading. Feb 9 18:58:50.805463 /usr/lib/systemd/system-generators/torcx-generator[1345]: time="2024-02-09T18:58:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:58:50.805493 /usr/lib/systemd/system-generators/torcx-generator[1345]: time="2024-02-09T18:58:50Z" level=info msg="torcx already run" Feb 9 18:58:50.857189 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:58:50.857202 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 18:58:50.875412 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:58:50.942650 systemd[1]: Started kubelet.service. Feb 9 18:58:50.978298 kubelet[1386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:58:50.978298 kubelet[1386]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:58:50.978298 kubelet[1386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:58:50.978707 kubelet[1386]: I0209 18:58:50.978323 1386 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:58:51.351343 kubelet[1386]: I0209 18:58:51.351265 1386 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 18:58:51.351343 kubelet[1386]: I0209 18:58:51.351290 1386 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:58:51.351523 kubelet[1386]: I0209 18:58:51.351505 1386 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 18:58:51.353075 kubelet[1386]: I0209 18:58:51.353058 1386 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:58:51.356666 kubelet[1386]: I0209 18:58:51.356652 1386 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:58:51.356858 kubelet[1386]: I0209 18:58:51.356840 1386 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:58:51.356911 kubelet[1386]: I0209 18:58:51.356897 1386 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:58:51.357006 kubelet[1386]: I0209 18:58:51.356913 1386 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:58:51.357006 kubelet[1386]: I0209 18:58:51.356922 1386 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 18:58:51.357006 kubelet[1386]: I0209 18:58:51.356989 1386 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 18:58:51.360705 kubelet[1386]: I0209 18:58:51.360687 1386 kubelet.go:405] "Attempting to sync node with API server" Feb 9 18:58:51.360705 kubelet[1386]: I0209 18:58:51.360704 1386 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:58:51.360823 kubelet[1386]: I0209 18:58:51.360719 1386 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:58:51.360823 kubelet[1386]: I0209 18:58:51.360730 1386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:58:51.360823 kubelet[1386]: E0209 18:58:51.360797 1386 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:51.360892 kubelet[1386]: E0209 18:58:51.360880 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:51.361311 kubelet[1386]: I0209 18:58:51.361295 1386 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:58:51.361559 kubelet[1386]: W0209 18:58:51.361540 1386 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 18:58:51.361885 kubelet[1386]: I0209 18:58:51.361871 1386 server.go:1168] "Started kubelet" Feb 9 18:58:51.361956 kubelet[1386]: I0209 18:58:51.361939 1386 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:58:51.362059 kubelet[1386]: I0209 18:58:51.362047 1386 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:58:51.362644 kubelet[1386]: I0209 18:58:51.362627 1386 server.go:461] "Adding debug handlers to kubelet server" Feb 9 18:58:51.362744 kubelet[1386]: E0209 18:58:51.362719 1386 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:58:51.362799 kubelet[1386]: E0209 18:58:51.362747 1386 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:58:51.364145 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:58:51.364230 kubelet[1386]: I0209 18:58:51.364212 1386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:58:51.364496 kubelet[1386]: I0209 18:58:51.364484 1386 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 18:58:51.365617 kubelet[1386]: I0209 18:58:51.365601 1386 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 18:58:51.366005 kubelet[1386]: E0209 18:58:51.365903 1386 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.120\" not found" Feb 9 18:58:51.382451 kubelet[1386]: I0209 18:58:51.382416 1386 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:58:51.382594 kubelet[1386]: I0209 18:58:51.382465 1386 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:58:51.382594 kubelet[1386]: I0209 18:58:51.382483 1386 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:58:51.382594 kubelet[1386]: E0209 18:58:51.382455 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec6fad9d3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 361851859, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 361851859, time.Local), Count:1, 
Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:58:51.382747 kubelet[1386]: W0209 18:58:51.382670 1386 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:58:51.382747 kubelet[1386]: E0209 18:58:51.382686 1386 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:58:51.382821 kubelet[1386]: W0209 18:58:51.382749 1386 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:58:51.382821 kubelet[1386]: E0209 18:58:51.382758 1386 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:58:51.382821 kubelet[1386]: W0209 18:58:51.382426 1386 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.120" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:58:51.382821 kubelet[1386]: E0209 18:58:51.382773 1386 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to 
watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.120" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:58:51.383134 kubelet[1386]: E0209 18:58:51.383066 1386 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.120\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 18:58:51.383508 kubelet[1386]: E0209 18:58:51.383176 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec708559b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 362735515, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 362735515, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.384165 kubelet[1386]: E0209 18:58:51.384115 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82c9153", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.120 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381887315, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381887315, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.384923 kubelet[1386]: E0209 18:58:51.384868 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82ca15f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.120 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381891423, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381891423, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.385653 kubelet[1386]: I0209 18:58:51.385636 1386 policy_none.go:49] "None policy: Start" Feb 9 18:58:51.385704 kubelet[1386]: E0209 18:58:51.385640 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82caa23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.120 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381893667, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381893667, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:58:51.386245 kubelet[1386]: I0209 18:58:51.386226 1386 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:58:51.386312 kubelet[1386]: I0209 18:58:51.386295 1386 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:58:51.390869 systemd[1]: Created slice kubepods.slice. Feb 9 18:58:51.393523 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 18:58:51.395456 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:58:51.400796 kubelet[1386]: I0209 18:58:51.400774 1386 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:58:51.400981 kubelet[1386]: I0209 18:58:51.400963 1386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:58:51.401776 kubelet[1386]: E0209 18:58:51.401468 1386 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.120\" not found" Feb 9 18:58:51.403046 kubelet[1386]: E0209 18:58:51.402965 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec95f02e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 401970401, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 401970401, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 9 18:58:51.430696 kubelet[1386]: I0209 18:58:51.430676 1386 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:58:51.431345 kubelet[1386]: I0209 18:58:51.431327 1386 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:58:51.431345 kubelet[1386]: I0209 18:58:51.431345 1386 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 18:58:51.431437 kubelet[1386]: I0209 18:58:51.431360 1386 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 18:58:51.431437 kubelet[1386]: E0209 18:58:51.431389 1386 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:58:51.432457 kubelet[1386]: W0209 18:58:51.432443 1386 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:58:51.432526 kubelet[1386]: E0209 18:58:51.432465 1386 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:58:51.467389 kubelet[1386]: I0209 18:58:51.467376 1386 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.120" Feb 9 18:58:51.468256 kubelet[1386]: E0209 18:58:51.468235 1386 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.120" Feb 9 18:58:51.468659 kubelet[1386]: E0209 18:58:51.468605 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82c9153", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.120 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381887315, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 467344698, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82c9153" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.469298 kubelet[1386]: E0209 18:58:51.469251 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82ca15f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.120 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381891423, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 467351140, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82ca15f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.469950 kubelet[1386]: E0209 18:58:51.469904 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82caa23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.120 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381893667, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 467353514, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82caa23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.584122 kubelet[1386]: E0209 18:58:51.584109 1386 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.120\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 18:58:51.669069 kubelet[1386]: I0209 18:58:51.668960 1386 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.120" Feb 9 18:58:51.670084 kubelet[1386]: E0209 18:58:51.670003 1386 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.120" Feb 9 18:58:51.671050 kubelet[1386]: E0209 18:58:51.670937 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82c9153", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.120 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381887315, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 668922636, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82c9153" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:58:51.671842 kubelet[1386]: E0209 18:58:51.671778 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82ca15f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.120 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381891423, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 668932845, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82ca15f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.672838 kubelet[1386]: E0209 18:58:51.672788 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82caa23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.120 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381893667, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 668936362, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82caa23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:51.985409 kubelet[1386]: E0209 18:58:51.985298 1386 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.120\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 18:58:52.071511 kubelet[1386]: I0209 18:58:52.071466 1386 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.120" Feb 9 18:58:52.072545 kubelet[1386]: E0209 18:58:52.072519 1386 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.120" Feb 9 18:58:52.072609 kubelet[1386]: E0209 18:58:52.072523 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82c9153", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.120 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381887315, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 52, 71406483, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82c9153" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:58:52.073729 kubelet[1386]: E0209 18:58:52.073655 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82ca15f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.120 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381891423, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 52, 71424377, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82ca15f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:52.074382 kubelet[1386]: E0209 18:58:52.074342 1386 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.120.17b246dec82caa23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.120", UID:"10.0.0.120", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.120 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.120"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 58, 51, 381893667, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 58, 52, 71431851, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.120.17b246dec82caa23" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:58:52.353579 kubelet[1386]: I0209 18:58:52.353439 1386 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 18:58:52.361543 kubelet[1386]: E0209 18:58:52.361513 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:52.740419 kubelet[1386]: E0209 18:58:52.740301 1386 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.120" not found Feb 9 18:58:52.789289 kubelet[1386]: E0209 18:58:52.789237 1386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.120\" not found" node="10.0.0.120" Feb 9 18:58:52.874079 kubelet[1386]: I0209 18:58:52.874054 1386 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.120" Feb 9 18:58:52.877395 kubelet[1386]: I0209 18:58:52.877366 1386 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.120" Feb 9 18:58:52.926582 kubelet[1386]: I0209 18:58:52.926553 1386 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 18:58:52.927064 env[1114]: time="2024-02-09T18:58:52.926984502Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:58:52.927357 kubelet[1386]: I0209 18:58:52.927224 1386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 18:58:53.138589 sudo[1206]: pam_unix(sudo:session): session closed for user root Feb 9 18:58:53.139998 sshd[1203]: pam_unix(sshd:session): session closed for user core Feb 9 18:58:53.142776 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:50556.service: Deactivated successfully. Feb 9 18:58:53.143500 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:58:53.144284 systemd-logind[1100]: Session 5 logged out. 
Waiting for processes to exit. Feb 9 18:58:53.144993 systemd-logind[1100]: Removed session 5. Feb 9 18:58:53.361874 kubelet[1386]: E0209 18:58:53.361848 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:53.361874 kubelet[1386]: I0209 18:58:53.361864 1386 apiserver.go:52] "Watching apiserver" Feb 9 18:58:53.364005 kubelet[1386]: I0209 18:58:53.363980 1386 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:58:53.364065 kubelet[1386]: I0209 18:58:53.364060 1386 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:58:53.366166 kubelet[1386]: I0209 18:58:53.366154 1386 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 18:58:53.368246 systemd[1]: Created slice kubepods-besteffort-pod863c82c1_92df_422c_931e_e3b6dd0c3604.slice. Feb 9 18:58:53.375061 kubelet[1386]: I0209 18:58:53.375009 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/863c82c1-92df-422c-931e-e3b6dd0c3604-kube-proxy\") pod \"kube-proxy-fv9hq\" (UID: \"863c82c1-92df-422c-931e-e3b6dd0c3604\") " pod="kube-system/kube-proxy-fv9hq" Feb 9 18:58:53.375061 kubelet[1386]: I0209 18:58:53.375054 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-lib-modules\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375212 kubelet[1386]: I0209 18:58:53.375077 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b58da30-253a-48a8-84ec-f32c04b4029a-clustermesh-secrets\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" 
Feb 9 18:58:53.375212 kubelet[1386]: I0209 18:58:53.375110 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9vr\" (UniqueName: \"kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-kube-api-access-ck9vr\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375212 kubelet[1386]: I0209 18:58:53.375130 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-xtables-lock\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375212 kubelet[1386]: I0209 18:58:53.375147 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-net\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375212 kubelet[1386]: I0209 18:58:53.375163 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-kernel\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375381 kubelet[1386]: I0209 18:58:53.375178 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-hubble-tls\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375381 kubelet[1386]: I0209 18:58:53.375193 1386 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/863c82c1-92df-422c-931e-e3b6dd0c3604-xtables-lock\") pod \"kube-proxy-fv9hq\" (UID: \"863c82c1-92df-422c-931e-e3b6dd0c3604\") " pod="kube-system/kube-proxy-fv9hq" Feb 9 18:58:53.375497 kubelet[1386]: I0209 18:58:53.375478 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/863c82c1-92df-422c-931e-e3b6dd0c3604-lib-modules\") pod \"kube-proxy-fv9hq\" (UID: \"863c82c1-92df-422c-931e-e3b6dd0c3604\") " pod="kube-system/kube-proxy-fv9hq" Feb 9 18:58:53.375540 kubelet[1386]: I0209 18:58:53.375509 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-bpf-maps\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375540 kubelet[1386]: I0209 18:58:53.375533 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-hostproc\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375595 kubelet[1386]: I0209 18:58:53.375569 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd9fc\" (UniqueName: \"kubernetes.io/projected/863c82c1-92df-422c-931e-e3b6dd0c3604-kube-api-access-hd9fc\") pod \"kube-proxy-fv9hq\" (UID: \"863c82c1-92df-422c-931e-e3b6dd0c3604\") " pod="kube-system/kube-proxy-fv9hq" Feb 9 18:58:53.375627 kubelet[1386]: I0209 18:58:53.375601 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-etc-cni-netd\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375627 kubelet[1386]: I0209 18:58:53.375617 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-config-path\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375685 kubelet[1386]: I0209 18:58:53.375643 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-run\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375685 kubelet[1386]: I0209 18:58:53.375658 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-cgroup\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375685 kubelet[1386]: I0209 18:58:53.375676 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cni-path\") pod \"cilium-28j6l\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") " pod="kube-system/cilium-28j6l" Feb 9 18:58:53.375773 kubelet[1386]: I0209 18:58:53.375687 1386 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:58:53.376018 systemd[1]: Created slice kubepods-burstable-pod1b58da30_253a_48a8_84ec_f32c04b4029a.slice. 
Feb 9 18:58:53.674077 kubelet[1386]: E0209 18:58:53.674025 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:58:53.674800 env[1114]: time="2024-02-09T18:58:53.674760151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fv9hq,Uid:863c82c1-92df-422c-931e-e3b6dd0c3604,Namespace:kube-system,Attempt:0,}" Feb 9 18:58:53.684290 kubelet[1386]: E0209 18:58:53.684253 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:58:53.684726 env[1114]: time="2024-02-09T18:58:53.684690450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28j6l,Uid:1b58da30-253a-48a8-84ec-f32c04b4029a,Namespace:kube-system,Attempt:0,}" Feb 9 18:58:54.362134 kubelet[1386]: E0209 18:58:54.362089 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:54.616005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516821922.mount: Deactivated successfully. 
Feb 9 18:58:54.622954 env[1114]: time="2024-02-09T18:58:54.622909741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.623743 env[1114]: time="2024-02-09T18:58:54.623717776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.626147 env[1114]: time="2024-02-09T18:58:54.626102418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.627066 env[1114]: time="2024-02-09T18:58:54.627027793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.628448 env[1114]: time="2024-02-09T18:58:54.628420213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.630560 env[1114]: time="2024-02-09T18:58:54.630541831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.631784 env[1114]: time="2024-02-09T18:58:54.631763061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.633066 env[1114]: time="2024-02-09T18:58:54.633021391Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:54.650378 env[1114]: time="2024-02-09T18:58:54.650323542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:54.650511 env[1114]: time="2024-02-09T18:58:54.650424782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:54.650511 env[1114]: time="2024-02-09T18:58:54.650447645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:54.650740 env[1114]: time="2024-02-09T18:58:54.650689388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197 pid=1447 runtime=io.containerd.runc.v2 Feb 9 18:58:54.653064 env[1114]: time="2024-02-09T18:58:54.652988930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:54.653118 env[1114]: time="2024-02-09T18:58:54.653070302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:54.653118 env[1114]: time="2024-02-09T18:58:54.653084509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:54.653273 env[1114]: time="2024-02-09T18:58:54.653227677Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09388ac9d17306fd245ee092526c4079dc13eb1f30000f7c64ed41595ce539a9 pid=1457 runtime=io.containerd.runc.v2 Feb 9 18:58:54.664972 systemd[1]: Started cri-containerd-09388ac9d17306fd245ee092526c4079dc13eb1f30000f7c64ed41595ce539a9.scope. Feb 9 18:58:54.671278 systemd[1]: Started cri-containerd-301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197.scope. Feb 9 18:58:54.688253 env[1114]: time="2024-02-09T18:58:54.688116253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fv9hq,Uid:863c82c1-92df-422c-931e-e3b6dd0c3604,Namespace:kube-system,Attempt:0,} returns sandbox id \"09388ac9d17306fd245ee092526c4079dc13eb1f30000f7c64ed41595ce539a9\"" Feb 9 18:58:54.688967 kubelet[1386]: E0209 18:58:54.688941 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:58:54.691059 env[1114]: time="2024-02-09T18:58:54.690050821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 18:58:54.691620 env[1114]: time="2024-02-09T18:58:54.691586520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28j6l,Uid:1b58da30-253a-48a8-84ec-f32c04b4029a,Namespace:kube-system,Attempt:0,} returns sandbox id \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\"" Feb 9 18:58:54.692010 kubelet[1386]: E0209 18:58:54.691987 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:58:55.362757 kubelet[1386]: E0209 18:58:55.362693 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 18:58:55.816011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810859554.mount: Deactivated successfully. Feb 9 18:58:56.363625 kubelet[1386]: E0209 18:58:56.363578 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:57.102354 env[1114]: time="2024-02-09T18:58:57.102310407Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:57.206587 env[1114]: time="2024-02-09T18:58:57.206539316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:57.338026 env[1114]: time="2024-02-09T18:58:57.337972333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:57.364227 kubelet[1386]: E0209 18:58:57.364155 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:57.412809 env[1114]: time="2024-02-09T18:58:57.412782672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:57.413178 env[1114]: time="2024-02-09T18:58:57.413161132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 18:58:57.413749 env[1114]: time="2024-02-09T18:58:57.413726682Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:58:57.414579 env[1114]: time="2024-02-09T18:58:57.414555466Z" level=info msg="CreateContainer within sandbox \"09388ac9d17306fd245ee092526c4079dc13eb1f30000f7c64ed41595ce539a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:58:58.273850 env[1114]: time="2024-02-09T18:58:58.273786184Z" level=info msg="CreateContainer within sandbox \"09388ac9d17306fd245ee092526c4079dc13eb1f30000f7c64ed41595ce539a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be35dc60ef31f535bd88ccb8cded8855a471c7bb2570f2acb79101d151a10934\"" Feb 9 18:58:58.274460 env[1114]: time="2024-02-09T18:58:58.274436944Z" level=info msg="StartContainer for \"be35dc60ef31f535bd88ccb8cded8855a471c7bb2570f2acb79101d151a10934\"" Feb 9 18:58:58.292370 systemd[1]: Started cri-containerd-be35dc60ef31f535bd88ccb8cded8855a471c7bb2570f2acb79101d151a10934.scope. Feb 9 18:58:58.318328 env[1114]: time="2024-02-09T18:58:58.318269780Z" level=info msg="StartContainer for \"be35dc60ef31f535bd88ccb8cded8855a471c7bb2570f2acb79101d151a10934\" returns successfully" Feb 9 18:58:58.364760 kubelet[1386]: E0209 18:58:58.364712 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:58.442049 kubelet[1386]: E0209 18:58:58.442012 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:58:58.448437 kubelet[1386]: I0209 18:58:58.448422 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fv9hq" podStartSLOduration=3.724676043 podCreationTimestamp="2024-02-09 18:58:52 +0000 UTC" firstStartedPulling="2024-02-09 18:58:54.689711675 +0000 UTC m=+3.744698500" lastFinishedPulling="2024-02-09 18:58:57.413428784 +0000 UTC 
m=+6.468415609" observedRunningTime="2024-02-09 18:58:58.448298474 +0000 UTC m=+7.503285299" watchObservedRunningTime="2024-02-09 18:58:58.448393152 +0000 UTC m=+7.503380007" Feb 9 18:58:59.256804 systemd[1]: run-containerd-runc-k8s.io-be35dc60ef31f535bd88ccb8cded8855a471c7bb2570f2acb79101d151a10934-runc.Kracfe.mount: Deactivated successfully. Feb 9 18:58:59.365157 kubelet[1386]: E0209 18:58:59.365101 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:59.443263 kubelet[1386]: E0209 18:58:59.443234 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:00.365769 kubelet[1386]: E0209 18:59:00.365722 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:01.366604 kubelet[1386]: E0209 18:59:01.366575 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:02.367295 kubelet[1386]: E0209 18:59:02.367229 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:03.368069 kubelet[1386]: E0209 18:59:03.368025 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:04.010958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229559482.mount: Deactivated successfully. 
Feb 9 18:59:04.369033 kubelet[1386]: E0209 18:59:04.368928 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:05.369426 kubelet[1386]: E0209 18:59:05.369376 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:06.370406 kubelet[1386]: E0209 18:59:06.370370 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:07.370517 kubelet[1386]: E0209 18:59:07.370481 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:07.860104 env[1114]: time="2024-02-09T18:59:07.860055346Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:07.861739 env[1114]: time="2024-02-09T18:59:07.861694189Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:07.863330 env[1114]: time="2024-02-09T18:59:07.863293357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:07.863800 env[1114]: time="2024-02-09T18:59:07.863777365Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 18:59:07.865157 env[1114]: time="2024-02-09T18:59:07.865128007Z" level=info 
msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:59:07.876100 env[1114]: time="2024-02-09T18:59:07.876065094Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\"" Feb 9 18:59:07.876494 env[1114]: time="2024-02-09T18:59:07.876449264Z" level=info msg="StartContainer for \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\"" Feb 9 18:59:07.890007 systemd[1]: Started cri-containerd-667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc.scope. Feb 9 18:59:07.911910 env[1114]: time="2024-02-09T18:59:07.911853467Z" level=info msg="StartContainer for \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\" returns successfully" Feb 9 18:59:07.919558 systemd[1]: cri-containerd-667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc.scope: Deactivated successfully. 
Feb 9 18:59:08.245421 env[1114]: time="2024-02-09T18:59:08.245355163Z" level=info msg="shim disconnected" id=667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc Feb 9 18:59:08.245421 env[1114]: time="2024-02-09T18:59:08.245423802Z" level=warning msg="cleaning up after shim disconnected" id=667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc namespace=k8s.io Feb 9 18:59:08.245421 env[1114]: time="2024-02-09T18:59:08.245438820Z" level=info msg="cleaning up dead shim" Feb 9 18:59:08.251589 env[1114]: time="2024-02-09T18:59:08.251546822Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1724 runtime=io.containerd.runc.v2\n" Feb 9 18:59:08.371521 kubelet[1386]: E0209 18:59:08.371496 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:08.453386 kubelet[1386]: E0209 18:59:08.453357 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:08.454721 env[1114]: time="2024-02-09T18:59:08.454676530Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:59:08.467848 env[1114]: time="2024-02-09T18:59:08.467809104Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\"" Feb 9 18:59:08.468308 env[1114]: time="2024-02-09T18:59:08.468258216Z" level=info msg="StartContainer for \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\"" Feb 9 18:59:08.480751 systemd[1]: Started 
cri-containerd-e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146.scope. Feb 9 18:59:08.499549 env[1114]: time="2024-02-09T18:59:08.499446083Z" level=info msg="StartContainer for \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\" returns successfully" Feb 9 18:59:08.508072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:59:08.508299 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:59:08.508468 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:59:08.510025 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:59:08.510291 systemd[1]: cri-containerd-e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146.scope: Deactivated successfully. Feb 9 18:59:08.516613 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:59:08.528269 env[1114]: time="2024-02-09T18:59:08.528229372Z" level=info msg="shim disconnected" id=e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146 Feb 9 18:59:08.528439 env[1114]: time="2024-02-09T18:59:08.528272332Z" level=warning msg="cleaning up after shim disconnected" id=e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146 namespace=k8s.io Feb 9 18:59:08.528439 env[1114]: time="2024-02-09T18:59:08.528283053Z" level=info msg="cleaning up dead shim" Feb 9 18:59:08.534646 env[1114]: time="2024-02-09T18:59:08.534605587Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1786 runtime=io.containerd.runc.v2\n" Feb 9 18:59:08.871589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:09.371714 kubelet[1386]: E0209 18:59:09.371666 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:09.455864 kubelet[1386]: E0209 18:59:09.455841 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:09.457272 env[1114]: time="2024-02-09T18:59:09.457232222Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:59:09.470531 env[1114]: time="2024-02-09T18:59:09.470477988Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\"" Feb 9 18:59:09.470856 env[1114]: time="2024-02-09T18:59:09.470835799Z" level=info msg="StartContainer for \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\"" Feb 9 18:59:09.485481 systemd[1]: Started cri-containerd-c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd.scope. Feb 9 18:59:09.510627 env[1114]: time="2024-02-09T18:59:09.510574539Z" level=info msg="StartContainer for \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\" returns successfully" Feb 9 18:59:09.510725 systemd[1]: cri-containerd-c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd.scope: Deactivated successfully. 
Feb 9 18:59:09.529940 env[1114]: time="2024-02-09T18:59:09.529873965Z" level=info msg="shim disconnected" id=c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd Feb 9 18:59:09.529940 env[1114]: time="2024-02-09T18:59:09.529919721Z" level=warning msg="cleaning up after shim disconnected" id=c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd namespace=k8s.io Feb 9 18:59:09.529940 env[1114]: time="2024-02-09T18:59:09.529928808Z" level=info msg="cleaning up dead shim" Feb 9 18:59:09.537063 env[1114]: time="2024-02-09T18:59:09.537014353Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1842 runtime=io.containerd.runc.v2\n" Feb 9 18:59:09.871614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd-rootfs.mount: Deactivated successfully. Feb 9 18:59:10.372337 kubelet[1386]: E0209 18:59:10.372283 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:10.458231 kubelet[1386]: E0209 18:59:10.458196 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:10.459848 env[1114]: time="2024-02-09T18:59:10.459808933Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:59:10.472750 env[1114]: time="2024-02-09T18:59:10.472703500Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\"" Feb 9 18:59:10.473259 env[1114]: 
time="2024-02-09T18:59:10.473232081Z" level=info msg="StartContainer for \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\"" Feb 9 18:59:10.488949 systemd[1]: Started cri-containerd-76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a.scope. Feb 9 18:59:10.508741 systemd[1]: cri-containerd-76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a.scope: Deactivated successfully. Feb 9 18:59:10.509581 env[1114]: time="2024-02-09T18:59:10.509540670Z" level=info msg="StartContainer for \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\" returns successfully" Feb 9 18:59:10.527877 env[1114]: time="2024-02-09T18:59:10.527831044Z" level=info msg="shim disconnected" id=76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a Feb 9 18:59:10.527877 env[1114]: time="2024-02-09T18:59:10.527872993Z" level=warning msg="cleaning up after shim disconnected" id=76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a namespace=k8s.io Feb 9 18:59:10.527877 env[1114]: time="2024-02-09T18:59:10.527881469Z" level=info msg="cleaning up dead shim" Feb 9 18:59:10.533590 env[1114]: time="2024-02-09T18:59:10.533560928Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1895 runtime=io.containerd.runc.v2\n" Feb 9 18:59:10.871317 systemd[1]: run-containerd-runc-k8s.io-76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a-runc.nRRfnT.mount: Deactivated successfully. Feb 9 18:59:10.871402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:11.361368 kubelet[1386]: E0209 18:59:11.361292 1386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:11.372576 kubelet[1386]: E0209 18:59:11.372546 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:11.462359 kubelet[1386]: E0209 18:59:11.462317 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:11.464124 env[1114]: time="2024-02-09T18:59:11.464091843Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:59:11.478107 env[1114]: time="2024-02-09T18:59:11.478060775Z" level=info msg="CreateContainer within sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\"" Feb 9 18:59:11.478681 env[1114]: time="2024-02-09T18:59:11.478641855Z" level=info msg="StartContainer for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\"" Feb 9 18:59:11.493211 systemd[1]: Started cri-containerd-c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb.scope. 
Feb 9 18:59:11.517398 env[1114]: time="2024-02-09T18:59:11.517325276Z" level=info msg="StartContainer for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" returns successfully" Feb 9 18:59:11.597503 kubelet[1386]: I0209 18:59:11.597447 1386 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:59:11.793145 kernel: Initializing XFRM netlink socket Feb 9 18:59:11.871519 systemd[1]: run-containerd-runc-k8s.io-c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb-runc.vqTYuV.mount: Deactivated successfully. Feb 9 18:59:12.373173 kubelet[1386]: E0209 18:59:12.373130 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:12.466329 kubelet[1386]: E0209 18:59:12.466303 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:12.477867 kubelet[1386]: I0209 18:59:12.477831 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-28j6l" podStartSLOduration=7.306149445 podCreationTimestamp="2024-02-09 18:58:52 +0000 UTC" firstStartedPulling="2024-02-09 18:58:54.692311019 +0000 UTC m=+3.747297844" lastFinishedPulling="2024-02-09 18:59:07.863948115 +0000 UTC m=+16.918934950" observedRunningTime="2024-02-09 18:59:12.477765701 +0000 UTC m=+21.532752537" watchObservedRunningTime="2024-02-09 18:59:12.477786551 +0000 UTC m=+21.532773376" Feb 9 18:59:13.373623 kubelet[1386]: E0209 18:59:13.373566 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:13.397770 systemd-networkd[1014]: cilium_host: Link UP Feb 9 18:59:13.397913 systemd-networkd[1014]: cilium_net: Link UP Feb 9 18:59:13.399016 systemd-networkd[1014]: cilium_net: Gained carrier Feb 9 18:59:13.399622 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 18:59:13.399673 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:59:13.399778 systemd-networkd[1014]: cilium_host: Gained carrier Feb 9 18:59:13.468167 kubelet[1386]: E0209 18:59:13.468139 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:13.470253 systemd-networkd[1014]: cilium_vxlan: Link UP Feb 9 18:59:13.470260 systemd-networkd[1014]: cilium_vxlan: Gained carrier Feb 9 18:59:13.648067 kernel: NET: Registered PF_ALG protocol family Feb 9 18:59:13.731480 systemd-networkd[1014]: cilium_host: Gained IPv6LL Feb 9 18:59:14.140973 systemd-networkd[1014]: lxc_health: Link UP Feb 9 18:59:14.148440 systemd-networkd[1014]: lxc_health: Gained carrier Feb 9 18:59:14.149068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:59:14.180125 systemd-networkd[1014]: cilium_net: Gained IPv6LL Feb 9 18:59:14.374288 kubelet[1386]: E0209 18:59:14.374237 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:14.469729 kubelet[1386]: E0209 18:59:14.469629 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:14.499348 systemd-networkd[1014]: cilium_vxlan: Gained IPv6LL Feb 9 18:59:14.746769 kubelet[1386]: I0209 18:59:14.746715 1386 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:59:14.752018 systemd[1]: Created slice kubepods-besteffort-pod4bf691e8_e63d_40bb_8b5a_c77cce6715a4.slice. 
Feb 9 18:59:14.781027 kubelet[1386]: I0209 18:59:14.780990 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthss\" (UniqueName: \"kubernetes.io/projected/4bf691e8-e63d-40bb-8b5a-c77cce6715a4-kube-api-access-vthss\") pod \"nginx-deployment-845c78c8b9-5b9mq\" (UID: \"4bf691e8-e63d-40bb-8b5a-c77cce6715a4\") " pod="default/nginx-deployment-845c78c8b9-5b9mq"
Feb 9 18:59:15.054881 env[1114]: time="2024-02-09T18:59:15.054763963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-5b9mq,Uid:4bf691e8-e63d-40bb-8b5a-c77cce6715a4,Namespace:default,Attempt:0,}"
Feb 9 18:59:15.375245 kubelet[1386]: E0209 18:59:15.375124 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:15.616806 systemd-networkd[1014]: lxca968938daa62: Link UP
Feb 9 18:59:15.633110 kernel: eth0: renamed from tmp6558a
Feb 9 18:59:15.639478 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:59:15.639602 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca968938daa62: link becomes ready
Feb 9 18:59:15.639572 systemd-networkd[1014]: lxca968938daa62: Gained carrier
Feb 9 18:59:15.686183 kubelet[1386]: E0209 18:59:15.686145 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:59:16.035258 systemd-networkd[1014]: lxc_health: Gained IPv6LL
Feb 9 18:59:16.375395 kubelet[1386]: E0209 18:59:16.375278 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:16.472579 kubelet[1386]: E0209 18:59:16.472551 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:59:16.995260 systemd-networkd[1014]: lxca968938daa62: Gained IPv6LL
Feb 9 18:59:17.376289 kubelet[1386]: E0209 18:59:17.376176 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:17.474454 kubelet[1386]: E0209 18:59:17.474419 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 18:59:18.342121 env[1114]: time="2024-02-09T18:59:18.342032731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:59:18.342121 env[1114]: time="2024-02-09T18:59:18.342079729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:59:18.342121 env[1114]: time="2024-02-09T18:59:18.342090289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:59:18.342505 env[1114]: time="2024-02-09T18:59:18.342240691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6558aef8a46f7b97f8f15cd0bd88daf9336726e5e89722aa379ca14ad939c98c pid=2440 runtime=io.containerd.runc.v2
Feb 9 18:59:18.354230 systemd[1]: Started cri-containerd-6558aef8a46f7b97f8f15cd0bd88daf9336726e5e89722aa379ca14ad939c98c.scope.
Feb 9 18:59:18.362503 systemd-resolved[1061]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:59:18.377477 kubelet[1386]: E0209 18:59:18.377446 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:18.382191 env[1114]: time="2024-02-09T18:59:18.382154369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-5b9mq,Uid:4bf691e8-e63d-40bb-8b5a-c77cce6715a4,Namespace:default,Attempt:0,} returns sandbox id \"6558aef8a46f7b97f8f15cd0bd88daf9336726e5e89722aa379ca14ad939c98c\""
Feb 9 18:59:18.383270 env[1114]: time="2024-02-09T18:59:18.383250494Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 18:59:19.377960 kubelet[1386]: E0209 18:59:19.377883 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:20.378229 kubelet[1386]: E0209 18:59:20.378189 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:21.379320 kubelet[1386]: E0209 18:59:21.379274 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:21.679476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685947237.mount: Deactivated successfully.
Feb 9 18:59:22.380341 kubelet[1386]: E0209 18:59:22.380291 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:22.505257 env[1114]: time="2024-02-09T18:59:22.505216657Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:22.506833 env[1114]: time="2024-02-09T18:59:22.506791573Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:22.508247 env[1114]: time="2024-02-09T18:59:22.508218074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:22.509649 env[1114]: time="2024-02-09T18:59:22.509594169Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:22.510232 env[1114]: time="2024-02-09T18:59:22.510191704Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 18:59:22.511425 env[1114]: time="2024-02-09T18:59:22.511400097Z" level=info msg="CreateContainer within sandbox \"6558aef8a46f7b97f8f15cd0bd88daf9336726e5e89722aa379ca14ad939c98c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 18:59:22.522447 env[1114]: time="2024-02-09T18:59:22.522394652Z" level=info msg="CreateContainer within sandbox \"6558aef8a46f7b97f8f15cd0bd88daf9336726e5e89722aa379ca14ad939c98c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c877314fb931c5037d8e174ba7e7436bc0ba464421f136e2d1f0b36f9c4bd35e\""
Feb 9 18:59:22.522858 env[1114]: time="2024-02-09T18:59:22.522829536Z" level=info msg="StartContainer for \"c877314fb931c5037d8e174ba7e7436bc0ba464421f136e2d1f0b36f9c4bd35e\""
Feb 9 18:59:22.536205 systemd[1]: Started cri-containerd-c877314fb931c5037d8e174ba7e7436bc0ba464421f136e2d1f0b36f9c4bd35e.scope.
Feb 9 18:59:22.557364 env[1114]: time="2024-02-09T18:59:22.557096106Z" level=info msg="StartContainer for \"c877314fb931c5037d8e174ba7e7436bc0ba464421f136e2d1f0b36f9c4bd35e\" returns successfully"
Feb 9 18:59:23.380701 kubelet[1386]: E0209 18:59:23.380646 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:23.491870 kubelet[1386]: I0209 18:59:23.491835 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-5b9mq" podStartSLOduration=5.364418176 podCreationTimestamp="2024-02-09 18:59:14 +0000 UTC" firstStartedPulling="2024-02-09 18:59:18.383022406 +0000 UTC m=+27.438009231" lastFinishedPulling="2024-02-09 18:59:22.510405674 +0000 UTC m=+31.565392499" observedRunningTime="2024-02-09 18:59:23.491658702 +0000 UTC m=+32.546645527" watchObservedRunningTime="2024-02-09 18:59:23.491801444 +0000 UTC m=+32.546788269"
Feb 9 18:59:24.380976 kubelet[1386]: E0209 18:59:24.380917 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:25.381545 kubelet[1386]: E0209 18:59:25.381489 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:26.232975 kubelet[1386]: I0209 18:59:26.232939 1386 topology_manager.go:212] "Topology Admit Handler"
Feb 9 18:59:26.237796 systemd[1]: Created slice kubepods-besteffort-pod74648680_750e_4774_9000_06b464e62e20.slice.
Feb 9 18:59:26.336956 kubelet[1386]: I0209 18:59:26.336894 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshgx\" (UniqueName: \"kubernetes.io/projected/74648680-750e-4774-9000-06b464e62e20-kube-api-access-jshgx\") pod \"nfs-server-provisioner-0\" (UID: \"74648680-750e-4774-9000-06b464e62e20\") " pod="default/nfs-server-provisioner-0"
Feb 9 18:59:26.336956 kubelet[1386]: I0209 18:59:26.336958 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/74648680-750e-4774-9000-06b464e62e20-data\") pod \"nfs-server-provisioner-0\" (UID: \"74648680-750e-4774-9000-06b464e62e20\") " pod="default/nfs-server-provisioner-0"
Feb 9 18:59:26.382336 kubelet[1386]: E0209 18:59:26.382271 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:26.540786 env[1114]: time="2024-02-09T18:59:26.540684818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:74648680-750e-4774-9000-06b464e62e20,Namespace:default,Attempt:0,}"
Feb 9 18:59:26.940097 systemd-networkd[1014]: lxc67d6973fce5b: Link UP
Feb 9 18:59:26.946075 kernel: eth0: renamed from tmp6d277
Feb 9 18:59:26.950851 systemd-networkd[1014]: lxc67d6973fce5b: Gained carrier
Feb 9 18:59:26.951089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:59:26.951121 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc67d6973fce5b: link becomes ready
Feb 9 18:59:27.128857 env[1114]: time="2024-02-09T18:59:27.128785612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:59:27.128857 env[1114]: time="2024-02-09T18:59:27.128835025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:59:27.128857 env[1114]: time="2024-02-09T18:59:27.128849153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:59:27.129094 env[1114]: time="2024-02-09T18:59:27.129044314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d27793d6ea3345227e6d0022a6eecfa98a666a1c967c1aae0d640ad4cfb5d10 pid=2568 runtime=io.containerd.runc.v2
Feb 9 18:59:27.138367 systemd[1]: Started cri-containerd-6d27793d6ea3345227e6d0022a6eecfa98a666a1c967c1aae0d640ad4cfb5d10.scope.
Feb 9 18:59:27.147810 systemd-resolved[1061]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:59:27.167223 env[1114]: time="2024-02-09T18:59:27.167183970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:74648680-750e-4774-9000-06b464e62e20,Namespace:default,Attempt:0,} returns sandbox id \"6d27793d6ea3345227e6d0022a6eecfa98a666a1c967c1aae0d640ad4cfb5d10\""
Feb 9 18:59:27.168619 env[1114]: time="2024-02-09T18:59:27.168599856Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 18:59:27.382749 kubelet[1386]: E0209 18:59:27.382704 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:28.383235 kubelet[1386]: E0209 18:59:28.383184 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:28.387300 systemd-networkd[1014]: lxc67d6973fce5b: Gained IPv6LL
Feb 9 18:59:29.383633 kubelet[1386]: E0209 18:59:29.383589 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:29.485999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886086134.mount: Deactivated successfully.
Feb 9 18:59:30.383892 kubelet[1386]: E0209 18:59:30.383851 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:31.361391 kubelet[1386]: E0209 18:59:31.361334 1386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:31.384552 kubelet[1386]: E0209 18:59:31.384486 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:32.384755 kubelet[1386]: E0209 18:59:32.384720 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:32.452582 env[1114]: time="2024-02-09T18:59:32.452538241Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:32.454718 env[1114]: time="2024-02-09T18:59:32.454692035Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:32.456391 env[1114]: time="2024-02-09T18:59:32.456347504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:32.457832 env[1114]: time="2024-02-09T18:59:32.457788185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:32.458409 env[1114]: time="2024-02-09T18:59:32.458378615Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 9 18:59:32.459933 env[1114]: time="2024-02-09T18:59:32.459894359Z" level=info msg="CreateContainer within sandbox \"6d27793d6ea3345227e6d0022a6eecfa98a666a1c967c1aae0d640ad4cfb5d10\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 18:59:32.470421 env[1114]: time="2024-02-09T18:59:32.470390569Z" level=info msg="CreateContainer within sandbox \"6d27793d6ea3345227e6d0022a6eecfa98a666a1c967c1aae0d640ad4cfb5d10\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a84311efc4e81f82b234297018b6fc6f1fab74a9ef52c7799f31ec7fa21ea513\""
Feb 9 18:59:32.470771 env[1114]: time="2024-02-09T18:59:32.470750171Z" level=info msg="StartContainer for \"a84311efc4e81f82b234297018b6fc6f1fab74a9ef52c7799f31ec7fa21ea513\""
Feb 9 18:59:32.482615 systemd[1]: Started cri-containerd-a84311efc4e81f82b234297018b6fc6f1fab74a9ef52c7799f31ec7fa21ea513.scope.
Feb 9 18:59:32.511934 env[1114]: time="2024-02-09T18:59:32.511894650Z" level=info msg="StartContainer for \"a84311efc4e81f82b234297018b6fc6f1fab74a9ef52c7799f31ec7fa21ea513\" returns successfully"
Feb 9 18:59:32.628688 update_engine[1103]: I0209 18:59:32.628633 1103 update_attempter.cc:509] Updating boot flags...
Feb 9 18:59:33.385122 kubelet[1386]: E0209 18:59:33.385082 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:33.522837 kubelet[1386]: I0209 18:59:33.522807 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.232439326 podCreationTimestamp="2024-02-09 18:59:26 +0000 UTC" firstStartedPulling="2024-02-09 18:59:27.168247455 +0000 UTC m=+36.223234280" lastFinishedPulling="2024-02-09 18:59:32.458581028 +0000 UTC m=+41.513567853" observedRunningTime="2024-02-09 18:59:33.522421133 +0000 UTC m=+42.577407958" watchObservedRunningTime="2024-02-09 18:59:33.522772899 +0000 UTC m=+42.577759714"
Feb 9 18:59:34.386241 kubelet[1386]: E0209 18:59:34.386196 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:35.386814 kubelet[1386]: E0209 18:59:35.386768 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:36.387905 kubelet[1386]: E0209 18:59:36.387852 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:37.388173 kubelet[1386]: E0209 18:59:37.388131 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:38.388888 kubelet[1386]: E0209 18:59:38.388839 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:39.389356 kubelet[1386]: E0209 18:59:39.389295 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:40.390386 kubelet[1386]: E0209 18:59:40.390332 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:41.391072 kubelet[1386]: E0209 18:59:41.390995 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:42.160079 kubelet[1386]: I0209 18:59:42.160030 1386 topology_manager.go:212] "Topology Admit Handler"
Feb 9 18:59:42.164193 systemd[1]: Created slice kubepods-besteffort-pod02ff0605_a96d_45b0_8dc9_534dab010352.slice.
Feb 9 18:59:42.216869 kubelet[1386]: I0209 18:59:42.216841 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0de5659e-e125-40b7-8b45-9ef4073be75a\" (UniqueName: \"kubernetes.io/nfs/02ff0605-a96d-45b0-8dc9-534dab010352-pvc-0de5659e-e125-40b7-8b45-9ef4073be75a\") pod \"test-pod-1\" (UID: \"02ff0605-a96d-45b0-8dc9-534dab010352\") " pod="default/test-pod-1"
Feb 9 18:59:42.216869 kubelet[1386]: I0209 18:59:42.216872 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmcb\" (UniqueName: \"kubernetes.io/projected/02ff0605-a96d-45b0-8dc9-534dab010352-kube-api-access-hfmcb\") pod \"test-pod-1\" (UID: \"02ff0605-a96d-45b0-8dc9-534dab010352\") " pod="default/test-pod-1"
Feb 9 18:59:42.337062 kernel: FS-Cache: Loaded
Feb 9 18:59:42.370455 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 18:59:42.370500 kernel: RPC: Registered udp transport module.
Feb 9 18:59:42.370516 kernel: RPC: Registered tcp transport module.
Feb 9 18:59:42.371585 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 18:59:42.391789 kubelet[1386]: E0209 18:59:42.391744 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:42.410066 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 18:59:42.581466 kernel: NFS: Registering the id_resolver key type
Feb 9 18:59:42.581613 kernel: Key type id_resolver registered
Feb 9 18:59:42.581636 kernel: Key type id_legacy registered
Feb 9 18:59:42.601372 nfsidmap[2702]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 9 18:59:42.604186 nfsidmap[2705]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 9 18:59:42.767261 env[1114]: time="2024-02-09T18:59:42.767213969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:02ff0605-a96d-45b0-8dc9-534dab010352,Namespace:default,Attempt:0,}"
Feb 9 18:59:42.789570 systemd-networkd[1014]: lxc0fe71187068c: Link UP
Feb 9 18:59:42.796059 kernel: eth0: renamed from tmpb4fd1
Feb 9 18:59:42.804162 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:59:42.804214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0fe71187068c: link becomes ready
Feb 9 18:59:42.804016 systemd-networkd[1014]: lxc0fe71187068c: Gained carrier
Feb 9 18:59:43.038623 env[1114]: time="2024-02-09T18:59:43.038553360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:59:43.038981 env[1114]: time="2024-02-09T18:59:43.038596081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:59:43.038981 env[1114]: time="2024-02-09T18:59:43.038606902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:59:43.039161 env[1114]: time="2024-02-09T18:59:43.039048794Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4fd1507845e3c94b856edddddec4a593099db084d0385ba8f3394a22805990e pid=2740 runtime=io.containerd.runc.v2
Feb 9 18:59:43.048916 systemd[1]: Started cri-containerd-b4fd1507845e3c94b856edddddec4a593099db084d0385ba8f3394a22805990e.scope.
Feb 9 18:59:43.062772 systemd-resolved[1061]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 18:59:43.088132 env[1114]: time="2024-02-09T18:59:43.088081747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:02ff0605-a96d-45b0-8dc9-534dab010352,Namespace:default,Attempt:0,} returns sandbox id \"b4fd1507845e3c94b856edddddec4a593099db084d0385ba8f3394a22805990e\""
Feb 9 18:59:43.089634 env[1114]: time="2024-02-09T18:59:43.089587627Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 18:59:43.392710 kubelet[1386]: E0209 18:59:43.392599 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:43.531050 env[1114]: time="2024-02-09T18:59:43.530989807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:43.532517 env[1114]: time="2024-02-09T18:59:43.532466651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:43.533884 env[1114]: time="2024-02-09T18:59:43.533844058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:43.535433 env[1114]: time="2024-02-09T18:59:43.535407646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:59:43.536692 env[1114]: time="2024-02-09T18:59:43.536662963Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 18:59:43.537890 env[1114]: time="2024-02-09T18:59:43.537862785Z" level=info msg="CreateContainer within sandbox \"b4fd1507845e3c94b856edddddec4a593099db084d0385ba8f3394a22805990e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 18:59:43.551061 env[1114]: time="2024-02-09T18:59:43.551016765Z" level=info msg="CreateContainer within sandbox \"b4fd1507845e3c94b856edddddec4a593099db084d0385ba8f3394a22805990e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"31b60e32774a829712b6b2d9cfc9eddce94da868de8dd275f76feb440162d6e7\""
Feb 9 18:59:43.551387 env[1114]: time="2024-02-09T18:59:43.551362156Z" level=info msg="StartContainer for \"31b60e32774a829712b6b2d9cfc9eddce94da868de8dd275f76feb440162d6e7\""
Feb 9 18:59:43.567131 systemd[1]: Started cri-containerd-31b60e32774a829712b6b2d9cfc9eddce94da868de8dd275f76feb440162d6e7.scope.
Feb 9 18:59:43.588556 env[1114]: time="2024-02-09T18:59:43.588514841Z" level=info msg="StartContainer for \"31b60e32774a829712b6b2d9cfc9eddce94da868de8dd275f76feb440162d6e7\" returns successfully"
Feb 9 18:59:43.939255 systemd-networkd[1014]: lxc0fe71187068c: Gained IPv6LL
Feb 9 18:59:44.393298 kubelet[1386]: E0209 18:59:44.393254 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:44.543481 kubelet[1386]: I0209 18:59:44.543447 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.095837697 podCreationTimestamp="2024-02-09 18:59:26 +0000 UTC" firstStartedPulling="2024-02-09 18:59:43.089205025 +0000 UTC m=+52.144191850" lastFinishedPulling="2024-02-09 18:59:43.536786596 +0000 UTC m=+52.591773431" observedRunningTime="2024-02-09 18:59:44.543023672 +0000 UTC m=+53.598010497" watchObservedRunningTime="2024-02-09 18:59:44.543419278 +0000 UTC m=+53.598406093"
Feb 9 18:59:45.393825 kubelet[1386]: E0209 18:59:45.393780 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:46.394927 kubelet[1386]: E0209 18:59:46.394874 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:47.395169 kubelet[1386]: E0209 18:59:47.395085 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:48.395623 kubelet[1386]: E0209 18:59:48.395575 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:59:48.599122 env[1114]: time="2024-02-09T18:59:48.599065838Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:59:48.604331 env[1114]: time="2024-02-09T18:59:48.604287126Z" level=info msg="StopContainer for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" with timeout 1 (s)"
Feb 9 18:59:48.604527 env[1114]: time="2024-02-09T18:59:48.604502141Z" level=info msg="Stop container \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" with signal terminated"
Feb 9 18:59:48.609622 systemd-networkd[1014]: lxc_health: Link DOWN
Feb 9 18:59:48.609627 systemd-networkd[1014]: lxc_health: Lost carrier
Feb 9 18:59:48.649365 systemd[1]: cri-containerd-c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb.scope: Deactivated successfully.
Feb 9 18:59:48.649599 systemd[1]: cri-containerd-c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb.scope: Consumed 5.898s CPU time.
Feb 9 18:59:48.665220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb-rootfs.mount: Deactivated successfully.
Feb 9 18:59:48.673565 env[1114]: time="2024-02-09T18:59:48.673526668Z" level=info msg="shim disconnected" id=c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb
Feb 9 18:59:48.673757 env[1114]: time="2024-02-09T18:59:48.673731514Z" level=warning msg="cleaning up after shim disconnected" id=c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb namespace=k8s.io
Feb 9 18:59:48.673757 env[1114]: time="2024-02-09T18:59:48.673751952Z" level=info msg="cleaning up dead shim"
Feb 9 18:59:48.681028 env[1114]: time="2024-02-09T18:59:48.680982243Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2868 runtime=io.containerd.runc.v2\n"
Feb 9 18:59:48.683731 env[1114]: time="2024-02-09T18:59:48.683699808Z" level=info msg="StopContainer for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" returns successfully"
Feb 9 18:59:48.684294 env[1114]: time="2024-02-09T18:59:48.684268979Z" level=info msg="StopPodSandbox for \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\""
Feb 9 18:59:48.684351 env[1114]: time="2024-02-09T18:59:48.684335695Z" level=info msg="Container to stop \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:59:48.684379 env[1114]: time="2024-02-09T18:59:48.684348810Z" level=info msg="Container to stop \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:59:48.684379 env[1114]: time="2024-02-09T18:59:48.684359921Z" level=info msg="Container to stop \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:59:48.684379 env[1114]: time="2024-02-09T18:59:48.684368407Z" level=info msg="Container to stop \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:59:48.684379 env[1114]: time="2024-02-09T18:59:48.684377343Z" level=info msg="Container to stop \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:59:48.685609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197-shm.mount: Deactivated successfully.
Feb 9 18:59:48.689377 systemd[1]: cri-containerd-301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197.scope: Deactivated successfully.
Feb 9 18:59:48.706597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197-rootfs.mount: Deactivated successfully.
Feb 9 18:59:48.709607 env[1114]: time="2024-02-09T18:59:48.709558296Z" level=info msg="shim disconnected" id=301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197
Feb 9 18:59:48.709607 env[1114]: time="2024-02-09T18:59:48.709603120Z" level=warning msg="cleaning up after shim disconnected" id=301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197 namespace=k8s.io
Feb 9 18:59:48.709607 env[1114]: time="2024-02-09T18:59:48.709611837Z" level=info msg="cleaning up dead shim"
Feb 9 18:59:48.716308 env[1114]: time="2024-02-09T18:59:48.716263028Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2899 runtime=io.containerd.runc.v2\n"
Feb 9 18:59:48.716540 env[1114]: time="2024-02-09T18:59:48.716510684Z" level=info msg="TearDown network for sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" successfully"
Feb 9 18:59:48.716540 env[1114]: time="2024-02-09T18:59:48.716533687Z" level=info msg="StopPodSandbox for \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" returns successfully"
Feb 9 18:59:48.849546 kubelet[1386]: I0209 18:59:48.849494 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-xtables-lock\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.849546 kubelet[1386]: I0209 18:59:48.849554 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9vr\" (UniqueName: \"kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-kube-api-access-ck9vr\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.849870 kubelet[1386]: I0209 18:59:48.849579 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cni-path\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.849870 kubelet[1386]: I0209 18:59:48.849579 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:59:48.849870 kubelet[1386]: I0209 18:59:48.849602 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b58da30-253a-48a8-84ec-f32c04b4029a-clustermesh-secrets\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.849870 kubelet[1386]: I0209 18:59:48.849673 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-config-path\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.849870 kubelet[1386]: I0209 18:59:48.849700 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-kernel\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.849870 kubelet[1386]: I0209 18:59:48.849726 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-lib-modules\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850095 kubelet[1386]: I0209 18:59:48.849743 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-hostproc\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850095 kubelet[1386]: I0209 18:59:48.849763 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-bpf-maps\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850095 kubelet[1386]: I0209 18:59:48.849788 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-etc-cni-netd\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850095 kubelet[1386]: I0209 18:59:48.849803 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-run\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850095 kubelet[1386]: I0209 18:59:48.849819 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-hubble-tls\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850095 kubelet[1386]: I0209 18:59:48.849835 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-net\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850295 kubelet[1386]: I0209 18:59:48.849849 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-cgroup\") pod \"1b58da30-253a-48a8-84ec-f32c04b4029a\" (UID: \"1b58da30-253a-48a8-84ec-f32c04b4029a\") "
Feb 9 18:59:48.850295 kubelet[1386]: I0209 18:59:48.849879 1386 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-xtables-lock\") on node \"10.0.0.120\" DevicePath \"\""
Feb 9 18:59:48.850295 kubelet[1386]: I0209 18:59:48.849905 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:59:48.850295 kubelet[1386]: W0209 18:59:48.850010 1386 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1b58da30-253a-48a8-84ec-f32c04b4029a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 18:59:48.850295 kubelet[1386]: I0209 18:59:48.850018 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:59:48.850295 kubelet[1386]: I0209 18:59:48.850086 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.850513 kubelet[1386]: I0209 18:59:48.850111 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-hostproc" (OuterVolumeSpecName: "hostproc") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.850513 kubelet[1386]: I0209 18:59:48.850140 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.850513 kubelet[1386]: I0209 18:59:48.850165 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.850513 kubelet[1386]: I0209 18:59:48.850223 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.850513 kubelet[1386]: I0209 18:59:48.850242 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cni-path" (OuterVolumeSpecName: "cni-path") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.850702 kubelet[1386]: I0209 18:59:48.850262 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:48.851651 kubelet[1386]: I0209 18:59:48.851623 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:59:48.852964 kubelet[1386]: I0209 18:59:48.852394 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b58da30-253a-48a8-84ec-f32c04b4029a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:59:48.852964 kubelet[1386]: I0209 18:59:48.852921 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:48.853493 systemd[1]: var-lib-kubelet-pods-1b58da30\x2d253a\x2d48a8\x2d84ec\x2df32c04b4029a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:59:48.853925 kubelet[1386]: I0209 18:59:48.853708 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-kube-api-access-ck9vr" (OuterVolumeSpecName: "kube-api-access-ck9vr") pod "1b58da30-253a-48a8-84ec-f32c04b4029a" (UID: "1b58da30-253a-48a8-84ec-f32c04b4029a"). InnerVolumeSpecName "kube-api-access-ck9vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950059 1386 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-kernel\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950102 1386 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-lib-modules\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950115 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-config-path\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950124 1386 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-host-proc-sys-net\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950135 1386 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-hostproc\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950143 1386 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-bpf-maps\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 18:59:48.950151 1386 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-etc-cni-netd\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950194 kubelet[1386]: I0209 
18:59:48.950159 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-run\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950557 kubelet[1386]: I0209 18:59:48.950168 1386 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-hubble-tls\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950557 kubelet[1386]: I0209 18:59:48.950176 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cilium-cgroup\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950557 kubelet[1386]: I0209 18:59:48.950184 1386 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ck9vr\" (UniqueName: \"kubernetes.io/projected/1b58da30-253a-48a8-84ec-f32c04b4029a-kube-api-access-ck9vr\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950557 kubelet[1386]: I0209 18:59:48.950192 1386 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b58da30-253a-48a8-84ec-f32c04b4029a-cni-path\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:48.950557 kubelet[1386]: I0209 18:59:48.950200 1386 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b58da30-253a-48a8-84ec-f32c04b4029a-clustermesh-secrets\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:49.395990 kubelet[1386]: E0209 18:59:49.395916 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:49.437002 systemd[1]: Removed slice kubepods-burstable-pod1b58da30_253a_48a8_84ec_f32c04b4029a.slice. Feb 9 18:59:49.437115 systemd[1]: kubepods-burstable-pod1b58da30_253a_48a8_84ec_f32c04b4029a.slice: Consumed 5.978s CPU time. 
Feb 9 18:59:49.547517 kubelet[1386]: I0209 18:59:49.547487 1386 scope.go:115] "RemoveContainer" containerID="c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb" Feb 9 18:59:49.548954 env[1114]: time="2024-02-09T18:59:49.548914403Z" level=info msg="RemoveContainer for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\"" Feb 9 18:59:49.554029 env[1114]: time="2024-02-09T18:59:49.553980568Z" level=info msg="RemoveContainer for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" returns successfully" Feb 9 18:59:49.554249 kubelet[1386]: I0209 18:59:49.554227 1386 scope.go:115] "RemoveContainer" containerID="76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a" Feb 9 18:59:49.555117 env[1114]: time="2024-02-09T18:59:49.555080878Z" level=info msg="RemoveContainer for \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\"" Feb 9 18:59:49.558101 env[1114]: time="2024-02-09T18:59:49.558066497Z" level=info msg="RemoveContainer for \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\" returns successfully" Feb 9 18:59:49.558207 kubelet[1386]: I0209 18:59:49.558190 1386 scope.go:115] "RemoveContainer" containerID="c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd" Feb 9 18:59:49.559178 env[1114]: time="2024-02-09T18:59:49.559143434Z" level=info msg="RemoveContainer for \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\"" Feb 9 18:59:49.561739 env[1114]: time="2024-02-09T18:59:49.561705866Z" level=info msg="RemoveContainer for \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\" returns successfully" Feb 9 18:59:49.562161 kubelet[1386]: I0209 18:59:49.562139 1386 scope.go:115] "RemoveContainer" containerID="e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146" Feb 9 18:59:49.563382 env[1114]: time="2024-02-09T18:59:49.563351503Z" level=info msg="RemoveContainer for 
\"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\"" Feb 9 18:59:49.565700 env[1114]: time="2024-02-09T18:59:49.565678603Z" level=info msg="RemoveContainer for \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\" returns successfully" Feb 9 18:59:49.565800 kubelet[1386]: I0209 18:59:49.565776 1386 scope.go:115] "RemoveContainer" containerID="667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc" Feb 9 18:59:49.566539 env[1114]: time="2024-02-09T18:59:49.566516130Z" level=info msg="RemoveContainer for \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\"" Feb 9 18:59:49.568861 env[1114]: time="2024-02-09T18:59:49.568838750Z" level=info msg="RemoveContainer for \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\" returns successfully" Feb 9 18:59:49.568988 kubelet[1386]: I0209 18:59:49.568953 1386 scope.go:115] "RemoveContainer" containerID="c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb" Feb 9 18:59:49.569180 env[1114]: time="2024-02-09T18:59:49.569102006Z" level=error msg="ContainerStatus for \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\": not found" Feb 9 18:59:49.569297 kubelet[1386]: E0209 18:59:49.569280 1386 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\": not found" containerID="c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb" Feb 9 18:59:49.569335 kubelet[1386]: I0209 18:59:49.569325 1386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb} err="failed to get container status 
\"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\": rpc error: code = NotFound desc = an error occurred when try to find container \"c18d192341f3f0f4de71057982be0b70b9149c83402d0d37b1f425d665dc9fcb\": not found" Feb 9 18:59:49.569335 kubelet[1386]: I0209 18:59:49.569335 1386 scope.go:115] "RemoveContainer" containerID="76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a" Feb 9 18:59:49.569517 env[1114]: time="2024-02-09T18:59:49.569476291Z" level=error msg="ContainerStatus for \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\": not found" Feb 9 18:59:49.569607 kubelet[1386]: E0209 18:59:49.569592 1386 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\": not found" containerID="76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a" Feb 9 18:59:49.569635 kubelet[1386]: I0209 18:59:49.569615 1386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a} err="failed to get container status \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\": rpc error: code = NotFound desc = an error occurred when try to find container \"76d882bc612075bd5f07322a0e9bed676045db086c9c5bd139b2e185008f822a\": not found" Feb 9 18:59:49.569635 kubelet[1386]: I0209 18:59:49.569624 1386 scope.go:115] "RemoveContainer" containerID="c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd" Feb 9 18:59:49.569770 env[1114]: time="2024-02-09T18:59:49.569740407Z" level=error msg="ContainerStatus for \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\": not found" Feb 9 18:59:49.569876 kubelet[1386]: E0209 18:59:49.569856 1386 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\": not found" containerID="c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd" Feb 9 18:59:49.569932 kubelet[1386]: I0209 18:59:49.569886 1386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd} err="failed to get container status \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c012904de78481ffa8fe8a8ec125116d4fd2182773ec5d18ec6b66ba958d3bbd\": not found" Feb 9 18:59:49.569932 kubelet[1386]: I0209 18:59:49.569893 1386 scope.go:115] "RemoveContainer" containerID="e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146" Feb 9 18:59:49.570053 env[1114]: time="2024-02-09T18:59:49.569997772Z" level=error msg="ContainerStatus for \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\": not found" Feb 9 18:59:49.570369 kubelet[1386]: E0209 18:59:49.570175 1386 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\": not found" containerID="e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146" Feb 9 18:59:49.570435 kubelet[1386]: I0209 18:59:49.570378 1386 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146} err="failed to get container status \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\": rpc error: code = NotFound desc = an error occurred when try to find container \"e56f7451ed87165913bf7c80faaeeed0335127e1056d32611c52b1ca495a7146\": not found" Feb 9 18:59:49.570435 kubelet[1386]: I0209 18:59:49.570387 1386 scope.go:115] "RemoveContainer" containerID="667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc" Feb 9 18:59:49.572721 env[1114]: time="2024-02-09T18:59:49.572671764Z" level=error msg="ContainerStatus for \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\": not found" Feb 9 18:59:49.572866 kubelet[1386]: E0209 18:59:49.572848 1386 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\": not found" containerID="667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc" Feb 9 18:59:49.572923 kubelet[1386]: I0209 18:59:49.572880 1386 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc} err="failed to get container status \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"667d4116dec00b18b0066f638bb8725a463dda07582d3a2a0399047f796bd7fc\": not found" Feb 9 18:59:49.589364 systemd[1]: var-lib-kubelet-pods-1b58da30\x2d253a\x2d48a8\x2d84ec\x2df32c04b4029a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dck9vr.mount: 
Deactivated successfully. Feb 9 18:59:49.589451 systemd[1]: var-lib-kubelet-pods-1b58da30\x2d253a\x2d48a8\x2d84ec\x2df32c04b4029a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:59:50.396953 kubelet[1386]: E0209 18:59:50.396878 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:51.158407 kubelet[1386]: I0209 18:59:51.158366 1386 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:59:51.158574 kubelet[1386]: E0209 18:59:51.158447 1386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b58da30-253a-48a8-84ec-f32c04b4029a" containerName="apply-sysctl-overwrites" Feb 9 18:59:51.158574 kubelet[1386]: E0209 18:59:51.158458 1386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b58da30-253a-48a8-84ec-f32c04b4029a" containerName="cilium-agent" Feb 9 18:59:51.158574 kubelet[1386]: E0209 18:59:51.158467 1386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b58da30-253a-48a8-84ec-f32c04b4029a" containerName="mount-cgroup" Feb 9 18:59:51.158574 kubelet[1386]: E0209 18:59:51.158474 1386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b58da30-253a-48a8-84ec-f32c04b4029a" containerName="mount-bpf-fs" Feb 9 18:59:51.158574 kubelet[1386]: E0209 18:59:51.158482 1386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b58da30-253a-48a8-84ec-f32c04b4029a" containerName="clean-cilium-state" Feb 9 18:59:51.158574 kubelet[1386]: I0209 18:59:51.158502 1386 memory_manager.go:346] "RemoveStaleState removing state" podUID="1b58da30-253a-48a8-84ec-f32c04b4029a" containerName="cilium-agent" Feb 9 18:59:51.162718 systemd[1]: Created slice kubepods-besteffort-podae521a49_c083_41af_ab46_9c37870e822b.slice. 
Feb 9 18:59:51.170835 kubelet[1386]: I0209 18:59:51.170817 1386 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:59:51.174808 systemd[1]: Created slice kubepods-burstable-pod714ad9a2_9623_4948_9ead_e3b5971afcef.slice. Feb 9 18:59:51.260820 kubelet[1386]: I0209 18:59:51.260767 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-xtables-lock\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.260936 kubelet[1386]: I0209 18:59:51.260845 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-ipsec-secrets\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.260936 kubelet[1386]: I0209 18:59:51.260873 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-net\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.260936 kubelet[1386]: I0209 18:59:51.260897 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2kcv\" (UniqueName: \"kubernetes.io/projected/ae521a49-c083-41af-ab46-9c37870e822b-kube-api-access-j2kcv\") pod \"cilium-operator-574c4bb98d-z4xz2\" (UID: \"ae521a49-c083-41af-ab46-9c37870e822b\") " pod="kube-system/cilium-operator-574c4bb98d-z4xz2" Feb 9 18:59:51.260936 kubelet[1386]: I0209 18:59:51.260916 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-run\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261061 kubelet[1386]: I0209 18:59:51.260957 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cni-path\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261061 kubelet[1386]: I0209 18:59:51.261024 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-config-path\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261117 kubelet[1386]: I0209 18:59:51.261097 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-hubble-tls\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261171 kubelet[1386]: I0209 18:59:51.261154 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-bpf-maps\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261234 kubelet[1386]: I0209 18:59:51.261190 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-hostproc\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " 
pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261234 kubelet[1386]: I0209 18:59:51.261218 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-cgroup\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261297 kubelet[1386]: I0209 18:59:51.261251 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-lib-modules\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261327 kubelet[1386]: I0209 18:59:51.261297 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-clustermesh-secrets\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261327 kubelet[1386]: I0209 18:59:51.261324 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-kernel\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261373 kubelet[1386]: I0209 18:59:51.261343 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmn8\" (UniqueName: \"kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-kube-api-access-7dmn8\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.261373 kubelet[1386]: I0209 18:59:51.261362 1386 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae521a49-c083-41af-ab46-9c37870e822b-cilium-config-path\") pod \"cilium-operator-574c4bb98d-z4xz2\" (UID: \"ae521a49-c083-41af-ab46-9c37870e822b\") " pod="kube-system/cilium-operator-574c4bb98d-z4xz2" Feb 9 18:59:51.261423 kubelet[1386]: I0209 18:59:51.261405 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-etc-cni-netd\") pod \"cilium-p7t89\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " pod="kube-system/cilium-p7t89" Feb 9 18:59:51.361771 kubelet[1386]: E0209 18:59:51.361724 1386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:51.367025 env[1114]: time="2024-02-09T18:59:51.366982540Z" level=info msg="StopPodSandbox for \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\"" Feb 9 18:59:51.367333 env[1114]: time="2024-02-09T18:59:51.367092888Z" level=info msg="TearDown network for sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" successfully" Feb 9 18:59:51.367333 env[1114]: time="2024-02-09T18:59:51.367133975Z" level=info msg="StopPodSandbox for \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" returns successfully" Feb 9 18:59:51.367488 env[1114]: time="2024-02-09T18:59:51.367455240Z" level=info msg="RemovePodSandbox for \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\"" Feb 9 18:59:51.367633 env[1114]: time="2024-02-09T18:59:51.367484215Z" level=info msg="Forcibly stopping sandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\"" Feb 9 18:59:51.367633 env[1114]: time="2024-02-09T18:59:51.367543396Z" level=info msg="TearDown network for sandbox 
\"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" successfully" Feb 9 18:59:51.370459 env[1114]: time="2024-02-09T18:59:51.370410430Z" level=info msg="RemovePodSandbox \"301ac9dd614273255c297b1fb2ee75ce39e4ac728d7dc54cbe0989487af4f197\" returns successfully" Feb 9 18:59:51.397055 kubelet[1386]: E0209 18:59:51.396985 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:51.410374 kubelet[1386]: E0209 18:59:51.410262 1386 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:59:51.434081 kubelet[1386]: I0209 18:59:51.434051 1386 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=1b58da30-253a-48a8-84ec-f32c04b4029a path="/var/lib/kubelet/pods/1b58da30-253a-48a8-84ec-f32c04b4029a/volumes" Feb 9 18:59:51.464973 kubelet[1386]: E0209 18:59:51.464935 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:51.465531 env[1114]: time="2024-02-09T18:59:51.465483236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-z4xz2,Uid:ae521a49-c083-41af-ab46-9c37870e822b,Namespace:kube-system,Attempt:0,}" Feb 9 18:59:51.476543 env[1114]: time="2024-02-09T18:59:51.476467459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:59:51.476543 env[1114]: time="2024-02-09T18:59:51.476514768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:59:51.476543 env[1114]: time="2024-02-09T18:59:51.476528774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:59:51.476742 env[1114]: time="2024-02-09T18:59:51.476663287Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b863195b89599119e43ec17a7d42cc44a077aa0ac3dcd3f2c32a8712a8ba1421 pid=2929 runtime=io.containerd.runc.v2 Feb 9 18:59:51.486938 systemd[1]: Started cri-containerd-b863195b89599119e43ec17a7d42cc44a077aa0ac3dcd3f2c32a8712a8ba1421.scope. Feb 9 18:59:51.491794 kubelet[1386]: E0209 18:59:51.491752 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:51.492344 env[1114]: time="2024-02-09T18:59:51.492288781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7t89,Uid:714ad9a2-9623-4948-9ead-e3b5971afcef,Namespace:kube-system,Attempt:0,}" Feb 9 18:59:51.504720 env[1114]: time="2024-02-09T18:59:51.504576917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:59:51.504720 env[1114]: time="2024-02-09T18:59:51.504621771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:59:51.504720 env[1114]: time="2024-02-09T18:59:51.504632241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:59:51.505203 env[1114]: time="2024-02-09T18:59:51.505050678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217 pid=2962 runtime=io.containerd.runc.v2 Feb 9 18:59:51.519649 systemd[1]: Started cri-containerd-c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217.scope. 
Feb 9 18:59:51.527941 env[1114]: time="2024-02-09T18:59:51.527894231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-z4xz2,Uid:ae521a49-c083-41af-ab46-9c37870e822b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b863195b89599119e43ec17a7d42cc44a077aa0ac3dcd3f2c32a8712a8ba1421\"" Feb 9 18:59:51.528735 kubelet[1386]: E0209 18:59:51.528713 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:51.530414 env[1114]: time="2024-02-09T18:59:51.529739261Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:59:51.539956 env[1114]: time="2024-02-09T18:59:51.539909944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7t89,Uid:714ad9a2-9623-4948-9ead-e3b5971afcef,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217\"" Feb 9 18:59:51.540872 kubelet[1386]: E0209 18:59:51.540821 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:51.543206 env[1114]: time="2024-02-09T18:59:51.543165258Z" level=info msg="CreateContainer within sandbox \"c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:59:51.556533 env[1114]: time="2024-02-09T18:59:51.556509240Z" level=info msg="CreateContainer within sandbox \"c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\"" Feb 9 18:59:51.556948 env[1114]: time="2024-02-09T18:59:51.556908722Z" level=info 
msg="StartContainer for \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\"" Feb 9 18:59:51.568667 systemd[1]: Started cri-containerd-f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9.scope. Feb 9 18:59:51.579335 systemd[1]: cri-containerd-f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9.scope: Deactivated successfully. Feb 9 18:59:51.579518 systemd[1]: Stopped cri-containerd-f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9.scope. Feb 9 18:59:51.591930 env[1114]: time="2024-02-09T18:59:51.591890477Z" level=info msg="shim disconnected" id=f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9 Feb 9 18:59:51.592076 env[1114]: time="2024-02-09T18:59:51.591935823Z" level=warning msg="cleaning up after shim disconnected" id=f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9 namespace=k8s.io Feb 9 18:59:51.592076 env[1114]: time="2024-02-09T18:59:51.591944189Z" level=info msg="cleaning up dead shim" Feb 9 18:59:51.598236 env[1114]: time="2024-02-09T18:59:51.598204705Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3031 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:59:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:59:51.598493 env[1114]: time="2024-02-09T18:59:51.598409862Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Feb 9 18:59:51.599121 env[1114]: time="2024-02-09T18:59:51.599084571Z" level=error msg="Failed to pipe stderr of container \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\"" error="reading from a closed fifo" Feb 9 18:59:51.601906 env[1114]: time="2024-02-09T18:59:51.601837159Z" level=error msg="Failed to pipe 
stdout of container \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\"" error="reading from a closed fifo" Feb 9 18:59:51.603982 env[1114]: time="2024-02-09T18:59:51.603945245Z" level=error msg="StartContainer for \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:59:51.604173 kubelet[1386]: E0209 18:59:51.604155 1386 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9" Feb 9 18:59:51.604283 kubelet[1386]: E0209 18:59:51.604271 1386 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:59:51.604283 kubelet[1386]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:59:51.604283 kubelet[1386]: rm /hostbin/cilium-mount Feb 9 18:59:51.604355 kubelet[1386]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7dmn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-p7t89_kube-system(714ad9a2-9623-4948-9ead-e3b5971afcef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:59:51.604355 kubelet[1386]: E0209 18:59:51.604307 1386 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-p7t89" podUID=714ad9a2-9623-4948-9ead-e3b5971afcef Feb 9 18:59:52.398093 kubelet[1386]: E0209 18:59:52.398050 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:52.555258 env[1114]: time="2024-02-09T18:59:52.555213655Z" level=info msg="StopPodSandbox for \"c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217\"" Feb 9 18:59:52.555568 env[1114]: time="2024-02-09T18:59:52.555269941Z" level=info msg="Container to stop \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:52.556778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217-shm.mount: Deactivated successfully. Feb 9 18:59:52.560577 systemd[1]: cri-containerd-c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217.scope: Deactivated successfully. Feb 9 18:59:52.575138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:52.580497 env[1114]: time="2024-02-09T18:59:52.580454576Z" level=info msg="shim disconnected" id=c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217 Feb 9 18:59:52.581121 env[1114]: time="2024-02-09T18:59:52.581081075Z" level=warning msg="cleaning up after shim disconnected" id=c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217 namespace=k8s.io Feb 9 18:59:52.581121 env[1114]: time="2024-02-09T18:59:52.581117103Z" level=info msg="cleaning up dead shim" Feb 9 18:59:52.591068 env[1114]: time="2024-02-09T18:59:52.591000521Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3061 runtime=io.containerd.runc.v2\n" Feb 9 18:59:52.591334 env[1114]: time="2024-02-09T18:59:52.591300365Z" level=info msg="TearDown network for sandbox \"c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217\" successfully" Feb 9 18:59:52.591334 env[1114]: time="2024-02-09T18:59:52.591323909Z" level=info msg="StopPodSandbox for \"c0d9372b63ec6e4d6cfb08878fc5f6a7d0e87d99bafbed5fc09d060332870217\" returns successfully" Feb 9 18:59:52.694569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363019758.mount: Deactivated successfully. 
Feb 9 18:59:52.771429 kubelet[1386]: I0209 18:59:52.771377 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-net\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771429 kubelet[1386]: I0209 18:59:52.771422 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-bpf-maps\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771429 kubelet[1386]: I0209 18:59:52.771441 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-kernel\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771467 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-config-path\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771484 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-hostproc\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771499 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-xtables-lock\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771518 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-ipsec-secrets\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771506 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771533 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cni-path\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771548 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-etc-cni-netd\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771556 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: 
"714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771569 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-hubble-tls\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771572 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771593 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-cgroup\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771610 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dmn8\" (UniqueName: \"kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-kube-api-access-7dmn8\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 18:59:52.771626 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-run\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771658 kubelet[1386]: I0209 
18:59:52.771642 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-lib-modules\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771977 kubelet[1386]: I0209 18:59:52.771673 1386 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-clustermesh-secrets\") pod \"714ad9a2-9623-4948-9ead-e3b5971afcef\" (UID: \"714ad9a2-9623-4948-9ead-e3b5971afcef\") " Feb 9 18:59:52.771977 kubelet[1386]: I0209 18:59:52.771701 1386 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-xtables-lock\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.771977 kubelet[1386]: I0209 18:59:52.771711 1386 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-bpf-maps\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.771977 kubelet[1386]: I0209 18:59:52.771721 1386 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-kernel\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.771977 kubelet[1386]: W0209 18:59:52.771720 1386 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/714ad9a2-9623-4948-9ead-e3b5971afcef/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773420 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod 
"714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773452 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-hostproc" (OuterVolumeSpecName: "hostproc") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773469 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773482 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cni-path" (OuterVolumeSpecName: "cni-path") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773495 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773802 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773823 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.773966 kubelet[1386]: I0209 18:59:52.773849 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:52.774639 kubelet[1386]: I0209 18:59:52.774616 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:59:52.775063 kubelet[1386]: I0209 18:59:52.775026 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:59:52.776012 kubelet[1386]: I0209 18:59:52.775992 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:52.776448 kubelet[1386]: I0209 18:59:52.776424 1386 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-kube-api-access-7dmn8" (OuterVolumeSpecName: "kube-api-access-7dmn8") pod "714ad9a2-9623-4948-9ead-e3b5971afcef" (UID: "714ad9a2-9623-4948-9ead-e3b5971afcef"). InnerVolumeSpecName "kube-api-access-7dmn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:52.871997 kubelet[1386]: I0209 18:59:52.871955 1386 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-etc-cni-netd\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.871997 kubelet[1386]: I0209 18:59:52.871991 1386 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-hubble-tls\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.871997 kubelet[1386]: I0209 18:59:52.872003 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-cgroup\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872012 1386 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7dmn8\" (UniqueName: \"kubernetes.io/projected/714ad9a2-9623-4948-9ead-e3b5971afcef-kube-api-access-7dmn8\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872021 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-run\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872029 1386 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-lib-modules\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872050 1386 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-clustermesh-secrets\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 
18:59:52.872059 1386 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-host-proc-sys-net\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872067 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-config-path\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872075 1386 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-hostproc\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872083 1386 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/714ad9a2-9623-4948-9ead-e3b5971afcef-cilium-ipsec-secrets\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:52.872195 kubelet[1386]: I0209 18:59:52.872090 1386 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/714ad9a2-9623-4948-9ead-e3b5971afcef-cni-path\") on node \"10.0.0.120\" DevicePath \"\"" Feb 9 18:59:53.345744 env[1114]: time="2024-02-09T18:59:53.345676239Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:53.347175 env[1114]: time="2024-02-09T18:59:53.347147285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:53.348471 env[1114]: time="2024-02-09T18:59:53.348411733Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:53.348930 env[1114]: time="2024-02-09T18:59:53.348892977Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 18:59:53.350488 env[1114]: time="2024-02-09T18:59:53.350461277Z" level=info msg="CreateContainer within sandbox \"b863195b89599119e43ec17a7d42cc44a077aa0ac3dcd3f2c32a8712a8ba1421\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:59:53.360092 env[1114]: time="2024-02-09T18:59:53.360050709Z" level=info msg="CreateContainer within sandbox \"b863195b89599119e43ec17a7d42cc44a077aa0ac3dcd3f2c32a8712a8ba1421\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ed536f712671e5ccb5c8b25d92a04e653dd82c0bea82528fa9e64b08a36a9171\"" Feb 9 18:59:53.360375 env[1114]: time="2024-02-09T18:59:53.360350262Z" level=info msg="StartContainer for \"ed536f712671e5ccb5c8b25d92a04e653dd82c0bea82528fa9e64b08a36a9171\"" Feb 9 18:59:53.374129 systemd[1]: var-lib-kubelet-pods-714ad9a2\x2d9623\x2d4948\x2d9ead\x2de3b5971afcef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:59:53.374240 systemd[1]: var-lib-kubelet-pods-714ad9a2\x2d9623\x2d4948\x2d9ead\x2de3b5971afcef-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:59:53.374316 systemd[1]: var-lib-kubelet-pods-714ad9a2\x2d9623\x2d4948\x2d9ead\x2de3b5971afcef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 18:59:53.374389 systemd[1]: var-lib-kubelet-pods-714ad9a2\x2d9623\x2d4948\x2d9ead\x2de3b5971afcef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7dmn8.mount: Deactivated successfully. Feb 9 18:59:53.378481 systemd[1]: Started cri-containerd-ed536f712671e5ccb5c8b25d92a04e653dd82c0bea82528fa9e64b08a36a9171.scope. Feb 9 18:59:53.398200 env[1114]: time="2024-02-09T18:59:53.398155862Z" level=info msg="StartContainer for \"ed536f712671e5ccb5c8b25d92a04e653dd82c0bea82528fa9e64b08a36a9171\" returns successfully" Feb 9 18:59:53.398501 kubelet[1386]: E0209 18:59:53.398318 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:53.438811 systemd[1]: Removed slice kubepods-burstable-pod714ad9a2_9623_4948_9ead_e3b5971afcef.slice. Feb 9 18:59:53.557798 kubelet[1386]: I0209 18:59:53.557769 1386 scope.go:115] "RemoveContainer" containerID="f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9" Feb 9 18:59:53.558974 env[1114]: time="2024-02-09T18:59:53.558936385Z" level=info msg="RemoveContainer for \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\"" Feb 9 18:59:53.560331 kubelet[1386]: E0209 18:59:53.560304 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:53.561684 env[1114]: time="2024-02-09T18:59:53.561644448Z" level=info msg="RemoveContainer for \"f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9\" returns successfully" Feb 9 18:59:53.578181 kubelet[1386]: I0209 18:59:53.578149 1386 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:59:53.578263 kubelet[1386]: E0209 18:59:53.578216 1386 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="714ad9a2-9623-4948-9ead-e3b5971afcef" containerName="mount-cgroup" Feb 9 18:59:53.578263 kubelet[1386]: I0209 18:59:53.578246 1386 
memory_manager.go:346] "RemoveStaleState removing state" podUID="714ad9a2-9623-4948-9ead-e3b5971afcef" containerName="mount-cgroup" Feb 9 18:59:53.582477 systemd[1]: Created slice kubepods-burstable-pod4eaa745e_4851_49ab_9877_e4320cb15b72.slice. Feb 9 18:59:53.592606 kubelet[1386]: I0209 18:59:53.592585 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-z4xz2" podStartSLOduration=0.772798551 podCreationTimestamp="2024-02-09 18:59:51 +0000 UTC" firstStartedPulling="2024-02-09 18:59:51.529410252 +0000 UTC m=+60.584397077" lastFinishedPulling="2024-02-09 18:59:53.349164649 +0000 UTC m=+62.404151474" observedRunningTime="2024-02-09 18:59:53.584291904 +0000 UTC m=+62.639278729" watchObservedRunningTime="2024-02-09 18:59:53.592552948 +0000 UTC m=+62.647539773" Feb 9 18:59:53.675839 kubelet[1386]: I0209 18:59:53.675760 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-hostproc\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675839 kubelet[1386]: I0209 18:59:53.675793 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-cilium-cgroup\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675839 kubelet[1386]: I0209 18:59:53.675812 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-cni-path\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675839 kubelet[1386]: I0209 18:59:53.675829 1386 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-lib-modules\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675990 kubelet[1386]: I0209 18:59:53.675847 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eaa745e-4851-49ab-9877-e4320cb15b72-cilium-config-path\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675990 kubelet[1386]: I0209 18:59:53.675910 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4eaa745e-4851-49ab-9877-e4320cb15b72-cilium-ipsec-secrets\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675990 kubelet[1386]: I0209 18:59:53.675954 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-cilium-run\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675990 kubelet[1386]: I0209 18:59:53.675973 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-xtables-lock\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.675990 kubelet[1386]: I0209 18:59:53.675989 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/4eaa745e-4851-49ab-9877-e4320cb15b72-clustermesh-secrets\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.676152 kubelet[1386]: I0209 18:59:53.676008 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-host-proc-sys-kernel\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.676152 kubelet[1386]: I0209 18:59:53.676023 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eaa745e-4851-49ab-9877-e4320cb15b72-hubble-tls\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.676152 kubelet[1386]: I0209 18:59:53.676075 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx589\" (UniqueName: \"kubernetes.io/projected/4eaa745e-4851-49ab-9877-e4320cb15b72-kube-api-access-xx589\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.676152 kubelet[1386]: I0209 18:59:53.676090 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-etc-cni-netd\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.676152 kubelet[1386]: I0209 18:59:53.676107 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-host-proc-sys-net\") pod 
\"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.676152 kubelet[1386]: I0209 18:59:53.676126 1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eaa745e-4851-49ab-9877-e4320cb15b72-bpf-maps\") pod \"cilium-gv7pk\" (UID: \"4eaa745e-4851-49ab-9877-e4320cb15b72\") " pod="kube-system/cilium-gv7pk" Feb 9 18:59:53.888856 kubelet[1386]: E0209 18:59:53.888819 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:53.889328 env[1114]: time="2024-02-09T18:59:53.889283553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gv7pk,Uid:4eaa745e-4851-49ab-9877-e4320cb15b72,Namespace:kube-system,Attempt:0,}" Feb 9 18:59:53.899900 env[1114]: time="2024-02-09T18:59:53.899837039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:59:53.899900 env[1114]: time="2024-02-09T18:59:53.899876092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:59:53.899900 env[1114]: time="2024-02-09T18:59:53.899886181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:59:53.900120 env[1114]: time="2024-02-09T18:59:53.900013781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783 pid=3128 runtime=io.containerd.runc.v2 Feb 9 18:59:53.911679 systemd[1]: Started cri-containerd-d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783.scope. 
Feb 9 18:59:53.932346 env[1114]: time="2024-02-09T18:59:53.932230601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gv7pk,Uid:4eaa745e-4851-49ab-9877-e4320cb15b72,Namespace:kube-system,Attempt:0,} returns sandbox id \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\"" Feb 9 18:59:53.933257 kubelet[1386]: E0209 18:59:53.933046 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:53.934595 env[1114]: time="2024-02-09T18:59:53.934566113Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:59:54.178180 env[1114]: time="2024-02-09T18:59:54.178124032Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d\"" Feb 9 18:59:54.178717 env[1114]: time="2024-02-09T18:59:54.178692100Z" level=info msg="StartContainer for \"fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d\"" Feb 9 18:59:54.191930 systemd[1]: Started cri-containerd-fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d.scope. Feb 9 18:59:54.214089 env[1114]: time="2024-02-09T18:59:54.213986015Z" level=info msg="StartContainer for \"fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d\" returns successfully" Feb 9 18:59:54.217436 systemd[1]: cri-containerd-fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d.scope: Deactivated successfully. 
Feb 9 18:59:54.235025 env[1114]: time="2024-02-09T18:59:54.234979855Z" level=info msg="shim disconnected" id=fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d Feb 9 18:59:54.235025 env[1114]: time="2024-02-09T18:59:54.235020511Z" level=warning msg="cleaning up after shim disconnected" id=fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d namespace=k8s.io Feb 9 18:59:54.235264 env[1114]: time="2024-02-09T18:59:54.235028827Z" level=info msg="cleaning up dead shim" Feb 9 18:59:54.240751 env[1114]: time="2024-02-09T18:59:54.240714847Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3214 runtime=io.containerd.runc.v2\n" Feb 9 18:59:54.276420 kubelet[1386]: I0209 18:59:54.276396 1386 setters.go:548] "Node became not ready" node="10.0.0.120" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:59:54.276333844 +0000 UTC m=+63.331320669 LastTransitionTime:2024-02-09 18:59:54.276333844 +0000 UTC m=+63.331320669 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 18:59:54.399000 kubelet[1386]: E0209 18:59:54.398967 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:54.563312 kubelet[1386]: E0209 18:59:54.563289 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:54.563466 kubelet[1386]: E0209 18:59:54.563330 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:54.564712 env[1114]: time="2024-02-09T18:59:54.564683068Z" level=info msg="CreateContainer within sandbox 
\"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:59:54.575291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096674860.mount: Deactivated successfully. Feb 9 18:59:54.577599 env[1114]: time="2024-02-09T18:59:54.577567703Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5\"" Feb 9 18:59:54.577919 env[1114]: time="2024-02-09T18:59:54.577892524Z" level=info msg="StartContainer for \"60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5\"" Feb 9 18:59:54.590760 systemd[1]: Started cri-containerd-60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5.scope. Feb 9 18:59:54.610152 env[1114]: time="2024-02-09T18:59:54.610110247Z" level=info msg="StartContainer for \"60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5\" returns successfully" Feb 9 18:59:54.612652 systemd[1]: cri-containerd-60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5.scope: Deactivated successfully. 
Feb 9 18:59:54.628469 env[1114]: time="2024-02-09T18:59:54.628423838Z" level=info msg="shim disconnected" id=60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5 Feb 9 18:59:54.628469 env[1114]: time="2024-02-09T18:59:54.628465968Z" level=warning msg="cleaning up after shim disconnected" id=60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5 namespace=k8s.io Feb 9 18:59:54.628647 env[1114]: time="2024-02-09T18:59:54.628474774Z" level=info msg="cleaning up dead shim" Feb 9 18:59:54.634223 env[1114]: time="2024-02-09T18:59:54.634170932Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3275 runtime=io.containerd.runc.v2\n" Feb 9 18:59:54.697391 kubelet[1386]: W0209 18:59:54.697341 1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod714ad9a2_9623_4948_9ead_e3b5971afcef.slice/cri-containerd-f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9.scope WatchSource:0}: container "f52fc289d2a5b5e0e04f0158ffd81cd785555c7418de21f6403330f7df65d9d9" in namespace "k8s.io": not found Feb 9 18:59:55.373169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:55.399955 kubelet[1386]: E0209 18:59:55.399900 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:55.433845 kubelet[1386]: I0209 18:59:55.433812 1386 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=714ad9a2-9623-4948-9ead-e3b5971afcef path="/var/lib/kubelet/pods/714ad9a2-9623-4948-9ead-e3b5971afcef/volumes" Feb 9 18:59:55.565788 kubelet[1386]: E0209 18:59:55.565760 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:55.567536 env[1114]: time="2024-02-09T18:59:55.567502864Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:59:55.686303 env[1114]: time="2024-02-09T18:59:55.686209562Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f\"" Feb 9 18:59:55.686759 env[1114]: time="2024-02-09T18:59:55.686716255Z" level=info msg="StartContainer for \"1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f\"" Feb 9 18:59:55.701059 systemd[1]: Started cri-containerd-1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f.scope. Feb 9 18:59:55.724474 env[1114]: time="2024-02-09T18:59:55.724415060Z" level=info msg="StartContainer for \"1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f\" returns successfully" Feb 9 18:59:55.724980 systemd[1]: cri-containerd-1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f.scope: Deactivated successfully. 
Feb 9 18:59:55.743862 env[1114]: time="2024-02-09T18:59:55.743813414Z" level=info msg="shim disconnected" id=1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f Feb 9 18:59:55.743862 env[1114]: time="2024-02-09T18:59:55.743854791Z" level=warning msg="cleaning up after shim disconnected" id=1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f namespace=k8s.io Feb 9 18:59:55.743862 env[1114]: time="2024-02-09T18:59:55.743863077Z" level=info msg="cleaning up dead shim" Feb 9 18:59:55.750307 env[1114]: time="2024-02-09T18:59:55.750281723Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3333 runtime=io.containerd.runc.v2\n" Feb 9 18:59:56.373100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f-rootfs.mount: Deactivated successfully. Feb 9 18:59:56.400835 kubelet[1386]: E0209 18:59:56.400802 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:56.411326 kubelet[1386]: E0209 18:59:56.411307 1386 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:59:56.568431 kubelet[1386]: E0209 18:59:56.568406 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:56.569865 env[1114]: time="2024-02-09T18:59:56.569831229Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:59:56.582190 env[1114]: time="2024-02-09T18:59:56.582139682Z" level=info msg="CreateContainer within sandbox 
\"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e\"" Feb 9 18:59:56.582572 env[1114]: time="2024-02-09T18:59:56.582536899Z" level=info msg="StartContainer for \"819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e\"" Feb 9 18:59:56.595665 systemd[1]: Started cri-containerd-819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e.scope. Feb 9 18:59:56.614788 systemd[1]: cri-containerd-819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e.scope: Deactivated successfully. Feb 9 18:59:56.616923 env[1114]: time="2024-02-09T18:59:56.616888641Z" level=info msg="StartContainer for \"819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e\" returns successfully" Feb 9 18:59:56.632514 env[1114]: time="2024-02-09T18:59:56.632413278Z" level=info msg="shim disconnected" id=819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e Feb 9 18:59:56.632514 env[1114]: time="2024-02-09T18:59:56.632459867Z" level=warning msg="cleaning up after shim disconnected" id=819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e namespace=k8s.io Feb 9 18:59:56.632514 env[1114]: time="2024-02-09T18:59:56.632474634Z" level=info msg="cleaning up dead shim" Feb 9 18:59:56.637844 env[1114]: time="2024-02-09T18:59:56.637803529Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3387 runtime=io.containerd.runc.v2\n" Feb 9 18:59:57.374091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:57.401106 kubelet[1386]: E0209 18:59:57.401079 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:57.571143 kubelet[1386]: E0209 18:59:57.571125 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:57.572865 env[1114]: time="2024-02-09T18:59:57.572829546Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:59:57.587081 env[1114]: time="2024-02-09T18:59:57.587023318Z" level=info msg="CreateContainer within sandbox \"d74f07724bcf6f3779d552bf34c6f5f8eb30252d9330cc55b038b036aa8a2783\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57\"" Feb 9 18:59:57.587512 env[1114]: time="2024-02-09T18:59:57.587454038Z" level=info msg="StartContainer for \"aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57\"" Feb 9 18:59:57.603730 systemd[1]: Started cri-containerd-aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57.scope. 
Feb 9 18:59:57.628665 env[1114]: time="2024-02-09T18:59:57.628550876Z" level=info msg="StartContainer for \"aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57\" returns successfully" Feb 9 18:59:57.806335 kubelet[1386]: W0209 18:59:57.806290 1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eaa745e_4851_49ab_9877_e4320cb15b72.slice/cri-containerd-fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d.scope WatchSource:0}: task fe8ed3d0b6624b796f6c4dd3cb43da903856847340e092133273f72f274b371d not found: not found Feb 9 18:59:57.879072 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 18:59:58.373341 systemd[1]: run-containerd-runc-k8s.io-aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57-runc.JzU6UP.mount: Deactivated successfully. Feb 9 18:59:58.401820 kubelet[1386]: E0209 18:59:58.401772 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:58.575918 kubelet[1386]: E0209 18:59:58.575893 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:59:58.586215 kubelet[1386]: I0209 18:59:58.586183 1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gv7pk" podStartSLOduration=5.5861471179999995 podCreationTimestamp="2024-02-09 18:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:59:58.585801999 +0000 UTC m=+67.640788814" watchObservedRunningTime="2024-02-09 18:59:58.586147118 +0000 UTC m=+67.641133943" Feb 9 18:59:59.402356 kubelet[1386]: E0209 18:59:59.402297 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 18:59:59.889814 kubelet[1386]: E0209 18:59:59.889768 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:00:00.284944 systemd-networkd[1014]: lxc_health: Link UP Feb 9 19:00:00.294064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:00:00.294077 systemd-networkd[1014]: lxc_health: Gained carrier Feb 9 19:00:00.403293 kubelet[1386]: E0209 19:00:00.403229 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:00.911791 kubelet[1386]: W0209 19:00:00.911740 1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eaa745e_4851_49ab_9877_e4320cb15b72.slice/cri-containerd-60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5.scope WatchSource:0}: task 60b98ba1e237adf23749e9fea2bd6c892823e1d4fa1644094cbf13b6585685c5 not found: not found Feb 9 19:00:01.404270 kubelet[1386]: E0209 19:00:01.404202 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:01.667299 systemd-networkd[1014]: lxc_health: Gained IPv6LL Feb 9 19:00:01.719939 systemd[1]: run-containerd-runc-k8s.io-aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57-runc.fl5alS.mount: Deactivated successfully. 
Feb 9 19:00:01.890620 kubelet[1386]: E0209 19:00:01.890583 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:00:02.404360 kubelet[1386]: E0209 19:00:02.404318 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:02.582501 kubelet[1386]: E0209 19:00:02.582469 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:00:03.405133 kubelet[1386]: E0209 19:00:03.405086 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:03.584026 kubelet[1386]: E0209 19:00:03.583969 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:00:04.017814 kubelet[1386]: W0209 19:00:04.017775 1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eaa745e_4851_49ab_9877_e4320cb15b72.slice/cri-containerd-1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f.scope WatchSource:0}: task 1eedbc0426963dfdd58caf23010017d586d2f0e3b9207cbc80462d3f235a5a0f not found: not found Feb 9 19:00:04.405679 kubelet[1386]: E0209 19:00:04.405556 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:04.432214 kubelet[1386]: E0209 19:00:04.432194 1386 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:00:05.406385 kubelet[1386]: E0209 19:00:05.406320 1386 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:05.892368 systemd[1]: run-containerd-runc-k8s.io-aa0e2c5dd046ca41c952eb110821f9bee946e99cafcdf34e1c1b9bf59d0fbe57-runc.eptPGF.mount: Deactivated successfully. Feb 9 19:00:06.406709 kubelet[1386]: E0209 19:00:06.406673 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:07.123687 kubelet[1386]: W0209 19:00:07.123640 1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eaa745e_4851_49ab_9877_e4320cb15b72.slice/cri-containerd-819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e.scope WatchSource:0}: task 819262fa25a4ff84fd6e4470727d91e5d09cd6287d3b52e22dac6b0e87a4af5e not found: not found Feb 9 19:00:07.407858 kubelet[1386]: E0209 19:00:07.407703 1386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"