Feb 12 20:23:35.782318 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:23:35.782337 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:23:35.782345 kernel: BIOS-provided physical RAM map:
Feb 12 20:23:35.782350 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:23:35.782356 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:23:35.782361 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:23:35.782367 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 12 20:23:35.782373 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 12 20:23:35.782379 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:23:35.782385 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:23:35.782391 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 12 20:23:35.782396 kernel: NX (Execute Disable) protection: active
Feb 12 20:23:35.782401 kernel: SMBIOS 2.8 present.
Feb 12 20:23:35.782407 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 12 20:23:35.782415 kernel: Hypervisor detected: KVM
Feb 12 20:23:35.782422 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:23:35.782427 kernel: kvm-clock: cpu 0, msr 70faa001, primary cpu clock
Feb 12 20:23:35.782433 kernel: kvm-clock: using sched offset of 2149502773 cycles
Feb 12 20:23:35.782440 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:23:35.782446 kernel: tsc: Detected 2794.748 MHz processor
Feb 12 20:23:35.782452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:23:35.782458 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:23:35.782464 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 12 20:23:35.782472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:23:35.782478 kernel: Using GB pages for direct mapping
Feb 12 20:23:35.782484 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:23:35.782490 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 12 20:23:35.782496 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:23:35.782502 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:23:35.782508 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:23:35.782514 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 12 20:23:35.782520 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:23:35.782527 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:23:35.782533 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:23:35.782540 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 12 20:23:35.782547 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 12 20:23:35.782554 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 12 20:23:35.782561 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 12 20:23:35.782568 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 12 20:23:35.782575 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 12 20:23:35.782584 kernel: No NUMA configuration found
Feb 12 20:23:35.782591 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 12 20:23:35.782597 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 12 20:23:35.782603 kernel: Zone ranges:
Feb 12 20:23:35.782610 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:23:35.782616 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 12 20:23:35.782624 kernel: Normal empty
Feb 12 20:23:35.782630 kernel: Movable zone start for each node
Feb 12 20:23:35.782637 kernel: Early memory node ranges
Feb 12 20:23:35.782643 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:23:35.782649 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 12 20:23:35.782656 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 12 20:23:35.782662 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:23:35.782669 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:23:35.782675 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 12 20:23:35.782683 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:23:35.782689 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:23:35.782695 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:23:35.782702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:23:35.782708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:23:35.782715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:23:35.782721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:23:35.782728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:23:35.782734 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:23:35.782741 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 20:23:35.782748 kernel: TSC deadline timer available
Feb 12 20:23:35.782754 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 12 20:23:35.782760 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 12 20:23:35.782767 kernel: kvm-guest: setup PV sched yield
Feb 12 20:23:35.782773 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 12 20:23:35.782779 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:23:35.782792 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:23:35.782799 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 12 20:23:35.782805 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 12 20:23:35.782813 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 12 20:23:35.782819 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 12 20:23:35.782825 kernel: kvm-guest: setup async PF for cpu 0
Feb 12 20:23:35.782831 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 12 20:23:35.782838 kernel: kvm-guest: PV spinlocks enabled
Feb 12 20:23:35.782844 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 20:23:35.782851 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 12 20:23:35.782857 kernel: Policy zone: DMA32
Feb 12 20:23:35.782865 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:23:35.782873 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:23:35.782879 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:23:35.782886 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:23:35.782892 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:23:35.782899 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 12 20:23:35.782906 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 20:23:35.782912 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:23:35.782919 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:23:35.782926 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:23:35.782933 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:23:35.782940 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 20:23:35.782946 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:23:35.782953 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:23:35.782959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:23:35.782966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 20:23:35.782972 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 12 20:23:35.782979 kernel: random: crng init done
Feb 12 20:23:35.782986 kernel: Console: colour VGA+ 80x25
Feb 12 20:23:35.782992 kernel: printk: console [ttyS0] enabled
Feb 12 20:23:35.782999 kernel: ACPI: Core revision 20210730
Feb 12 20:23:35.783005 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 20:23:35.783012 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:23:35.783018 kernel: x2apic enabled
Feb 12 20:23:35.783025 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:23:35.783031 kernel: kvm-guest: setup PV IPIs
Feb 12 20:23:35.783037 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:23:35.783046 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:23:35.783052 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 12 20:23:35.783059 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 12 20:23:35.783065 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 12 20:23:35.783072 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 12 20:23:35.783078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:23:35.783085 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:23:35.783091 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:23:35.783098 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:23:35.783109 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 12 20:23:35.783116 kernel: RETBleed: Mitigation: untrained return thunk
Feb 12 20:23:35.783123 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 20:23:35.783131 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 20:23:35.783137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 20:23:35.783144 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 20:23:35.783151 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 20:23:35.783158 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 20:23:35.783165 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 20:23:35.783173 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:23:35.783180 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:23:35.783187 kernel: LSM: Security Framework initializing
Feb 12 20:23:35.783193 kernel: SELinux: Initializing.
Feb 12 20:23:35.783200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:23:35.783207 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:23:35.783214 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 12 20:23:35.783222 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 12 20:23:35.783229 kernel: ... version:                0
Feb 12 20:23:35.783236 kernel: ... bit width:              48
Feb 12 20:23:35.783242 kernel: ... generic registers:      6
Feb 12 20:23:35.783249 kernel: ... value mask:             0000ffffffffffff
Feb 12 20:23:35.783256 kernel: ... max period:             00007fffffffffff
Feb 12 20:23:35.783271 kernel: ... fixed-purpose events:   0
Feb 12 20:23:35.783278 kernel: ... event mask:             000000000000003f
Feb 12 20:23:35.783285 kernel: signal: max sigframe size: 1776
Feb 12 20:23:35.783292 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:23:35.783300 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:23:35.783307 kernel: x86: Booting SMP configuration:
Feb 12 20:23:35.783313 kernel: .... node #0, CPUs: #1
Feb 12 20:23:35.783320 kernel: kvm-clock: cpu 1, msr 70faa041, secondary cpu clock
Feb 12 20:23:35.783327 kernel: kvm-guest: setup async PF for cpu 1
Feb 12 20:23:35.783334 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 12 20:23:35.783341 kernel: #2
Feb 12 20:23:35.783347 kernel: kvm-clock: cpu 2, msr 70faa081, secondary cpu clock
Feb 12 20:23:35.783354 kernel: kvm-guest: setup async PF for cpu 2
Feb 12 20:23:35.783362 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 12 20:23:35.783369 kernel: #3
Feb 12 20:23:35.783375 kernel: kvm-clock: cpu 3, msr 70faa0c1, secondary cpu clock
Feb 12 20:23:35.783382 kernel: kvm-guest: setup async PF for cpu 3
Feb 12 20:23:35.783389 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 12 20:23:35.783396 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 20:23:35.783402 kernel: smpboot: Max logical packages: 1
Feb 12 20:23:35.783409 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 12 20:23:35.783416 kernel: devtmpfs: initialized
Feb 12 20:23:35.783424 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:23:35.783430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:23:35.783437 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 20:23:35.783444 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:23:35.783451 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:23:35.783458 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:23:35.783464 kernel: audit: type=2000 audit(1707769416.204:1): state=initialized audit_enabled=0 res=1
Feb 12 20:23:35.783471 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:23:35.783478 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:23:35.783486 kernel: cpuidle: using governor menu
Feb 12 20:23:35.783492 kernel: ACPI: bus type PCI registered
Feb 12 20:23:35.783499 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:23:35.783506 kernel: dca service started, version 1.12.1
Feb 12 20:23:35.783513 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:23:35.783520 kernel: PCI: Using configuration type 1 for extended access
Feb 12 20:23:35.783526 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:23:35.783533 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:23:35.783540 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:23:35.783548 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:23:35.783555 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:23:35.783562 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:23:35.783568 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:23:35.783575 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:23:35.783582 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:23:35.783589 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:23:35.783595 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:23:35.783602 kernel: ACPI: Interpreter enabled
Feb 12 20:23:35.783610 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:23:35.783617 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:23:35.783624 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:23:35.783630 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:23:35.783637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:23:35.783745 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:23:35.783757 kernel: acpiphp: Slot [3] registered
Feb 12 20:23:35.783764 kernel: acpiphp: Slot [4] registered
Feb 12 20:23:35.783772 kernel: acpiphp: Slot [5] registered
Feb 12 20:23:35.783779 kernel: acpiphp: Slot [6] registered
Feb 12 20:23:35.783792 kernel: acpiphp: Slot [7] registered
Feb 12 20:23:35.783799 kernel: acpiphp: Slot [8] registered
Feb 12 20:23:35.783805 kernel: acpiphp: Slot [9] registered
Feb 12 20:23:35.783812 kernel: acpiphp: Slot [10] registered
Feb 12 20:23:35.783819 kernel: acpiphp: Slot [11] registered
Feb 12 20:23:35.783826 kernel: acpiphp: Slot [12] registered
Feb 12 20:23:35.783832 kernel: acpiphp: Slot [13] registered
Feb 12 20:23:35.783839 kernel: acpiphp: Slot [14] registered
Feb 12 20:23:35.783848 kernel: acpiphp: Slot [15] registered
Feb 12 20:23:35.783854 kernel: acpiphp: Slot [16] registered
Feb 12 20:23:35.783861 kernel: acpiphp: Slot [17] registered
Feb 12 20:23:35.783868 kernel: acpiphp: Slot [18] registered
Feb 12 20:23:35.783874 kernel: acpiphp: Slot [19] registered
Feb 12 20:23:35.783881 kernel: acpiphp: Slot [20] registered
Feb 12 20:23:35.783888 kernel: acpiphp: Slot [21] registered
Feb 12 20:23:35.783894 kernel: acpiphp: Slot [22] registered
Feb 12 20:23:35.783901 kernel: acpiphp: Slot [23] registered
Feb 12 20:23:35.783909 kernel: acpiphp: Slot [24] registered
Feb 12 20:23:35.783916 kernel: acpiphp: Slot [25] registered
Feb 12 20:23:35.783922 kernel: acpiphp: Slot [26] registered
Feb 12 20:23:35.783929 kernel: acpiphp: Slot [27] registered
Feb 12 20:23:35.783936 kernel: acpiphp: Slot [28] registered
Feb 12 20:23:35.783942 kernel: acpiphp: Slot [29] registered
Feb 12 20:23:35.783949 kernel: acpiphp: Slot [30] registered
Feb 12 20:23:35.783956 kernel: acpiphp: Slot [31] registered
Feb 12 20:23:35.783962 kernel: PCI host bridge to bus 0000:00
Feb 12 20:23:35.784037 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:23:35.784104 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:23:35.784164 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:23:35.784223 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 12 20:23:35.784344 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:23:35.784407 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:23:35.784485 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:23:35.784570 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:23:35.784649 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:23:35.784717 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 12 20:23:35.784791 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:23:35.784860 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:23:35.784927 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:23:35.784993 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:23:35.785069 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:23:35.785136 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:23:35.785202 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:23:35.785286 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 12 20:23:35.785356 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 12 20:23:35.785423 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 12 20:23:35.785492 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 12 20:23:35.785558 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:23:35.785631 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:23:35.785700 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 20:23:35.785771 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 12 20:23:35.785852 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 12 20:23:35.785927 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:23:35.785999 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:23:35.786068 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 12 20:23:35.786136 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 12 20:23:35.786210 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:23:35.786307 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 12 20:23:35.786378 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 12 20:23:35.786446 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 12 20:23:35.786518 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 12 20:23:35.786528 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:23:35.786535 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:23:35.786542 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:23:35.786549 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:23:35.786556 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:23:35.786562 kernel: iommu: Default domain type: Translated
Feb 12 20:23:35.786569 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:23:35.786635 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:23:35.786705 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:23:35.786772 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:23:35.786781 kernel: vgaarb: loaded
Feb 12 20:23:35.786796 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:23:35.786805 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:23:35.786812 kernel: PTP clock support registered
Feb 12 20:23:35.786819 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:23:35.786826 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:23:35.786835 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:23:35.786841 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 12 20:23:35.786848 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 20:23:35.786855 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 20:23:35.786862 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:23:35.786869 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:23:35.786875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:23:35.786882 kernel: pnp: PnP ACPI init
Feb 12 20:23:35.786960 kernel: pnp 00:02: [dma 2]
Feb 12 20:23:35.786972 kernel: pnp: PnP ACPI: found 6 devices
Feb 12 20:23:35.786979 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:23:35.786986 kernel: NET: Registered PF_INET protocol family
Feb 12 20:23:35.786993 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:23:35.787000 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:23:35.787007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:23:35.787014 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:23:35.787021 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:23:35.787029 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:23:35.787036 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:23:35.787043 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:23:35.787049 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:23:35.787056 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:23:35.787117 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:23:35.787177 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:23:35.787237 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:23:35.787308 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 12 20:23:35.787371 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:23:35.787440 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:23:35.787508 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:23:35.787581 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:23:35.787590 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:23:35.787597 kernel: Initialise system trusted keyrings
Feb 12 20:23:35.787604 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:23:35.787610 kernel: Key type asymmetric registered
Feb 12 20:23:35.787619 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:23:35.787626 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:23:35.787633 kernel: io scheduler mq-deadline registered
Feb 12 20:23:35.787640 kernel: io scheduler kyber registered
Feb 12 20:23:35.787646 kernel: io scheduler bfq registered
Feb 12 20:23:35.787653 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:23:35.787661 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:23:35.787668 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 20:23:35.787674 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:23:35.787682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:23:35.787689 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:23:35.787696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:23:35.787703 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:23:35.787710 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:23:35.787779 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 12 20:23:35.787796 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:23:35.787858 kernel: rtc_cmos 00:05: registered as rtc0
Feb 12 20:23:35.787924 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:23:35 UTC (1707769415)
Feb 12 20:23:35.787985 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 12 20:23:35.787994 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:23:35.788001 kernel: Segment Routing with IPv6
Feb 12 20:23:35.788008 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:23:35.788015 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:23:35.788022 kernel: Key type dns_resolver registered
Feb 12 20:23:35.788028 kernel: IPI shorthand broadcast: enabled
Feb 12 20:23:35.788035 kernel: sched_clock: Marking stable (338151093, 70641074)->(431265711, -22473544)
Feb 12 20:23:35.788044 kernel: registered taskstats version 1
Feb 12 20:23:35.788051 kernel: Loading compiled-in X.509 certificates
Feb 12 20:23:35.788058 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:23:35.788065 kernel: Key type .fscrypt registered
Feb 12 20:23:35.788071 kernel: Key type fscrypt-provisioning registered
Feb 12 20:23:35.788078 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:23:35.788085 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:23:35.788092 kernel: ima: No architecture policies found
Feb 12 20:23:35.788099 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:23:35.788107 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:23:35.788113 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:23:35.788120 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:23:35.788127 kernel: Run /init as init process
Feb 12 20:23:35.788134 kernel: with arguments:
Feb 12 20:23:35.788141 kernel: /init
Feb 12 20:23:35.788147 kernel: with environment:
Feb 12 20:23:35.788163 kernel: HOME=/
Feb 12 20:23:35.788170 kernel: TERM=linux
Feb 12 20:23:35.788178 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:23:35.788187 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:23:35.788196 systemd[1]: Detected virtualization kvm.
Feb 12 20:23:35.788204 systemd[1]: Detected architecture x86-64.
Feb 12 20:23:35.788211 systemd[1]: Running in initrd.
Feb 12 20:23:35.788219 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:23:35.788226 systemd[1]: Hostname set to .
Feb 12 20:23:35.788235 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:23:35.788242 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:23:35.788250 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:23:35.788257 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:23:35.788274 systemd[1]: Reached target paths.target.
Feb 12 20:23:35.788281 systemd[1]: Reached target slices.target.
Feb 12 20:23:35.788289 systemd[1]: Reached target swap.target.
Feb 12 20:23:35.788296 systemd[1]: Reached target timers.target.
Feb 12 20:23:35.788306 systemd[1]: Listening on iscsid.socket.
Feb 12 20:23:35.788313 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:23:35.788321 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:23:35.788328 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:23:35.788336 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:23:35.788343 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:23:35.788350 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:23:35.788358 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:23:35.788367 systemd[1]: Reached target sockets.target.
Feb 12 20:23:35.788374 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:23:35.788382 systemd[1]: Finished network-cleanup.service.
Feb 12 20:23:35.788389 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:23:35.788397 systemd[1]: Starting systemd-journald.service...
Feb 12 20:23:35.788404 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:23:35.788413 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:23:35.788421 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:23:35.788428 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:23:35.788436 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:23:35.788443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:23:35.788453 systemd-journald[197]: Journal started
Feb 12 20:23:35.788490 systemd-journald[197]: Runtime Journal (/run/log/journal/fb79f19c9dfd447e915d094e0f81e5a4) is 6.0M, max 48.5M, 42.5M free.
Feb 12 20:23:35.779125 systemd-modules-load[198]: Inserted module 'overlay'
Feb 12 20:23:35.800344 systemd[1]: Started systemd-journald.service.
Feb 12 20:23:35.802231 kernel: audit: type=1130 audit(1707769415.800:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.802246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:23:35.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.800476 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:23:35.807657 kernel: audit: type=1130 audit(1707769415.803:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.807672 kernel: Bridge firewalling registered
Feb 12 20:23:35.807681 kernel: audit: type=1130 audit(1707769415.807:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.801748 systemd-resolved[199]: Positive Trust Anchors:
Feb 12 20:23:35.813477 kernel: audit: type=1130 audit(1707769415.809:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.801755 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:23:35.801782 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:23:35.803922 systemd-resolved[199]: Defaulting to hostname 'linux'.
Feb 12 20:23:35.805292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:23:35.807053 systemd-modules-load[198]: Inserted module 'br_netfilter'
Feb 12 20:23:35.807708 systemd[1]: Started systemd-resolved.service.
Feb 12 20:23:35.810007 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:23:35.812779 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:23:35.821688 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:23:35.825083 kernel: audit: type=1130 audit(1707769415.821:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.824316 systemd[1]: Starting dracut-cmdline.service...
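The dracut-cmdline-ask step above reads the kernel command line and hands it to dracut one parameter at a time. As a rough sketch (the loop below is illustrative, not dracut's actual parser, and would misparse quoted values), plain word-splitting of /proc/cmdline is enough for simple `key=value` parameters; the sample string is copied from this boot:

```shell
#!/bin/sh
# Sketch: split a kernel command line into one parameter per line,
# the way dracut walks /proc/cmdline. Sample copied from this log.
cmdline='BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected'
for arg in $cmdline; do
    printf '%s\n' "$arg"
done | grep '^root='
# prints: root=LABEL=ROOT
```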
Feb 12 20:23:35.827286 kernel: SCSI subsystem initialized
Feb 12 20:23:35.830806 dracut-cmdline[214]: dracut-dracut-053
Feb 12 20:23:35.832176 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:23:35.837634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:23:35.837656 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:23:35.838526 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:23:35.841103 systemd-modules-load[198]: Inserted module 'dm_multipath'
Feb 12 20:23:35.841703 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:23:35.844711 kernel: audit: type=1130 audit(1707769415.841:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.842511 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:23:35.849546 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:23:35.852502 kernel: audit: type=1130 audit(1707769415.849:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.881284 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:23:35.891288 kernel: iscsi: registered transport (tcp)
Feb 12 20:23:35.909546 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:23:35.909572 kernel: QLogic iSCSI HBA Driver
Feb 12 20:23:35.936813 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:23:35.939731 kernel: audit: type=1130 audit(1707769415.936:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:35.939757 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:23:35.983283 kernel: raid6: avx2x4 gen() 30937 MB/s
Feb 12 20:23:36.000279 kernel: raid6: avx2x4 xor() 8147 MB/s
Feb 12 20:23:36.017275 kernel: raid6: avx2x2 gen() 32742 MB/s
Feb 12 20:23:36.034278 kernel: raid6: avx2x2 xor() 19354 MB/s
Feb 12 20:23:36.051276 kernel: raid6: avx2x1 gen() 26712 MB/s
Feb 12 20:23:36.068276 kernel: raid6: avx2x1 xor() 15388 MB/s
Feb 12 20:23:36.085277 kernel: raid6: sse2x4 gen() 14881 MB/s
Feb 12 20:23:36.102281 kernel: raid6: sse2x4 xor() 7352 MB/s
Feb 12 20:23:36.119282 kernel: raid6: sse2x2 gen() 16501 MB/s
Feb 12 20:23:36.136276 kernel: raid6: sse2x2 xor() 9864 MB/s
Feb 12 20:23:36.153278 kernel: raid6: sse2x1 gen() 12597 MB/s
Feb 12 20:23:36.170326 kernel: raid6: sse2x1 xor() 7839 MB/s
Feb 12 20:23:36.170343 kernel: raid6: using algorithm avx2x2 gen() 32742 MB/s
Feb 12 20:23:36.170357 kernel: raid6: .... xor() 19354 MB/s, rmw enabled
Feb 12 20:23:36.171281 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 20:23:36.182286 kernel: xor: automatically using best checksumming function avx
Feb 12 20:23:36.270299 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 20:23:36.278254 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:23:36.281306 kernel: audit: type=1130 audit(1707769416.278:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:36.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:36.280000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:23:36.281000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:23:36.281616 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:23:36.292307 systemd-udevd[398]: Using default interface naming scheme 'v252'.
Feb 12 20:23:36.296068 systemd[1]: Started systemd-udevd.service.
Feb 12 20:23:36.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:36.297310 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:23:36.305836 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 12 20:23:36.328534 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:23:36.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:36.330404 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:23:36.361038 systemd[1]: Finished systemd-udev-trigger.service.
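The raid6 lines above are the kernel benchmarking each gen() implementation at boot and keeping the fastest (here avx2x2 at 32742 MB/s). The same selection can be reproduced from the log text with a small sort pipeline (a sketch over a hand-copied sample of these lines, not a kernel interface):

```shell
#!/bin/sh
# Sketch: pick the fastest raid6 gen() implementation from benchmark
# lines copied out of this log, mirroring the kernel's own choice.
log='raid6: avx2x4 gen() 30937 MB/s
raid6: avx2x2 gen() 32742 MB/s
raid6: avx2x1 gen() 26712 MB/s
raid6: sse2x4 gen() 14881 MB/s
raid6: sse2x2 gen() 16501 MB/s
raid6: sse2x1 gen() 12597 MB/s'
# Field 4 is the MB/s figure; sort descending and keep the winner's name.
printf '%s\n' "$log" | sort -t' ' -k4 -rn | head -n1 | awk '{print $2}'
# prints: avx2x2
```

This matches the subsequent "raid6: using algorithm avx2x2 gen() 32742 MB/s" line.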
Feb 12 20:23:36.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:36.390290 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:23:36.401351 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 20:23:36.401372 kernel: AES CTR mode by8 optimization enabled
Feb 12 20:23:36.407204 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 20:23:36.407349 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:23:36.408666 kernel: GPT:9289727 != 19775487
Feb 12 20:23:36.408685 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:23:36.408695 kernel: GPT:9289727 != 19775487
Feb 12 20:23:36.409531 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:23:36.409543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:23:36.428293 kernel: libata version 3.00 loaded.
Feb 12 20:23:36.428562 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:23:36.446653 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463)
Feb 12 20:23:36.446679 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 12 20:23:36.446838 kernel: scsi host0: ata_piix
Feb 12 20:23:36.446956 kernel: scsi host1: ata_piix
Feb 12 20:23:36.447083 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 12 20:23:36.447098 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 12 20:23:36.448065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:23:36.449417 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:23:36.458603 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:23:36.462500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
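The "GPT:9289727 != 19775487" warnings above mean the backup GPT header sits where the disk used to end rather than at the current last LBA, which is typical when a VM image is grown after partitioning. The arithmetic behind the message (numbers taken from this log; the sgdisk command is a common repair, mentioned here as an assumption since the log itself only suggests GNU Parted):

```shell
#!/bin/sh
# Sketch: why the kernel prints "GPT:9289727 != 19775487".
disk_sectors=19775488                # virtio_blk reports this many 512-byte blocks
expected_alt=$((disk_sectors - 1))   # backup GPT header belongs at the last LBA
found_alt=9289727                    # where the kernel actually found it
echo "expected=$expected_alt found=$found_alt"
# A common repair (assumption: gptfdisk is installed) relocates the
# backup structures to the real end of the disk:
#   sgdisk -e /dev/vda
```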
Feb 12 20:23:36.464394 systemd[1]: Starting disk-uuid.service...
Feb 12 20:23:36.474331 disk-uuid[523]: Primary Header is updated.
Feb 12 20:23:36.474331 disk-uuid[523]: Secondary Entries is updated.
Feb 12 20:23:36.474331 disk-uuid[523]: Secondary Header is updated.
Feb 12 20:23:36.478295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:23:36.481299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:23:36.484289 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:23:36.592309 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 12 20:23:36.592387 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 12 20:23:36.623293 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 12 20:23:36.623500 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 20:23:36.640286 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 12 20:23:37.482296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:23:37.482507 disk-uuid[524]: The operation has completed successfully.
Feb 12 20:23:37.507640 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:23:37.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.507784 systemd[1]: Finished disk-uuid.service.
Feb 12 20:23:37.522001 systemd[1]: Starting verity-setup.service...
Feb 12 20:23:37.538300 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 12 20:23:37.563279 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:23:37.565408 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:23:37.568951 systemd[1]: Finished verity-setup.service.
Feb 12 20:23:37.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.637286 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:23:37.637311 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:23:37.637955 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:23:37.640130 systemd[1]: Starting ignition-setup.service...
Feb 12 20:23:37.642348 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:23:37.653618 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:23:37.653676 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:23:37.653686 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:23:37.662661 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:23:37.671769 systemd[1]: Finished ignition-setup.service.
Feb 12 20:23:37.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.672995 systemd[1]: Starting ignition-fetch-offline.service...
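The verity-setup step above opens the /usr partition through dm-verity, which only allows the mount if the device's hash tree resolves to the `verity.usrhash` root hash from the kernel command line. As a simplified illustration of that compare-against-expected-root-hash idea (a flat sha256 over a temp file, not the real verity on-disk Merkle-tree format):

```shell
#!/bin/sh
# Illustration only: dm-verity admits the device when the computed root
# hash equals the expected one. Here a flat sha256sum stands in for the
# verity hash tree; the file and its contents are made up for the demo.
tmp=$(mktemp)
printf 'immutable /usr contents' > "$tmp"
expected=$(sha256sum "$tmp" | awk '{print $1}')   # "trusted" root hash
actual=$(sha256sum "$tmp" | awk '{print $1}')     # recomputed at open time
if [ "$expected" = "$actual" ]; then
    echo "root hash matches: mount allowed"
fi
rm -f "$tmp"
```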
Feb 12 20:23:37.716504 ignition[638]: Ignition 2.14.0
Feb 12 20:23:37.716517 ignition[638]: Stage: fetch-offline
Feb 12 20:23:37.716601 ignition[638]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:23:37.716611 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:23:37.716856 ignition[638]: parsed url from cmdline: ""
Feb 12 20:23:37.716860 ignition[638]: no config URL provided
Feb 12 20:23:37.716866 ignition[638]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:23:37.716872 ignition[638]: no config at "/usr/lib/ignition/user.ign"
Feb 12 20:23:37.716889 ignition[638]: op(1): [started] loading QEMU firmware config module
Feb 12 20:23:37.716894 ignition[638]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 20:23:37.719763 ignition[638]: op(1): [finished] loading QEMU firmware config module
Feb 12 20:23:37.732037 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:23:37.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.733000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:23:37.734539 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:23:37.782855 ignition[638]: parsing config with SHA512: c676d7aacd7f9289778b0a16aaf23f770d4d54290c0c2623300c544cacde736a339d4f0d0e9d6e30b9a1e2b6ad37c9d47a5163e5e0658d5a078d1a1791310359
Feb 12 20:23:37.812103 systemd-networkd[717]: lo: Link UP
Feb 12 20:23:37.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.812117 systemd-networkd[717]: lo: Gained carrier
Feb 12 20:23:37.813020 systemd-networkd[717]: Enumeration completed
Feb 12 20:23:37.813155 systemd[1]: Started systemd-networkd.service.
Feb 12 20:23:37.813546 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:23:37.815023 systemd-networkd[717]: eth0: Link UP
Feb 12 20:23:37.815027 systemd-networkd[717]: eth0: Gained carrier
Feb 12 20:23:37.815583 systemd[1]: Reached target network.target.
Feb 12 20:23:37.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.818288 systemd[1]: Starting iscsiuio.service...
Feb 12 20:23:37.823728 systemd[1]: Started iscsiuio.service.
Feb 12 20:23:37.825985 systemd[1]: Starting iscsid.service...
Feb 12 20:23:37.829916 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:23:37.829916 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 20:23:37.829916 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:23:37.829916 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:23:37.829916 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:23:37.829916 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:23:37.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.831565 systemd[1]: Started iscsid.service.
Feb 12 20:23:37.834449 ignition[638]: fetch-offline: fetch-offline passed
Feb 12 20:23:37.833498 unknown[638]: fetched base config from "system"
Feb 12 20:23:37.834517 ignition[638]: Ignition finished successfully
Feb 12 20:23:37.833510 unknown[638]: fetched user config from "qemu"
Feb 12 20:23:37.836378 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 20:23:37.837078 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:23:37.839549 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:23:37.840548 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 20:23:37.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.841256 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:23:37.854425 ignition[724]: Ignition 2.14.0
Feb 12 20:23:37.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.851711 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:23:37.854433 ignition[724]: Stage: kargs
Feb 12 20:23:37.852777 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:23:37.854558 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:23:37.854904 systemd[1]: Reached target remote-cryptsetup.target.
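The iscsid warnings above are harmless on a host that never logs into iSCSI targets, but they are silenced by creating the file iscsid names. A minimal sketch (the IQN value is made up for illustration; a real one should embed your own reversed domain per the `iqn.yyyy-mm.<reversed-domain>[:identifier]` format, and the target path would be /etc/iscsi/initiatorname.iscsi rather than the temp file used here):

```shell
#!/bin/sh
# Sketch: create and sanity-check an InitiatorName file like the one
# iscsid is complaining about. Temp file stands in for
# /etc/iscsi/initiatorname.iscsi; the IQN is a hypothetical example.
conf=$(mktemp)
echo 'InitiatorName=iqn.2024-02.io.flatcar:node1' > "$conf"
# Rough format check: "InitiatorName=iqn." followed by a yyyy-mm date.
if grep -Eq '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.' "$conf"; then
    echo "format ok"
fi
rm -f "$conf"
```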
Feb 12 20:23:37.854570 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:23:37.855667 systemd[1]: Reached target remote-fs.target.
Feb 12 20:23:37.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.856192 ignition[724]: kargs: kargs passed
Feb 12 20:23:37.857230 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:23:37.856249 ignition[724]: Ignition finished successfully
Feb 12 20:23:37.858095 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:23:37.860007 systemd[1]: Starting ignition-disks.service...
Feb 12 20:23:37.869709 ignition[739]: Ignition 2.14.0
Feb 12 20:23:37.865543 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:23:37.869718 ignition[739]: Stage: disks
Feb 12 20:23:37.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.872466 systemd[1]: Finished ignition-disks.service.
Feb 12 20:23:37.869858 ignition[739]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:23:37.873630 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:23:37.869871 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:23:37.874893 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:23:37.871308 ignition[739]: disks: disks passed
Feb 12 20:23:37.875720 systemd[1]: Reached target local-fs.target.
Feb 12 20:23:37.871364 ignition[739]: Ignition finished successfully
Feb 12 20:23:37.877180 systemd[1]: Reached target sysinit.target.
Feb 12 20:23:37.878807 systemd[1]: Reached target basic.target.
Feb 12 20:23:37.881487 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:23:37.894593 systemd-fsck[752]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 20:23:37.900473 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:23:37.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.901609 systemd[1]: Mounting sysroot.mount...
Feb 12 20:23:37.911300 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:23:37.911408 systemd[1]: Mounted sysroot.mount.
Feb 12 20:23:37.912040 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:23:37.913503 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:23:37.915084 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:23:37.915130 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:23:37.915164 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:23:37.917598 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:23:37.919419 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:23:37.926710 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:23:37.931243 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:23:37.935247 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:23:37.939328 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:23:37.969362 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:23:37.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.970937 systemd[1]: Starting ignition-mount.service...
Feb 12 20:23:37.972216 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:23:37.980078 bash[804]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 20:23:37.989927 ignition[805]: INFO : Ignition 2.14.0
Feb 12 20:23:37.989927 ignition[805]: INFO : Stage: mount
Feb 12 20:23:37.992215 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:23:37.992215 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:23:37.992215 ignition[805]: INFO : mount: mount passed
Feb 12 20:23:37.992215 ignition[805]: INFO : Ignition finished successfully
Feb 12 20:23:37.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:37.992018 systemd[1]: Finished ignition-mount.service.
Feb 12 20:23:37.999458 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:23:37.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:38.577087 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:23:38.585282 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Feb 12 20:23:38.586816 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:23:38.586829 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:23:38.586840 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:23:38.592104 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:23:38.593568 systemd[1]: Starting ignition-files.service...
Feb 12 20:23:38.607412 ignition[834]: INFO : Ignition 2.14.0
Feb 12 20:23:38.607412 ignition[834]: INFO : Stage: files
Feb 12 20:23:38.609044 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:23:38.609044 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:23:38.609044 ignition[834]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 20:23:38.612319 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 20:23:38.612319 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:23:38.614736 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:23:38.616118 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 20:23:38.617700 unknown[834]: wrote ssh authorized keys file for user: core
Feb 12 20:23:38.618705 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:23:38.620048 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 20:23:38.620048 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 20:23:38.647898 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 20:23:38.699073 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 20:23:38.700625 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 20:23:38.700625 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 20:23:39.158533 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 20:23:39.198444 systemd-networkd[717]: eth0: Gained IPv6LL
Feb 12 20:23:39.235079 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 20:23:39.237169 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 20:23:39.237169 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 20:23:39.237169 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 20:23:39.683982 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 20:23:39.928179 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 20:23:39.928179 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 20:23:39.932131 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:23:39.932131 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:23:39.934857 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:23:39.934857 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 20:23:40.012234 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 20:23:40.217750 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 20:23:40.217750 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:23:40.220897 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:23:40.220897 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 20:23:40.264583 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 20:23:40.754575 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 20:23:40.757618 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:23:40.757618 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:23:40.757618 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 20:23:40.802748 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 20:23:41.017986 ignition[834]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 20:23:41.017986 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:23:41.021576 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:23:41.021576 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 20:23:41.420975 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:23:41.485180 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12
20:23:41.495435 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:23:41.495435 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:23:41.495435 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:23:41.495435 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(14): op(15): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Feb 12 20:23:41.495435 ignition[834]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:23:41.518009 ignition[834]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:23:41.518009 ignition[834]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Feb 12 20:23:41.518009 ignition[834]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:23:41.518009 ignition[834]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:23:41.518009 ignition[834]: INFO : files: op(19): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 20:23:41.518009 ignition[834]: INFO : files: op(19): op(1a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:23:41.526568 ignition[834]: INFO : files: op(19): op(1a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:23:41.527673 ignition[834]: INFO : files: op(19): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 20:23:41.527673 ignition[834]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:23:41.527673 ignition[834]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:23:41.527673 
ignition[834]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:23:41.527673 ignition[834]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:23:41.527673 ignition[834]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:23:41.527673 ignition[834]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:23:41.527673 ignition[834]: INFO : files: files passed Feb 12 20:23:41.527673 ignition[834]: INFO : Ignition finished successfully Feb 12 20:23:41.536999 systemd[1]: Finished ignition-files.service. Feb 12 20:23:41.540760 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 20:23:41.540782 kernel: audit: type=1130 audit(1707769421.537:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.540751 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:23:41.541057 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:23:41.541616 systemd[1]: Starting ignition-quench.service... Feb 12 20:23:41.548966 kernel: audit: type=1130 audit(1707769421.544:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:41.548982 kernel: audit: type=1131 audit(1707769421.544:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.543563 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:23:41.543626 systemd[1]: Finished ignition-quench.service. Feb 12 20:23:41.553910 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 20:23:41.556389 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:23:41.558020 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:23:41.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.559603 systemd[1]: Reached target ignition-complete.target. Feb 12 20:23:41.562931 kernel: audit: type=1130 audit(1707769421.559:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.562950 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:23:41.575957 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:23:41.576088 systemd[1]: Finished initrd-parse-etc.service. 
Feb 12 20:23:41.581595 kernel: audit: type=1130 audit(1707769421.577:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.581612 kernel: audit: type=1131 audit(1707769421.577:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.577401 systemd[1]: Reached target initrd-fs.target. Feb 12 20:23:41.581953 systemd[1]: Reached target initrd.target. Feb 12 20:23:41.582993 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:23:41.583654 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:23:41.592716 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:23:41.595833 kernel: audit: type=1130 audit(1707769421.592:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.595871 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:23:41.605227 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:23:41.605588 systemd[1]: Stopped target remote-cryptsetup.target. 
Feb 12 20:23:41.606622 systemd[1]: Stopped target timers.target. Feb 12 20:23:41.607846 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:23:41.611834 kernel: audit: type=1131 audit(1707769421.608:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.607929 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:23:41.608870 systemd[1]: Stopped target initrd.target. Feb 12 20:23:41.612189 systemd[1]: Stopped target basic.target. Feb 12 20:23:41.613172 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:23:41.614050 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:23:41.615178 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:23:41.616515 systemd[1]: Stopped target remote-fs.target. Feb 12 20:23:41.617605 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:23:41.618847 systemd[1]: Stopped target sysinit.target. Feb 12 20:23:41.620021 systemd[1]: Stopped target local-fs.target. Feb 12 20:23:41.621080 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:23:41.622060 systemd[1]: Stopped target swap.target. Feb 12 20:23:41.626856 kernel: audit: type=1131 audit(1707769421.623:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.623176 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 12 20:23:41.623253 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:23:41.630800 kernel: audit: type=1131 audit(1707769421.627:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.624289 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:23:41.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.627129 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:23:41.627227 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:23:41.628305 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:23:41.628382 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:23:41.631204 systemd[1]: Stopped target paths.target. Feb 12 20:23:41.632130 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:23:41.635351 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:23:41.635697 systemd[1]: Stopped target slices.target. Feb 12 20:23:41.637733 systemd[1]: Stopped target sockets.target. Feb 12 20:23:41.638133 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:23:41.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.638253 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Feb 12 20:23:41.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.639136 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:23:41.639213 systemd[1]: Stopped ignition-files.service. Feb 12 20:23:41.641468 systemd[1]: Stopping ignition-mount.service... Feb 12 20:23:41.642558 systemd[1]: Stopping iscsid.service... Feb 12 20:23:41.644192 iscsid[722]: iscsid shutting down. Feb 12 20:23:41.643452 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:23:41.645308 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:23:41.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.645469 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:23:41.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.645851 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:23:41.649088 ignition[874]: INFO : Ignition 2.14.0 Feb 12 20:23:41.649088 ignition[874]: INFO : Stage: umount Feb 12 20:23:41.649088 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:23:41.649088 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:23:41.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.645964 systemd[1]: Stopped dracut-pre-trigger.service. 
Feb 12 20:23:41.652500 ignition[874]: INFO : umount: umount passed Feb 12 20:23:41.652500 ignition[874]: INFO : Ignition finished successfully Feb 12 20:23:41.650428 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:23:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.650502 systemd[1]: Stopped ignition-mount.service. Feb 12 20:23:41.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.650774 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:23:41.650804 systemd[1]: Stopped ignition-disks.service. Feb 12 20:23:41.652621 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:23:41.652667 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:23:41.652851 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:23:41.652879 systemd[1]: Stopped ignition-setup.service. Feb 12 20:23:41.653288 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:23:41.653354 systemd[1]: Finished initrd-cleanup.service. 
Feb 12 20:23:41.664672 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:23:41.666019 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:23:41.666100 systemd[1]: Stopped iscsid.service. Feb 12 20:23:41.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.666822 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:23:41.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.666875 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:23:41.668103 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:23:41.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.668129 systemd[1]: Closed iscsid.socket. Feb 12 20:23:41.669096 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:23:41.669128 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:23:41.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.669483 systemd[1]: Stopping iscsiuio.service... Feb 12 20:23:41.672181 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:23:41.672277 systemd[1]: Stopped iscsiuio.service. Feb 12 20:23:41.673048 systemd[1]: Stopped target network.target. Feb 12 20:23:41.674215 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:23:41.674242 systemd[1]: Closed iscsiuio.socket. Feb 12 20:23:41.674874 systemd[1]: Stopping systemd-networkd.service... 
Feb 12 20:23:41.676090 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:23:41.681322 systemd-networkd[717]: eth0: DHCPv6 lease lost Feb 12 20:23:41.682296 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:23:41.682373 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:23:41.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.684590 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:23:41.685296 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:23:41.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.686551 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:23:41.686579 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:23:41.687000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:23:41.688759 systemd[1]: Stopping network-cleanup.service... Feb 12 20:23:41.689843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:23:41.689879 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:23:41.691000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:23:41.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.692142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:23:41.692175 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:23:41.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:41.693920 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:23:41.693954 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:23:41.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.695835 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:23:41.698452 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:23:41.701054 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:23:41.701791 systemd[1]: Stopped network-cleanup.service. Feb 12 20:23:41.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.703043 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:23:41.703749 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:23:41.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.705233 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:23:41.705289 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:23:41.707456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:23:41.707497 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:23:41.709255 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:23:41.709301 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:23:41.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:41.711010 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:23:41.711041 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:23:41.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.712694 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:23:41.712730 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:23:41.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.715043 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:23:41.716226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:23:41.716284 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:23:41.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.718319 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:23:41.718355 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:23:41.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.720154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:23:41.720186 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:23:41.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:41.722769 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:23:41.724086 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:23:41.724842 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:23:41.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.726105 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:23:41.727780 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:23:41.743387 systemd[1]: Switching root. Feb 12 20:23:41.761579 systemd-journald[197]: Journal stopped Feb 12 20:23:44.527381 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Feb 12 20:23:44.527424 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:23:44.527436 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 20:23:44.527447 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:23:44.527456 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:23:44.527466 kernel: SELinux: policy capability open_perms=1 Feb 12 20:23:44.527475 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:23:44.527484 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:23:44.527496 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:23:44.527508 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:23:44.527520 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:23:44.527529 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:23:44.527541 systemd[1]: Successfully loaded SELinux policy in 35.336ms. Feb 12 20:23:44.527566 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.094ms. Feb 12 20:23:44.527577 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:23:44.527587 systemd[1]: Detected virtualization kvm. Feb 12 20:23:44.527597 systemd[1]: Detected architecture x86-64. Feb 12 20:23:44.527607 systemd[1]: Detected first boot. Feb 12 20:23:44.527617 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:23:44.527627 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:23:44.527638 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:23:44.527649 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 20:23:44.527662 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:23:44.527673 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:23:44.527686 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:23:44.527695 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:23:44.527705 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:23:44.527716 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:23:44.527727 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:23:44.527739 systemd[1]: Created slice system-getty.slice. Feb 12 20:23:44.527750 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:23:44.527760 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:23:44.527770 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:23:44.527781 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:23:44.527791 systemd[1]: Created slice user.slice. Feb 12 20:23:44.527802 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:23:44.527812 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:23:44.527823 systemd[1]: Set up automount boot.automount. Feb 12 20:23:44.527833 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:23:44.527843 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:23:44.527853 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:23:44.527863 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:23:44.527873 systemd[1]: Reached target integritysetup.target. Feb 12 20:23:44.527883 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 12 20:23:44.527895 systemd[1]: Reached target remote-fs.target. Feb 12 20:23:44.527904 systemd[1]: Reached target slices.target. Feb 12 20:23:44.527914 systemd[1]: Reached target swap.target. Feb 12 20:23:44.527924 systemd[1]: Reached target torcx.target. Feb 12 20:23:44.527949 systemd[1]: Reached target veritysetup.target. Feb 12 20:23:44.527961 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:23:44.527971 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:23:44.527980 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:23:44.527991 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:23:44.528001 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:23:44.528012 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:23:44.528022 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:23:44.528032 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:23:44.528051 systemd[1]: Mounting media.mount... Feb 12 20:23:44.528061 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:23:44.528071 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:23:44.528081 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:23:44.528091 systemd[1]: Mounting tmp.mount... Feb 12 20:23:44.528101 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:23:44.528113 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:23:44.528123 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:23:44.528133 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:23:44.528142 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:23:44.528152 systemd[1]: Starting modprobe@drm.service... Feb 12 20:23:44.528162 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:23:44.528173 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:23:44.528183 systemd[1]: Starting modprobe@loop.service... 
Feb 12 20:23:44.528193 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:23:44.528205 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:23:44.528215 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:23:44.528225 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:23:44.528235 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:23:44.528248 systemd[1]: Stopped systemd-journald.service. Feb 12 20:23:44.528258 kernel: loop: module loaded Feb 12 20:23:44.528280 systemd[1]: Starting systemd-journald.service... Feb 12 20:23:44.528290 kernel: fuse: init (API version 7.34) Feb 12 20:23:44.528300 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:23:44.528312 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:23:44.528322 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:23:44.528332 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:23:44.528342 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:23:44.528352 systemd[1]: Stopped verity-setup.service. Feb 12 20:23:44.528363 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:23:44.528373 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:23:44.528383 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:23:44.528393 systemd[1]: Mounted media.mount. Feb 12 20:23:44.528404 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:23:44.528414 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:23:44.528424 systemd[1]: Mounted tmp.mount. Feb 12 20:23:44.528434 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:23:44.528444 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:23:44.528455 systemd[1]: Finished modprobe@configfs.service. 
Feb 12 20:23:44.528464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:23:44.528475 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:23:44.528487 systemd-journald[981]: Journal started Feb 12 20:23:44.528523 systemd-journald[981]: Runtime Journal (/run/log/journal/fb79f19c9dfd447e915d094e0f81e5a4) is 6.0M, max 48.5M, 42.5M free. Feb 12 20:23:41.815000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 20:23:42.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:23:42.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:23:42.357000 audit: BPF prog-id=10 op=LOAD Feb 12 20:23:42.357000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:23:42.357000 audit: BPF prog-id=11 op=LOAD Feb 12 20:23:42.357000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:23:42.385000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 20:23:42.385000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558ac a1=c0000d8de0 a2=c0000e1ac0 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.385000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:23:42.386000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 20:23:42.386000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155985 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.386000 audit: CWD cwd="/" Feb 12 20:23:42.386000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:42.386000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:42.386000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:23:44.423000 audit: BPF prog-id=12 op=LOAD Feb 12 20:23:44.423000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:23:44.423000 audit: BPF prog-id=13 op=LOAD Feb 12 20:23:44.423000 audit: BPF prog-id=14 op=LOAD Feb 12 20:23:44.423000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:23:44.423000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:23:44.424000 audit: 
BPF prog-id=15 op=LOAD Feb 12 20:23:44.424000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:23:44.424000 audit: BPF prog-id=16 op=LOAD Feb 12 20:23:44.424000 audit: BPF prog-id=17 op=LOAD Feb 12 20:23:44.424000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:23:44.424000 audit: BPF prog-id=14 op=UNLOAD Feb 12 20:23:44.425000 audit: BPF prog-id=18 op=LOAD Feb 12 20:23:44.425000 audit: BPF prog-id=15 op=UNLOAD Feb 12 20:23:44.425000 audit: BPF prog-id=19 op=LOAD Feb 12 20:23:44.425000 audit: BPF prog-id=20 op=LOAD Feb 12 20:23:44.425000 audit: BPF prog-id=16 op=UNLOAD Feb 12 20:23:44.425000 audit: BPF prog-id=17 op=UNLOAD Feb 12 20:23:44.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.529812 systemd[1]: Started systemd-journald.service. Feb 12 20:23:44.441000 audit: BPF prog-id=18 op=UNLOAD Feb 12 20:23:44.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:44.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.502000 audit: BPF prog-id=21 op=LOAD Feb 12 20:23:44.502000 audit: BPF prog-id=22 op=LOAD Feb 12 20:23:44.502000 audit: BPF prog-id=23 op=LOAD Feb 12 20:23:44.502000 audit: BPF prog-id=19 op=UNLOAD Feb 12 20:23:44.502000 audit: BPF prog-id=20 op=UNLOAD Feb 12 20:23:44.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:44.525000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:23:44.525000 audit[981]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff9a41aa80 a2=4000 a3=7fff9a41ab1c items=0 ppid=1 pid=981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:44.525000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:23:44.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:44.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.422349 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:23:42.384080 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:23:44.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.422360 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 20:23:42.384288 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:23:44.425755 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 12 20:23:42.384305 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:23:44.530550 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:23:44.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:42.384330 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 20:23:44.530670 systemd[1]: Finished modprobe@drm.service. Feb 12 20:23:42.384339 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 20:23:44.531366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:23:42.384365 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 20:23:44.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.531479 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 12 20:23:42.384375 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 20:23:44.532212 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:23:42.384559 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 20:23:44.532471 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:23:42.384590 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:23:44.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.533273 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:23:42.384602 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:23:44.534007 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 12 20:23:42.384888 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 20:23:44.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.534132 systemd[1]: Finished modprobe@loop.service. Feb 12 20:23:42.384919 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 20:23:44.534933 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:23:42.384935 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 20:23:44.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:42.384949 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 20:23:44.535746 systemd[1]: Finished systemd-network-generator.service. 
Feb 12 20:23:42.384963 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 20:23:42.384976 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 20:23:44.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.174075 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:23:44.536581 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:23:44.174331 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:23:44.174420 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:23:44.174577 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket 
/lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:23:44.174621 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 20:23:44.174672 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-02-12T20:23:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 20:23:44.537651 systemd[1]: Reached target network-pre.target. Feb 12 20:23:44.539037 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:23:44.540259 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:23:44.540865 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:23:44.541820 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:23:44.543257 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:23:44.544045 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:23:44.544792 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:23:44.545377 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:23:44.546131 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:23:44.547332 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:23:44.548429 systemd-journald[981]: Time spent on flushing to /var/log/journal/fb79f19c9dfd447e915d094e0f81e5a4 is 20.302ms for 1140 entries. Feb 12 20:23:44.548429 systemd-journald[981]: System Journal (/var/log/journal/fb79f19c9dfd447e915d094e0f81e5a4) is 8.0M, max 195.6M, 187.6M free. 
Feb 12 20:23:44.573392 systemd-journald[981]: Received client request to flush runtime journal. Feb 12 20:23:44.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.550544 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:23:44.551208 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:23:44.557594 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:23:44.558449 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:23:44.567411 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:23:44.568884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:23:44.570721 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:23:44.571526 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:23:44.572875 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:23:44.576673 systemd[1]: Finished systemd-journal-flush.service. 
Feb 12 20:23:44.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.580569 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:23:44.582341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:23:44.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.957639 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:23:44.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.958000 audit: BPF prog-id=24 op=LOAD Feb 12 20:23:44.958000 audit: BPF prog-id=25 op=LOAD Feb 12 20:23:44.958000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:23:44.958000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:23:44.959392 systemd[1]: Starting systemd-udevd.service... Feb 12 20:23:44.973993 systemd-udevd[1017]: Using default interface naming scheme 'v252'. Feb 12 20:23:44.984568 systemd[1]: Started systemd-udevd.service. Feb 12 20:23:44.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:44.986000 audit: BPF prog-id=26 op=LOAD Feb 12 20:23:44.987231 systemd[1]: Starting systemd-networkd.service... Feb 12 20:23:44.993189 systemd[1]: Starting systemd-userdbd.service... 
Feb 12 20:23:44.992000 audit: BPF prog-id=27 op=LOAD Feb 12 20:23:44.992000 audit: BPF prog-id=28 op=LOAD Feb 12 20:23:44.992000 audit: BPF prog-id=29 op=LOAD Feb 12 20:23:45.011623 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 20:23:45.022317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:23:45.031941 systemd[1]: Started systemd-userdbd.service. Feb 12 20:23:45.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.070289 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:23:45.059000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:23:45.074204 systemd-networkd[1028]: lo: Link UP Feb 12 20:23:45.074458 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:23:45.074215 systemd-networkd[1028]: lo: Gained carrier Feb 12 20:23:45.074560 systemd-networkd[1028]: Enumeration completed Feb 12 20:23:45.074635 systemd[1]: Started systemd-networkd.service. Feb 12 20:23:45.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.075405 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 20:23:45.076417 systemd-networkd[1028]: eth0: Link UP Feb 12 20:23:45.076427 systemd-networkd[1028]: eth0: Gained carrier Feb 12 20:23:45.059000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f92aae9a60 a1=32194 a2=7f9755d0cbc5 a3=5 items=108 ppid=1017 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:45.059000 audit: CWD cwd="/" Feb 12 20:23:45.059000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=1 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=2 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=3 name=(null) inode=12775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=4 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=5 name=(null) inode=12776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=6 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=7 name=(null) inode=12777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=8 name=(null) inode=12777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=9 name=(null) inode=12778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=10 name=(null) inode=12777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=11 name=(null) inode=12779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=12 name=(null) inode=12777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=13 name=(null) inode=12780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=14 name=(null) inode=12777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=15 name=(null) inode=12781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:23:45.059000 audit: PATH item=16 name=(null) inode=12777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=17 name=(null) inode=12782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=18 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=19 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=20 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=21 name=(null) inode=12784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=22 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=23 name=(null) inode=12785 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=24 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=25 
name=(null) inode=12786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=26 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=27 name=(null) inode=12787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=28 name=(null) inode=12783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=29 name=(null) inode=12788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=30 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=31 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=32 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=33 name=(null) inode=12790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=34 name=(null) inode=12789 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=35 name=(null) inode=12791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=36 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=37 name=(null) inode=12792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=38 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=39 name=(null) inode=12793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=40 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=41 name=(null) inode=12794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=42 name=(null) inode=12774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=43 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=44 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=45 name=(null) inode=12796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=46 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=47 name=(null) inode=12797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=48 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=49 name=(null) inode=12798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=50 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=51 name=(null) inode=12799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=52 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=53 name=(null) inode=12800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=55 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=56 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=57 name=(null) inode=12802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=58 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=59 name=(null) inode=12803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=60 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=61 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=62 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=63 name=(null) inode=12805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=64 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=65 name=(null) inode=12806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=66 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=67 name=(null) inode=12807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=68 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=69 name=(null) inode=12808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=70 name=(null) inode=12804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:23:45.059000 audit: PATH item=71 name=(null) inode=12809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=72 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=73 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=74 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=75 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=76 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=77 name=(null) inode=12812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=78 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=79 name=(null) inode=12813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=80 
name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=81 name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=82 name=(null) inode=12810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=83 name=(null) inode=12815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=84 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=85 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=86 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=87 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=88 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=89 name=(null) inode=12818 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=90 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=91 name=(null) inode=12819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=92 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=93 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=94 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=95 name=(null) inode=12821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=96 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=97 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=98 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=99 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=100 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=101 name=(null) inode=12824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=102 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=103 name=(null) inode=12827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=104 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=105 name=(null) inode=12828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=106 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PATH item=107 name=(null) inode=12829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:23:45.059000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:23:45.091290 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:23:45.092287 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:23:45.102430 systemd-networkd[1028]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:23:45.105283 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:23:45.156362 kernel: kvm: Nested Virtualization enabled Feb 12 20:23:45.156410 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:23:45.157329 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:23:45.157361 kernel: SVM: Virtual GIF supported Feb 12 20:23:45.171299 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:23:45.190601 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:23:45.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.192322 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:23:45.199219 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:23:45.225932 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:23:45.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.226699 systemd[1]: Reached target cryptsetup.target. Feb 12 20:23:45.228110 systemd[1]: Starting lvm2-activation.service... Feb 12 20:23:45.230680 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 12 20:23:45.254939 systemd[1]: Finished lvm2-activation.service. Feb 12 20:23:45.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.255708 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:23:45.256339 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:23:45.256361 systemd[1]: Reached target local-fs.target. Feb 12 20:23:45.256929 systemd[1]: Reached target machines.target. Feb 12 20:23:45.258517 systemd[1]: Starting ldconfig.service... Feb 12 20:23:45.259341 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:23:45.259379 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:23:45.260149 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:23:45.261893 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:23:45.264029 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:23:45.265244 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:23:45.265295 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:23:45.266292 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:23:45.269456 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) Feb 12 20:23:45.270710 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:23:45.275818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 12 20:23:45.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.279607 systemd-tmpfiles[1059]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:23:45.280084 systemd-tmpfiles[1059]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:23:45.281199 systemd-tmpfiles[1059]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:23:45.306732 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) Feb 12 20:23:45.306732 systemd-fsck[1064]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:23:45.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.308011 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:23:45.310448 systemd[1]: Mounting boot.mount... Feb 12 20:23:45.559720 systemd[1]: Mounted boot.mount. Feb 12 20:23:45.571963 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:23:45.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.579115 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:23:45.636383 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 12 20:23:45.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.638329 systemd[1]: Starting audit-rules.service... Feb 12 20:23:45.639645 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:23:45.641310 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:23:45.642000 audit: BPF prog-id=30 op=LOAD Feb 12 20:23:45.643615 systemd[1]: Starting systemd-resolved.service... Feb 12 20:23:45.646420 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:23:45.645000 audit: BPF prog-id=31 op=LOAD Feb 12 20:23:45.648220 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:23:45.649988 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:23:45.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.651001 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:23:45.653000 audit[1076]: SYSTEM_BOOT pid=1076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.656129 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:23:45.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.747313 systemd[1]: Started systemd-timesyncd.service. 
Feb 12 20:23:45.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:45.748335 systemd[1]: Reached target time-set.target. Feb 12 20:23:46.807069 systemd-timesyncd[1072]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:23:46.807109 systemd-timesyncd[1072]: Initial clock synchronization to Mon 2024-02-12 20:23:46.806996 UTC. Feb 12 20:23:46.808532 systemd-resolved[1070]: Positive Trust Anchors: Feb 12 20:23:46.808544 systemd-resolved[1070]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:23:46.808576 systemd-resolved[1070]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:23:46.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:46.810609 systemd[1]: Finished ldconfig.service. Feb 12 20:23:46.812436 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:23:46.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:46.814275 systemd[1]: Starting systemd-update-done.service... 
Feb 12 20:23:46.816713 systemd-resolved[1070]: Defaulting to hostname 'linux'. Feb 12 20:23:46.818188 systemd[1]: Started systemd-resolved.service. Feb 12 20:23:46.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:46.819109 systemd[1]: Finished systemd-update-done.service. Feb 12 20:23:46.819000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:23:46.819000 audit[1090]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe156f0ab0 a2=420 a3=0 items=0 ppid=1067 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:46.819000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:23:46.819606 augenrules[1090]: No rules Feb 12 20:23:46.820010 systemd[1]: Finished audit-rules.service. Feb 12 20:23:46.820631 systemd[1]: Reached target network.target. Feb 12 20:23:46.821180 systemd[1]: Reached target nss-lookup.target. Feb 12 20:23:46.821751 systemd[1]: Reached target sysinit.target. Feb 12 20:23:46.822391 systemd[1]: Started motdgen.path. Feb 12 20:23:46.822901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:23:46.823750 systemd[1]: Started logrotate.timer. Feb 12 20:23:46.824395 systemd[1]: Started mdadm.timer. Feb 12 20:23:46.824884 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:23:46.825675 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:23:46.825708 systemd[1]: Reached target paths.target. 
Feb 12 20:23:46.826326 systemd[1]: Reached target timers.target. Feb 12 20:23:46.827271 systemd[1]: Listening on dbus.socket. Feb 12 20:23:46.828617 systemd[1]: Starting docker.socket... Feb 12 20:23:46.830592 systemd[1]: Listening on sshd.socket. Feb 12 20:23:46.831216 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:23:46.831508 systemd[1]: Listening on docker.socket. Feb 12 20:23:46.832081 systemd[1]: Reached target sockets.target. Feb 12 20:23:46.832630 systemd[1]: Reached target basic.target. Feb 12 20:23:46.833209 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:23:46.833230 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:23:46.833851 systemd[1]: Starting containerd.service... Feb 12 20:23:46.835110 systemd[1]: Starting dbus.service... Feb 12 20:23:46.836235 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:23:46.837530 systemd[1]: Starting extend-filesystems.service... Feb 12 20:23:46.838104 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:23:46.838836 systemd[1]: Starting motdgen.service... Feb 12 20:23:46.840025 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:23:46.841230 systemd[1]: Starting prepare-critools.service... Feb 12 20:23:46.842390 systemd[1]: Starting prepare-helm.service... Feb 12 20:23:46.843573 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:23:46.844837 systemd[1]: Starting sshd-keygen.service... Feb 12 20:23:46.847262 systemd[1]: Starting systemd-logind.service... 
Feb 12 20:23:46.847789 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:23:46.847825 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:23:46.848102 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:23:46.848553 systemd[1]: Starting update-engine.service... Feb 12 20:23:46.849759 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:23:46.868690 jq[1109]: true Feb 12 20:23:46.869669 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:23:46.869844 tar[1111]: ./ Feb 12 20:23:46.869844 tar[1111]: ./macvlan Feb 12 20:23:46.869819 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:23:46.871589 tar[1113]: linux-amd64/helm Feb 12 20:23:46.873430 jq[1099]: false Feb 12 20:23:46.874510 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:23:46.874645 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:23:46.877732 jq[1121]: true Feb 12 20:23:46.884225 tar[1112]: crictl Feb 12 20:23:46.891916 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:23:46.892083 systemd[1]: Finished motdgen.service. Feb 12 20:23:46.906274 env[1114]: time="2024-02-12T20:23:46.906219478Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:23:46.910571 tar[1111]: ./static Feb 12 20:23:46.929409 env[1114]: time="2024-02-12T20:23:46.929344173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 20:23:46.929556 env[1114]: time="2024-02-12T20:23:46.929529561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:23:46.930860 env[1114]: time="2024-02-12T20:23:46.930826474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:23:46.930860 env[1114]: time="2024-02-12T20:23:46.930855127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931060 env[1114]: time="2024-02-12T20:23:46.931031568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931060 env[1114]: time="2024-02-12T20:23:46.931052387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931161 env[1114]: time="2024-02-12T20:23:46.931063318Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:23:46.931161 env[1114]: time="2024-02-12T20:23:46.931072515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931161 env[1114]: time="2024-02-12T20:23:46.931139360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931364 env[1114]: time="2024-02-12T20:23:46.931336580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931475 env[1114]: time="2024-02-12T20:23:46.931448811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:23:46.931475 env[1114]: time="2024-02-12T20:23:46.931467806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:23:46.931554 env[1114]: time="2024-02-12T20:23:46.931509855Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:23:46.931554 env[1114]: time="2024-02-12T20:23:46.931519333Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:23:46.933019 tar[1111]: ./vlan Feb 12 20:23:46.950015 extend-filesystems[1100]: Found sr0 Feb 12 20:23:46.961282 dbus-daemon[1098]: [system] SELinux support is enabled Feb 12 20:23:46.961415 systemd[1]: Started dbus.service. Feb 12 20:23:46.963913 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:23:46.963940 systemd[1]: Reached target system-config.target. Feb 12 20:23:46.964666 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:23:46.964687 systemd[1]: Reached target user-config.target. 
Feb 12 20:23:46.964916 extend-filesystems[1100]: Found vda Feb 12 20:23:46.966513 extend-filesystems[1100]: Found vda1 Feb 12 20:23:46.966513 extend-filesystems[1100]: Found vda2 Feb 12 20:23:46.976591 extend-filesystems[1100]: Found vda3 Feb 12 20:23:46.976591 extend-filesystems[1100]: Found usr Feb 12 20:23:46.976591 extend-filesystems[1100]: Found vda4 Feb 12 20:23:46.976591 extend-filesystems[1100]: Found vda6 Feb 12 20:23:46.976591 extend-filesystems[1100]: Found vda7 Feb 12 20:23:46.976591 extend-filesystems[1100]: Found vda9 Feb 12 20:23:46.976591 extend-filesystems[1100]: Checking size of /dev/vda9 Feb 12 20:23:46.987953 tar[1111]: ./portmap Feb 12 20:23:46.966956 systemd-logind[1107]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:23:46.989166 update_engine[1108]: I0212 20:23:46.978608 1108 main.cc:92] Flatcar Update Engine starting Feb 12 20:23:46.989166 update_engine[1108]: I0212 20:23:46.980027 1108 update_check_scheduler.cc:74] Next update check in 8m53s Feb 12 20:23:46.966974 systemd-logind[1107]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:23:46.973029 systemd-logind[1107]: New seat seat0. Feb 12 20:23:46.980000 systemd[1]: Started update-engine.service. Feb 12 20:23:46.982672 systemd[1]: Started locksmithd.service. Feb 12 20:23:46.988267 systemd[1]: Started systemd-logind.service. Feb 12 20:23:47.003293 tar[1111]: ./host-local Feb 12 20:23:47.023779 extend-filesystems[1100]: Resized partition /dev/vda9 Feb 12 20:23:47.029413 tar[1111]: ./vrf Feb 12 20:23:47.041945 extend-filesystems[1160]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:23:47.055713 systemd[1]: Created slice system-sshd.slice. 
Feb 12 20:23:47.070307 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:23:47.113396 tar[1111]: ./bridge Feb 12 20:23:47.161177 tar[1111]: ./tuning Feb 12 20:23:47.187548 tar[1111]: ./firewall Feb 12 20:23:47.201167 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 20:23:47.214864 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:23:47.234725 locksmithd[1157]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:23:47.264950 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:23:47.266695 extend-filesystems[1160]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:23:47.266695 extend-filesystems[1160]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:23:47.266695 extend-filesystems[1160]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 20:23:47.285473 extend-filesystems[1100]: Resized filesystem in /dev/vda9 Feb 12 20:23:47.288313 bash[1152]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267087683Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267126266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267162834Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267203240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267217066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267229499Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267241151Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267254506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267266098Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267277640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267289362Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267300182Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267382547Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:23:47.288393 env[1114]: time="2024-02-12T20:23:47.267460984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:23:47.267371 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267666489Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267686867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267698900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267738825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267749084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267760065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267769633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267779982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267790742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267800651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267810599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267821119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267918993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267931276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.288773 env[1114]: time="2024-02-12T20:23:47.267941194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.267495 systemd[1]: Finished extend-filesystems.service. Feb 12 20:23:47.289118 env[1114]: time="2024-02-12T20:23:47.267950812Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:23:47.289118 env[1114]: time="2024-02-12T20:23:47.267964428Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:23:47.289118 env[1114]: time="2024-02-12T20:23:47.267974306Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:23:47.289118 env[1114]: time="2024-02-12T20:23:47.267990427Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:23:47.289118 env[1114]: time="2024-02-12T20:23:47.268024721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:23:47.287434 systemd[1]: Started containerd.service. 
Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.268308423Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.268373295Z" level=info msg="Connect containerd service" Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.268455098Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.268878262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.269084509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.269115878Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:23:47.289268 env[1114]: time="2024-02-12T20:23:47.269163758Z" level=info msg="containerd successfully booted in 0.368711s" Feb 12 20:23:47.289680 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 12 20:23:47.298277 env[1114]: time="2024-02-12T20:23:47.298228014Z" level=info msg="Start subscribing containerd event" Feb 12 20:23:47.298320 env[1114]: time="2024-02-12T20:23:47.298299468Z" level=info msg="Start recovering state" Feb 12 20:23:47.298394 env[1114]: time="2024-02-12T20:23:47.298365953Z" level=info msg="Start event monitor" Feb 12 20:23:47.298394 env[1114]: time="2024-02-12T20:23:47.298382143Z" level=info msg="Start snapshots syncer" Feb 12 20:23:47.298456 env[1114]: time="2024-02-12T20:23:47.298402832Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:23:47.298456 env[1114]: time="2024-02-12T20:23:47.298411839Z" level=info msg="Start streaming server" Feb 12 20:23:47.310656 tar[1111]: ./host-device Feb 12 20:23:47.334479 sshd_keygen[1140]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:23:47.340305 tar[1111]: ./sbr Feb 12 20:23:47.352096 systemd[1]: Finished sshd-keygen.service. Feb 12 20:23:47.353903 systemd[1]: Starting issuegen.service... Feb 12 20:23:47.355128 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:55012.service. Feb 12 20:23:47.368428 tar[1111]: ./loopback Feb 12 20:23:47.365852 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:23:47.365974 systemd[1]: Finished issuegen.service. Feb 12 20:23:47.367641 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:23:47.374733 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:23:47.376277 tar[1113]: linux-amd64/LICENSE Feb 12 20:23:47.376277 tar[1113]: linux-amd64/README.md Feb 12 20:23:47.377619 systemd[1]: Started getty@tty1.service. Feb 12 20:23:47.379076 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:23:47.380122 systemd[1]: Reached target getty.target. Feb 12 20:23:47.383675 systemd[1]: Finished prepare-helm.service. 
Feb 12 20:23:47.394553 tar[1111]: ./dhcp Feb 12 20:23:47.405824 sshd[1177]: Accepted publickey for core from 10.0.0.1 port 55012 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:23:47.407104 sshd[1177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:23:47.414174 systemd[1]: Created slice user-500.slice. Feb 12 20:23:47.415987 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:23:47.418800 systemd-logind[1107]: New session 1 of user core. Feb 12 20:23:47.422332 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:23:47.423989 systemd[1]: Starting user@500.service... Feb 12 20:23:47.426101 (systemd)[1186]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:23:47.435897 systemd[1]: Finished prepare-critools.service. Feb 12 20:23:47.463691 tar[1111]: ./ptp Feb 12 20:23:47.487045 systemd[1186]: Queued start job for default target default.target. Feb 12 20:23:47.487455 systemd[1186]: Reached target paths.target. Feb 12 20:23:47.487475 systemd[1186]: Reached target sockets.target. Feb 12 20:23:47.487486 systemd[1186]: Reached target timers.target. Feb 12 20:23:47.487497 systemd[1186]: Reached target basic.target. Feb 12 20:23:47.487530 systemd[1186]: Reached target default.target. Feb 12 20:23:47.487549 systemd[1186]: Startup finished in 57ms. Feb 12 20:23:47.487607 systemd[1]: Started user@500.service. Feb 12 20:23:47.489062 systemd[1]: Started session-1.scope. Feb 12 20:23:47.491843 tar[1111]: ./ipvlan Feb 12 20:23:47.518939 tar[1111]: ./bandwidth Feb 12 20:23:47.540255 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:34926.service. Feb 12 20:23:47.554828 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:23:47.556524 systemd[1]: Reached target multi-user.target. Feb 12 20:23:47.559240 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:23:47.569006 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 12 20:23:47.569188 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:23:47.570234 systemd[1]: Startup finished in 499ms (kernel) + 6.112s (initrd) + 4.733s (userspace) = 11.346s. Feb 12 20:23:47.582884 sshd[1196]: Accepted publickey for core from 10.0.0.1 port 34926 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:23:47.584264 sshd[1196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:23:47.587551 systemd-logind[1107]: New session 2 of user core. Feb 12 20:23:47.588441 systemd[1]: Started session-2.scope. Feb 12 20:23:47.640836 sshd[1196]: pam_unix(sshd:session): session closed for user core Feb 12 20:23:47.643310 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:34926.service: Deactivated successfully. Feb 12 20:23:47.643809 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:23:47.644339 systemd-logind[1107]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:23:47.645176 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:34928.service. Feb 12 20:23:47.645936 systemd-logind[1107]: Removed session 2. Feb 12 20:23:47.679871 sshd[1205]: Accepted publickey for core from 10.0.0.1 port 34928 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:23:47.681208 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:23:47.684323 systemd-logind[1107]: New session 3 of user core. Feb 12 20:23:47.684968 systemd[1]: Started session-3.scope. Feb 12 20:23:47.734470 sshd[1205]: pam_unix(sshd:session): session closed for user core Feb 12 20:23:47.736664 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:34942.service. Feb 12 20:23:47.737023 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:34928.service: Deactivated successfully. Feb 12 20:23:47.737587 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:23:47.738001 systemd-logind[1107]: Session 3 logged out. Waiting for processes to exit. 
Feb 12 20:23:47.738711 systemd-logind[1107]: Removed session 3. Feb 12 20:23:47.769861 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 34942 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:23:47.771093 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:23:47.774352 systemd-logind[1107]: New session 4 of user core. Feb 12 20:23:47.774966 systemd[1]: Started session-4.scope. Feb 12 20:23:47.808296 systemd-networkd[1028]: eth0: Gained IPv6LL Feb 12 20:23:47.826911 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 12 20:23:47.829301 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:34942.service: Deactivated successfully. Feb 12 20:23:47.829748 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:23:47.830165 systemd-logind[1107]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:23:47.830993 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:34952.service. Feb 12 20:23:47.831630 systemd-logind[1107]: Removed session 4. Feb 12 20:23:47.862269 sshd[1217]: Accepted publickey for core from 10.0.0.1 port 34952 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:23:47.863189 sshd[1217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:23:47.866254 systemd-logind[1107]: New session 5 of user core. Feb 12 20:23:47.866938 systemd[1]: Started session-5.scope. Feb 12 20:23:47.920222 sudo[1220]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:23:47.920387 sudo[1220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:23:48.428984 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:23:48.433211 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:23:48.433452 systemd[1]: Reached target network-online.target. Feb 12 20:23:48.434467 systemd[1]: Starting docker.service... 
Feb 12 20:23:48.463727 env[1238]: time="2024-02-12T20:23:48.463663668Z" level=info msg="Starting up" Feb 12 20:23:48.465014 env[1238]: time="2024-02-12T20:23:48.464971261Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:23:48.465014 env[1238]: time="2024-02-12T20:23:48.464995727Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:23:48.465092 env[1238]: time="2024-02-12T20:23:48.465021735Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:23:48.465092 env[1238]: time="2024-02-12T20:23:48.465045951Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:23:48.466417 env[1238]: time="2024-02-12T20:23:48.466397466Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:23:48.466417 env[1238]: time="2024-02-12T20:23:48.466414908Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:23:48.466531 env[1238]: time="2024-02-12T20:23:48.466424847Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:23:48.466531 env[1238]: time="2024-02-12T20:23:48.466431910Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:23:49.108797 env[1238]: time="2024-02-12T20:23:49.108742381Z" level=info msg="Loading containers: start." Feb 12 20:23:49.192162 kernel: Initializing XFRM netlink socket Feb 12 20:23:49.217319 env[1238]: time="2024-02-12T20:23:49.217287872Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 20:23:49.269609 systemd-networkd[1028]: docker0: Link UP Feb 12 20:23:49.278611 env[1238]: time="2024-02-12T20:23:49.278588183Z" level=info msg="Loading containers: done." 
Feb 12 20:23:49.289183 env[1238]: time="2024-02-12T20:23:49.289124476Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:23:49.289355 env[1238]: time="2024-02-12T20:23:49.289287963Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:23:49.289389 env[1238]: time="2024-02-12T20:23:49.289380447Z" level=info msg="Daemon has completed initialization" Feb 12 20:23:49.302374 systemd[1]: Started docker.service. Feb 12 20:23:49.308735 env[1238]: time="2024-02-12T20:23:49.308690316Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:23:49.322182 systemd[1]: Reloading. Feb 12 20:23:49.378221 /usr/lib/systemd/system-generators/torcx-generator[1376]: time="2024-02-12T20:23:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:23:49.378928 /usr/lib/systemd/system-generators/torcx-generator[1376]: time="2024-02-12T20:23:49Z" level=info msg="torcx already run" Feb 12 20:23:49.439283 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:23:49.439303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:23:49.459559 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:23:49.535182 systemd[1]: Started kubelet.service. 
Feb 12 20:23:49.578236 kubelet[1417]: E0212 20:23:49.578175 1417 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:23:49.580083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:23:49.580262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:23:49.921533 env[1114]: time="2024-02-12T20:23:49.921486727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 20:23:50.555272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663174724.mount: Deactivated successfully. Feb 12 20:23:52.240023 env[1114]: time="2024-02-12T20:23:52.239969128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:52.241642 env[1114]: time="2024-02-12T20:23:52.241603553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:52.243124 env[1114]: time="2024-02-12T20:23:52.243104809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:52.244484 env[1114]: time="2024-02-12T20:23:52.244457737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:52.245077 env[1114]: time="2024-02-12T20:23:52.245054586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference 
\"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 20:23:52.252617 env[1114]: time="2024-02-12T20:23:52.252583819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 20:23:54.609726 env[1114]: time="2024-02-12T20:23:54.609662245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:54.611926 env[1114]: time="2024-02-12T20:23:54.611899170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:54.613753 env[1114]: time="2024-02-12T20:23:54.613729924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:54.615677 env[1114]: time="2024-02-12T20:23:54.615650918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:54.616320 env[1114]: time="2024-02-12T20:23:54.616186943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 20:23:54.624647 env[1114]: time="2024-02-12T20:23:54.624605484Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 20:23:55.803706 env[1114]: time="2024-02-12T20:23:55.803641929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
20:23:55.805783 env[1114]: time="2024-02-12T20:23:55.805738181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:55.807369 env[1114]: time="2024-02-12T20:23:55.807344254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:55.809242 env[1114]: time="2024-02-12T20:23:55.809210234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:55.810243 env[1114]: time="2024-02-12T20:23:55.810205891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 20:23:55.819410 env[1114]: time="2024-02-12T20:23:55.819362987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:23:57.275214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669913739.mount: Deactivated successfully. 
Feb 12 20:23:57.724752 env[1114]: time="2024-02-12T20:23:57.724689415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:57.726586 env[1114]: time="2024-02-12T20:23:57.726559683Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:57.727860 env[1114]: time="2024-02-12T20:23:57.727832521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:57.729258 env[1114]: time="2024-02-12T20:23:57.729208061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:57.729580 env[1114]: time="2024-02-12T20:23:57.729552868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:23:57.739984 env[1114]: time="2024-02-12T20:23:57.739940382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:23:58.187922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596023009.mount: Deactivated successfully. 
Feb 12 20:23:58.198420 env[1114]: time="2024-02-12T20:23:58.198366969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:58.201422 env[1114]: time="2024-02-12T20:23:58.201381082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:58.203180 env[1114]: time="2024-02-12T20:23:58.203128801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:58.204651 env[1114]: time="2024-02-12T20:23:58.204620699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:58.205185 env[1114]: time="2024-02-12T20:23:58.205162936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:23:58.214127 env[1114]: time="2024-02-12T20:23:58.214076225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 20:23:59.747869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:23:59.748024 systemd[1]: Stopped kubelet.service. Feb 12 20:23:59.749411 systemd[1]: Started kubelet.service. 
Feb 12 20:23:59.809732 kubelet[1475]: E0212 20:23:59.809683 1475 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:23:59.812928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:23:59.813081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:23:59.913798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290589489.mount: Deactivated successfully. Feb 12 20:24:05.698268 env[1114]: time="2024-02-12T20:24:05.698210231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:05.700078 env[1114]: time="2024-02-12T20:24:05.700054831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:05.701737 env[1114]: time="2024-02-12T20:24:05.701712971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:05.703318 env[1114]: time="2024-02-12T20:24:05.703300749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:05.703724 env[1114]: time="2024-02-12T20:24:05.703705449Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 20:24:05.711536 env[1114]: time="2024-02-12T20:24:05.711504487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 
20:24:06.229646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849019005.mount: Deactivated successfully. Feb 12 20:24:06.864303 env[1114]: time="2024-02-12T20:24:06.864234880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:06.865935 env[1114]: time="2024-02-12T20:24:06.865895826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:06.867387 env[1114]: time="2024-02-12T20:24:06.867358649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:06.868555 env[1114]: time="2024-02-12T20:24:06.868518525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:06.868907 env[1114]: time="2024-02-12T20:24:06.868879372Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 20:24:09.367675 systemd[1]: Stopped kubelet.service. Feb 12 20:24:09.379760 systemd[1]: Reloading. 
Feb 12 20:24:09.434293 /usr/lib/systemd/system-generators/torcx-generator[1581]: time="2024-02-12T20:24:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:09.434320 /usr/lib/systemd/system-generators/torcx-generator[1581]: time="2024-02-12T20:24:09Z" level=info msg="torcx already run" Feb 12 20:24:09.491804 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:09.491820 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:09.510308 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:09.580311 systemd[1]: Started kubelet.service. Feb 12 20:24:09.625286 kubelet[1622]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:09.625286 kubelet[1622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:09.625640 kubelet[1622]: I0212 20:24:09.625593 1622 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:24:09.627059 kubelet[1622]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:09.627059 kubelet[1622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:09.947279 kubelet[1622]: I0212 20:24:09.947163 1622 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:24:09.947279 kubelet[1622]: I0212 20:24:09.947192 1622 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:24:09.947445 kubelet[1622]: I0212 20:24:09.947393 1622 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:24:09.949674 kubelet[1622]: I0212 20:24:09.949644 1622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:24:09.952560 kubelet[1622]: E0212 20:24:09.952535 1622 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.954756 kubelet[1622]: I0212 20:24:09.954740 1622 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:24:09.954908 kubelet[1622]: I0212 20:24:09.954895 1622 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:24:09.954963 kubelet[1622]: I0212 20:24:09.954952 1622 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:24:09.955044 kubelet[1622]: I0212 20:24:09.954971 1622 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:24:09.955044 kubelet[1622]: I0212 20:24:09.954982 1622 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:24:09.955091 kubelet[1622]: I0212 20:24:09.955052 1622 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 20:24:09.959835 kubelet[1622]: I0212 20:24:09.959816 1622 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:24:09.959897 kubelet[1622]: I0212 20:24:09.959841 1622 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:24:09.959897 kubelet[1622]: I0212 20:24:09.959875 1622 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:24:09.959897 kubelet[1622]: I0212 20:24:09.959895 1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:24:09.960404 kubelet[1622]: W0212 20:24:09.960340 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.960462 kubelet[1622]: E0212 20:24:09.960419 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.960704 kubelet[1622]: W0212 20:24:09.960656 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.960756 kubelet[1622]: E0212 20:24:09.960713 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.960927 kubelet[1622]: I0212 20:24:09.960911 1622 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 
12 20:24:09.961327 kubelet[1622]: W0212 20:24:09.961299 1622 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:24:09.961740 kubelet[1622]: I0212 20:24:09.961717 1622 server.go:1186] "Started kubelet" Feb 12 20:24:09.961913 kubelet[1622]: I0212 20:24:09.961896 1622 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:24:09.962525 kubelet[1622]: E0212 20:24:09.962396 1622 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b337443f3b2736", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 9, 961695030, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 9, 961695030, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.70:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.70:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:24:09.962887 kubelet[1622]: E0212 20:24:09.962849 1622 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data 
in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:24:09.962937 kubelet[1622]: E0212 20:24:09.962892 1622 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:24:09.964673 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 20:24:09.964802 kubelet[1622]: I0212 20:24:09.964769 1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:24:09.965166 kubelet[1622]: I0212 20:24:09.964909 1622 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:24:09.965166 kubelet[1622]: I0212 20:24:09.965128 1622 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:24:09.965578 kubelet[1622]: E0212 20:24:09.965568 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:09.965969 kubelet[1622]: I0212 20:24:09.965955 1622 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:24:09.966288 kubelet[1622]: W0212 20:24:09.966262 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.966374 kubelet[1622]: E0212 20:24:09.966360 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.966564 kubelet[1622]: E0212 20:24:09.966187 1622 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get 
"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:09.986369 kubelet[1622]: I0212 20:24:09.986346 1622 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:24:09.986369 kubelet[1622]: I0212 20:24:09.986363 1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:24:09.986369 kubelet[1622]: I0212 20:24:09.986376 1622 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:09.989169 kubelet[1622]: I0212 20:24:09.989151 1622 policy_none.go:49] "None policy: Start" Feb 12 20:24:09.989503 kubelet[1622]: I0212 20:24:09.989491 1622 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:24:09.989556 kubelet[1622]: I0212 20:24:09.989508 1622 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:24:09.993525 systemd[1]: Created slice kubepods.slice. Feb 12 20:24:09.996776 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 20:24:09.997489 kubelet[1622]: I0212 20:24:09.997458 1622 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:24:09.998890 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 20:24:10.004850 kubelet[1622]: I0212 20:24:10.004826 1622 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:24:10.005006 kubelet[1622]: I0212 20:24:10.004986 1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:24:10.005628 kubelet[1622]: E0212 20:24:10.005616 1622 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 20:24:10.012885 kubelet[1622]: I0212 20:24:10.012868 1622 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:24:10.012885 kubelet[1622]: I0212 20:24:10.012884 1622 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:24:10.012983 kubelet[1622]: I0212 20:24:10.012898 1622 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:24:10.012983 kubelet[1622]: E0212 20:24:10.012932 1622 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:24:10.013523 kubelet[1622]: W0212 20:24:10.013491 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:10.013567 kubelet[1622]: E0212 20:24:10.013529 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:10.066995 kubelet[1622]: I0212 20:24:10.066971 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:10.067281 kubelet[1622]: E0212 20:24:10.067265 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 12 20:24:10.113466 kubelet[1622]: I0212 20:24:10.113415 1622 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:10.114483 kubelet[1622]: I0212 20:24:10.114461 1622 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:10.115006 kubelet[1622]: I0212 20:24:10.114992 1622 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:10.115938 kubelet[1622]: I0212 20:24:10.115911 1622 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d 
pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.70:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.70:6443: connect: connection refused" Feb 12 20:24:10.116503 kubelet[1622]: I0212 20:24:10.116074 1622 status_manager.go:698] "Failed to get status for pod" podUID=20c4a9ff048baf683c2a68ea284c1ce6 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.70:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.70:6443: connect: connection refused" Feb 12 20:24:10.116503 kubelet[1622]: I0212 20:24:10.116427 1622 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.70:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.70:6443: connect: connection refused" Feb 12 20:24:10.119353 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice. Feb 12 20:24:10.140886 systemd[1]: Created slice kubepods-burstable-pod20c4a9ff048baf683c2a68ea284c1ce6.slice. Feb 12 20:24:10.154714 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice. 
Feb 12 20:24:10.167239 kubelet[1622]: E0212 20:24:10.167205 1622 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:10.267836 kubelet[1622]: I0212 20:24:10.267675 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c4a9ff048baf683c2a68ea284c1ce6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"20c4a9ff048baf683c2a68ea284c1ce6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:10.267836 kubelet[1622]: I0212 20:24:10.267720 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:10.267836 kubelet[1622]: I0212 20:24:10.267739 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:10.267836 kubelet[1622]: I0212 20:24:10.267757 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:10.267836 kubelet[1622]: I0212 20:24:10.267789 1622 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:10.268079 kubelet[1622]: I0212 20:24:10.267824 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c4a9ff048baf683c2a68ea284c1ce6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"20c4a9ff048baf683c2a68ea284c1ce6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:10.268079 kubelet[1622]: I0212 20:24:10.267846 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:10.268079 kubelet[1622]: I0212 20:24:10.267864 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:24:10.268079 kubelet[1622]: I0212 20:24:10.267881 1622 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c4a9ff048baf683c2a68ea284c1ce6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"20c4a9ff048baf683c2a68ea284c1ce6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:10.268835 kubelet[1622]: I0212 20:24:10.268818 1622 
kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:10.269157 kubelet[1622]: E0212 20:24:10.269113 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 12 20:24:10.439758 kubelet[1622]: E0212 20:24:10.439719 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:10.440343 env[1114]: time="2024-02-12T20:24:10.440304054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:10.453655 kubelet[1622]: E0212 20:24:10.453620 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:10.453946 env[1114]: time="2024-02-12T20:24:10.453913261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:20c4a9ff048baf683c2a68ea284c1ce6,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:10.456000 kubelet[1622]: E0212 20:24:10.455982 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:10.456252 env[1114]: time="2024-02-12T20:24:10.456225929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:10.568053 kubelet[1622]: E0212 20:24:10.568017 1622 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 
10.0.0.70:6443: connect: connection refused Feb 12 20:24:10.670333 kubelet[1622]: I0212 20:24:10.670311 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:10.670630 kubelet[1622]: E0212 20:24:10.670614 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 12 20:24:10.755099 kubelet[1622]: E0212 20:24:10.754977 1622 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b337443f3b2736", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 9, 961695030, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 9, 961695030, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.70:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.70:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:24:11.087621 kubelet[1622]: W0212 20:24:11.087543 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list 
*v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.087621 kubelet[1622]: E0212 20:24:11.087614 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.213436 kubelet[1622]: W0212 20:24:11.213375 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.213436 kubelet[1622]: E0212 20:24:11.213420 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.352274 kubelet[1622]: W0212 20:24:11.352126 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.352274 kubelet[1622]: E0212 20:24:11.352195 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.368754 kubelet[1622]: E0212 20:24:11.368708 1622 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get 
"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.472258 kubelet[1622]: I0212 20:24:11.472223 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:11.472631 kubelet[1622]: E0212 20:24:11.472604 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 12 20:24:11.549240 kubelet[1622]: W0212 20:24:11.549168 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.549240 kubelet[1622]: E0212 20:24:11.549224 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:11.968887 kubelet[1622]: E0212 20:24:11.968849 1622 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:12.969963 kubelet[1622]: E0212 20:24:12.969919 1622 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:12.972470 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2408645864.mount: Deactivated successfully. Feb 12 20:24:12.984879 kubelet[1622]: W0212 20:24:12.984333 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:12.984879 kubelet[1622]: E0212 20:24:12.984380 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.064112 kubelet[1622]: W0212 20:24:13.063942 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.064112 kubelet[1622]: E0212 20:24:13.064006 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.077236 kubelet[1622]: I0212 20:24:13.077196 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:13.077845 kubelet[1622]: E0212 20:24:13.077829 1622 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Feb 12 20:24:13.081601 env[1114]: time="2024-02-12T20:24:13.081533916Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.169977 env[1114]: time="2024-02-12T20:24:13.169906738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.207773 env[1114]: time="2024-02-12T20:24:13.206931817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.230858 env[1114]: time="2024-02-12T20:24:13.227636433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.268219 env[1114]: time="2024-02-12T20:24:13.267077483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.286747 env[1114]: time="2024-02-12T20:24:13.285924174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.302771 kubelet[1622]: W0212 20:24:13.299197 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.302771 kubelet[1622]: E0212 20:24:13.299246 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.314676 env[1114]: time="2024-02-12T20:24:13.314389907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.364408 env[1114]: time="2024-02-12T20:24:13.364060715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.380762 env[1114]: time="2024-02-12T20:24:13.377489063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.397761 env[1114]: time="2024-02-12T20:24:13.395983413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.417384 env[1114]: time="2024-02-12T20:24:13.416005379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.445009 env[1114]: time="2024-02-12T20:24:13.444856225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:13.691563 env[1114]: time="2024-02-12T20:24:13.691176481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:13.691563 env[1114]: time="2024-02-12T20:24:13.691279294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:13.691563 env[1114]: time="2024-02-12T20:24:13.691306645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:13.691563 env[1114]: time="2024-02-12T20:24:13.691440647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7724ab36594c748403ba59d10ad715d786df142011f78b7120ece046c58bf6ca pid=1700 runtime=io.containerd.runc.v2 Feb 12 20:24:13.718962 systemd[1]: Started cri-containerd-7724ab36594c748403ba59d10ad715d786df142011f78b7120ece046c58bf6ca.scope. Feb 12 20:24:13.750570 kubelet[1622]: W0212 20:24:13.750497 1622 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.750570 kubelet[1622]: E0212 20:24:13.750544 1622 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Feb 12 20:24:13.808886 env[1114]: time="2024-02-12T20:24:13.808787396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:13.808886 env[1114]: time="2024-02-12T20:24:13.808835376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:13.808886 env[1114]: time="2024-02-12T20:24:13.808849482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:13.809156 env[1114]: time="2024-02-12T20:24:13.808985327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/511334c23297e82ff216553c6dcd863677c335e5ce3d3dec9c875b398f006a38 pid=1731 runtime=io.containerd.runc.v2 Feb 12 20:24:13.825530 env[1114]: time="2024-02-12T20:24:13.822560580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:13.825530 env[1114]: time="2024-02-12T20:24:13.822618008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:13.825530 env[1114]: time="2024-02-12T20:24:13.822633707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:13.825530 env[1114]: time="2024-02-12T20:24:13.822808525Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc61085b1b9b83f5739374e5983f45f9a1d20fb9f5548d416aaf1d76fbcad633 pid=1756 runtime=io.containerd.runc.v2 Feb 12 20:24:13.827781 env[1114]: time="2024-02-12T20:24:13.827722352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:20c4a9ff048baf683c2a68ea284c1ce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7724ab36594c748403ba59d10ad715d786df142011f78b7120ece046c58bf6ca\"" Feb 12 20:24:13.829397 kubelet[1622]: E0212 20:24:13.829235 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:13.836805 env[1114]: time="2024-02-12T20:24:13.836752139Z" level=info msg="CreateContainer within sandbox \"7724ab36594c748403ba59d10ad715d786df142011f78b7120ece046c58bf6ca\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 20:24:13.848987 systemd[1]: Started cri-containerd-511334c23297e82ff216553c6dcd863677c335e5ce3d3dec9c875b398f006a38.scope. Feb 12 20:24:13.872850 systemd[1]: Started cri-containerd-dc61085b1b9b83f5739374e5983f45f9a1d20fb9f5548d416aaf1d76fbcad633.scope. Feb 12 20:24:13.948764 env[1114]: time="2024-02-12T20:24:13.948605174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"511334c23297e82ff216553c6dcd863677c335e5ce3d3dec9c875b398f006a38\"" Feb 12 20:24:13.951346 env[1114]: time="2024-02-12T20:24:13.950985418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc61085b1b9b83f5739374e5983f45f9a1d20fb9f5548d416aaf1d76fbcad633\"" Feb 12 20:24:13.951497 systemd[1]: run-containerd-runc-k8s.io-7724ab36594c748403ba59d10ad715d786df142011f78b7120ece046c58bf6ca-runc.Nz30to.mount: Deactivated successfully. 
Feb 12 20:24:13.955444 kubelet[1622]: E0212 20:24:13.955415 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:13.955626 kubelet[1622]: E0212 20:24:13.955591 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:13.960247 env[1114]: time="2024-02-12T20:24:13.960192969Z" level=info msg="CreateContainer within sandbox \"dc61085b1b9b83f5739374e5983f45f9a1d20fb9f5548d416aaf1d76fbcad633\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 20:24:13.961709 env[1114]: time="2024-02-12T20:24:13.961658198Z" level=info msg="CreateContainer within sandbox \"511334c23297e82ff216553c6dcd863677c335e5ce3d3dec9c875b398f006a38\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 20:24:14.038613 env[1114]: time="2024-02-12T20:24:14.037735837Z" level=info msg="CreateContainer within sandbox \"7724ab36594c748403ba59d10ad715d786df142011f78b7120ece046c58bf6ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26e49cdd2c8942c0dc4fe5c91c19835eb9d46548888dbe5038147397874aadea\"" Feb 12 20:24:14.039222 env[1114]: time="2024-02-12T20:24:14.039184545Z" level=info msg="StartContainer for \"26e49cdd2c8942c0dc4fe5c91c19835eb9d46548888dbe5038147397874aadea\"" Feb 12 20:24:14.102170 systemd[1]: Started cri-containerd-26e49cdd2c8942c0dc4fe5c91c19835eb9d46548888dbe5038147397874aadea.scope. 
Feb 12 20:24:14.228680 env[1114]: time="2024-02-12T20:24:14.228021850Z" level=info msg="StartContainer for \"26e49cdd2c8942c0dc4fe5c91c19835eb9d46548888dbe5038147397874aadea\" returns successfully" Feb 12 20:24:14.307313 env[1114]: time="2024-02-12T20:24:14.306966357Z" level=info msg="CreateContainer within sandbox \"511334c23297e82ff216553c6dcd863677c335e5ce3d3dec9c875b398f006a38\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"85df8a5cab8f780bf88c6f32240400c91b4183ebbb2ec7509eb6e4c9b281dcfb\"" Feb 12 20:24:14.322008 env[1114]: time="2024-02-12T20:24:14.321000080Z" level=info msg="StartContainer for \"85df8a5cab8f780bf88c6f32240400c91b4183ebbb2ec7509eb6e4c9b281dcfb\"" Feb 12 20:24:14.355820 systemd[1]: Started cri-containerd-85df8a5cab8f780bf88c6f32240400c91b4183ebbb2ec7509eb6e4c9b281dcfb.scope. Feb 12 20:24:14.357993 env[1114]: time="2024-02-12T20:24:14.357953905Z" level=info msg="CreateContainer within sandbox \"dc61085b1b9b83f5739374e5983f45f9a1d20fb9f5548d416aaf1d76fbcad633\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b5a6c514dbad4b9a583a9d393adcb50161094b59147c87699de57d8fe38efb2\"" Feb 12 20:24:14.358893 env[1114]: time="2024-02-12T20:24:14.358747343Z" level=info msg="StartContainer for \"8b5a6c514dbad4b9a583a9d393adcb50161094b59147c87699de57d8fe38efb2\"" Feb 12 20:24:14.378151 systemd[1]: Started cri-containerd-8b5a6c514dbad4b9a583a9d393adcb50161094b59147c87699de57d8fe38efb2.scope. 
Feb 12 20:24:14.406810 env[1114]: time="2024-02-12T20:24:14.406758719Z" level=info msg="StartContainer for \"85df8a5cab8f780bf88c6f32240400c91b4183ebbb2ec7509eb6e4c9b281dcfb\" returns successfully" Feb 12 20:24:14.427883 env[1114]: time="2024-02-12T20:24:14.427795117Z" level=info msg="StartContainer for \"8b5a6c514dbad4b9a583a9d393adcb50161094b59147c87699de57d8fe38efb2\" returns successfully" Feb 12 20:24:14.946639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254719743.mount: Deactivated successfully. Feb 12 20:24:15.056054 kubelet[1622]: E0212 20:24:15.055685 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:15.057778 kubelet[1622]: E0212 20:24:15.057740 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:15.059107 kubelet[1622]: E0212 20:24:15.059068 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:16.063562 kubelet[1622]: E0212 20:24:16.063524 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:16.063562 kubelet[1622]: E0212 20:24:16.063551 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:16.064027 kubelet[1622]: E0212 20:24:16.064003 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:16.235199 kubelet[1622]: E0212 
20:24:16.234515 1622 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 12 20:24:16.279749 kubelet[1622]: I0212 20:24:16.279717 1622 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:16.283866 kubelet[1622]: I0212 20:24:16.283832 1622 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 20:24:16.290752 kubelet[1622]: E0212 20:24:16.290726 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.391406 kubelet[1622]: E0212 20:24:16.391277 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.491500 kubelet[1622]: E0212 20:24:16.491458 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.592412 kubelet[1622]: E0212 20:24:16.592369 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.692942 kubelet[1622]: E0212 20:24:16.692850 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.793372 kubelet[1622]: E0212 20:24:16.793323 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.893989 kubelet[1622]: E0212 20:24:16.893951 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:16.994766 kubelet[1622]: E0212 20:24:16.994669 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.062185 kubelet[1622]: E0212 20:24:17.062166 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:17.062533 kubelet[1622]: E0212 20:24:17.062510 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:17.095662 kubelet[1622]: E0212 20:24:17.095626 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.196240 kubelet[1622]: E0212 20:24:17.196200 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.296738 kubelet[1622]: E0212 20:24:17.296698 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.397294 kubelet[1622]: E0212 20:24:17.397269 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.498174 kubelet[1622]: E0212 20:24:17.498046 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.598658 kubelet[1622]: E0212 20:24:17.598529 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.699205 kubelet[1622]: E0212 20:24:17.699164 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.799770 kubelet[1622]: E0212 20:24:17.799677 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:17.900382 kubelet[1622]: E0212 20:24:17.900238 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:18.000998 kubelet[1622]: E0212 20:24:18.000944 1622 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"localhost\" not found" Feb 12 20:24:18.063734 kubelet[1622]: E0212 20:24:18.063719 1622 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:18.101800 kubelet[1622]: E0212 20:24:18.101786 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:18.149322 systemd[1]: Reloading. Feb 12 20:24:18.202305 kubelet[1622]: E0212 20:24:18.202229 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:18.227700 /usr/lib/systemd/system-generators/torcx-generator[1953]: time="2024-02-12T20:24:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:18.227728 /usr/lib/systemd/system-generators/torcx-generator[1953]: time="2024-02-12T20:24:18Z" level=info msg="torcx already run" Feb 12 20:24:18.291448 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:18.291464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:18.302931 kubelet[1622]: E0212 20:24:18.302903 1622 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:24:18.309954 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 12 20:24:18.394382 systemd[1]: Stopping kubelet.service... Feb 12 20:24:18.412470 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:24:18.412648 systemd[1]: Stopped kubelet.service. Feb 12 20:24:18.414017 systemd[1]: Started kubelet.service. Feb 12 20:24:18.469796 kubelet[1994]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:18.469796 kubelet[1994]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:18.469796 kubelet[1994]: I0212 20:24:18.469661 1994 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:24:18.470942 kubelet[1994]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:18.470942 kubelet[1994]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:18.473382 kubelet[1994]: I0212 20:24:18.473358 1994 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:24:18.473382 kubelet[1994]: I0212 20:24:18.473374 1994 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:24:18.473548 kubelet[1994]: I0212 20:24:18.473529 1994 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:24:18.475985 kubelet[1994]: I0212 20:24:18.475960 1994 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 12 20:24:18.476589 kubelet[1994]: I0212 20:24:18.476565 1994 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:24:18.480028 kubelet[1994]: I0212 20:24:18.480005 1994 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:24:18.480230 kubelet[1994]: I0212 20:24:18.480210 1994 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:24:18.480286 kubelet[1994]: I0212 20:24:18.480275 1994 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:24:18.480360 kubelet[1994]: I0212 20:24:18.480296 1994 topology_manager.go:134] "Creating topology 
manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:24:18.480360 kubelet[1994]: I0212 20:24:18.480306 1994 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:24:18.480360 kubelet[1994]: I0212 20:24:18.480334 1994 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:18.483085 kubelet[1994]: I0212 20:24:18.483067 1994 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:24:18.483085 kubelet[1994]: I0212 20:24:18.483087 1994 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:24:18.483203 kubelet[1994]: I0212 20:24:18.483108 1994 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:24:18.483203 kubelet[1994]: I0212 20:24:18.483122 1994 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:24:18.484958 kubelet[1994]: I0212 20:24:18.484944 1994 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:24:18.485539 kubelet[1994]: I0212 20:24:18.485523 1994 server.go:1186] "Started kubelet" Feb 12 20:24:18.487998 kubelet[1994]: I0212 20:24:18.487976 1994 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:24:18.490353 kubelet[1994]: E0212 20:24:18.490328 1994 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:24:18.490417 kubelet[1994]: E0212 20:24:18.490368 1994 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:24:18.490950 kubelet[1994]: I0212 20:24:18.490936 1994 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:24:18.491945 kubelet[1994]: I0212 20:24:18.491930 1994 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:24:18.494053 kubelet[1994]: I0212 20:24:18.494035 1994 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:24:18.495514 kubelet[1994]: I0212 20:24:18.495499 1994 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:24:18.510089 kubelet[1994]: I0212 20:24:18.510065 1994 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:24:18.535763 kubelet[1994]: I0212 20:24:18.535735 1994 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:24:18.535949 kubelet[1994]: I0212 20:24:18.535934 1994 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:24:18.536042 kubelet[1994]: I0212 20:24:18.536027 1994 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:24:18.536229 kubelet[1994]: E0212 20:24:18.536215 1994 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:24:18.539447 sudo[2043]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 20:24:18.539727 sudo[2043]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 20:24:18.549632 kubelet[1994]: I0212 20:24:18.549610 1994 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:24:18.549632 kubelet[1994]: I0212 20:24:18.549628 1994 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:24:18.549731 kubelet[1994]: I0212 20:24:18.549643 1994 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:18.549793 
kubelet[1994]: I0212 20:24:18.549779 1994 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 20:24:18.549824 kubelet[1994]: I0212 20:24:18.549795 1994 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 20:24:18.549824 kubelet[1994]: I0212 20:24:18.549801 1994 policy_none.go:49] "None policy: Start" Feb 12 20:24:18.550661 kubelet[1994]: I0212 20:24:18.550645 1994 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:24:18.550661 kubelet[1994]: I0212 20:24:18.550663 1994 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:24:18.550780 kubelet[1994]: I0212 20:24:18.550766 1994 state_mem.go:75] "Updated machine memory state" Feb 12 20:24:18.557431 kubelet[1994]: I0212 20:24:18.556507 1994 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:24:18.558001 kubelet[1994]: I0212 20:24:18.557977 1994 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:24:18.597752 kubelet[1994]: I0212 20:24:18.597731 1994 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:24:18.603969 kubelet[1994]: I0212 20:24:18.603946 1994 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 20:24:18.604042 kubelet[1994]: I0212 20:24:18.604006 1994 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 20:24:18.637243 kubelet[1994]: I0212 20:24:18.637206 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:18.637356 kubelet[1994]: I0212 20:24:18.637340 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:18.637428 kubelet[1994]: I0212 20:24:18.637385 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:18.696099 kubelet[1994]: I0212 20:24:18.696069 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:18.696211 kubelet[1994]: I0212 20:24:18.696133 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c4a9ff048baf683c2a68ea284c1ce6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"20c4a9ff048baf683c2a68ea284c1ce6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:18.696242 kubelet[1994]: I0212 20:24:18.696219 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c4a9ff048baf683c2a68ea284c1ce6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"20c4a9ff048baf683c2a68ea284c1ce6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:18.696320 kubelet[1994]: I0212 20:24:18.696306 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:18.696385 kubelet[1994]: I0212 20:24:18.696335 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:18.696385 kubelet[1994]: I0212 20:24:18.696360 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:18.696385 kubelet[1994]: I0212 20:24:18.696385 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:18.696482 kubelet[1994]: I0212 20:24:18.696414 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c4a9ff048baf683c2a68ea284c1ce6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"20c4a9ff048baf683c2a68ea284c1ce6\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:18.696482 kubelet[1994]: I0212 20:24:18.696441 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:24:18.887902 kubelet[1994]: E0212 20:24:18.887870 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:18.942496 kubelet[1994]: E0212 20:24:18.942448 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:18.988493 kubelet[1994]: E0212 20:24:18.988450 1994 dns.go:156] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:19.000134 sudo[2043]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:19.485190 kubelet[1994]: I0212 20:24:19.485135 1994 apiserver.go:52] "Watching apiserver" Feb 12 20:24:19.496614 kubelet[1994]: I0212 20:24:19.496583 1994 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:24:19.501766 kubelet[1994]: I0212 20:24:19.501730 1994 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:24:19.888258 kubelet[1994]: E0212 20:24:19.888211 1994 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 12 20:24:19.888677 kubelet[1994]: E0212 20:24:19.888657 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:20.088507 kubelet[1994]: E0212 20:24:20.088451 1994 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 20:24:20.088771 kubelet[1994]: E0212 20:24:20.088737 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:20.147138 sudo[1220]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:20.148313 sshd[1217]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:20.150894 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:34952.service: Deactivated successfully. Feb 12 20:24:20.151832 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:24:20.152028 systemd[1]: session-5.scope: Consumed 4.075s CPU time. 
Feb 12 20:24:20.152464 systemd-logind[1107]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:24:20.153158 systemd-logind[1107]: Removed session 5. Feb 12 20:24:20.290003 kubelet[1994]: E0212 20:24:20.289964 1994 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 20:24:20.290479 kubelet[1994]: E0212 20:24:20.290464 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:20.491593 kubelet[1994]: I0212 20:24:20.491463 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.491393261 pod.CreationTimestamp="2024-02-12 20:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:20.491285242 +0000 UTC m=+2.073636451" watchObservedRunningTime="2024-02-12 20:24:20.491393261 +0000 UTC m=+2.073744470" Feb 12 20:24:20.547989 kubelet[1994]: E0212 20:24:20.547961 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:20.547989 kubelet[1994]: E0212 20:24:20.547983 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:20.548319 kubelet[1994]: E0212 20:24:20.548284 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:21.295397 kubelet[1994]: I0212 20:24:21.295355 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.295321248 pod.CreationTimestamp="2024-02-12 20:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:20.889352953 +0000 UTC m=+2.471704162" watchObservedRunningTime="2024-02-12 20:24:21.295321248 +0000 UTC m=+2.877672457" Feb 12 20:24:21.549675 kubelet[1994]: E0212 20:24:21.549557 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:21.549675 kubelet[1994]: E0212 20:24:21.549602 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:24.601400 kubelet[1994]: E0212 20:24:24.601357 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:24.649212 kubelet[1994]: I0212 20:24:24.649179 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.649120871 pod.CreationTimestamp="2024-02-12 20:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:21.295592251 +0000 UTC m=+2.877943470" watchObservedRunningTime="2024-02-12 20:24:24.649120871 +0000 UTC m=+6.231472090" Feb 12 20:24:25.555175 kubelet[1994]: E0212 20:24:25.554632 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:26.341238 kubelet[1994]: E0212 20:24:26.341202 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:26.556215 kubelet[1994]: E0212 20:24:26.556185 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:27.559778 kubelet[1994]: E0212 20:24:27.559601 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:31.517537 kubelet[1994]: E0212 20:24:31.517496 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:32.061592 kubelet[1994]: I0212 20:24:32.061563 1994 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:24:32.061937 env[1114]: time="2024-02-12T20:24:32.061900438Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:24:32.062219 kubelet[1994]: I0212 20:24:32.062092 1994 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:24:32.143567 kubelet[1994]: I0212 20:24:32.143532 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:32.146742 kubelet[1994]: I0212 20:24:32.146722 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:32.147747 systemd[1]: Created slice kubepods-besteffort-pod3d322053_989d_410b_a547_d8808400e555.slice. Feb 12 20:24:32.163213 systemd[1]: Created slice kubepods-burstable-podb211d34d_d3bd_4b59_ac5c_e0c2e9372837.slice. 
Feb 12 20:24:32.182542 kubelet[1994]: I0212 20:24:32.182503 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6m2l\" (UniqueName: \"kubernetes.io/projected/3d322053-989d-410b-a547-d8808400e555-kube-api-access-x6m2l\") pod \"kube-proxy-k7brr\" (UID: \"3d322053-989d-410b-a547-d8808400e555\") " pod="kube-system/kube-proxy-k7brr" Feb 12 20:24:32.182542 kubelet[1994]: I0212 20:24:32.182541 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-run\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182748 kubelet[1994]: I0212 20:24:32.182558 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-net\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182748 kubelet[1994]: I0212 20:24:32.182575 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-cgroup\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182748 kubelet[1994]: I0212 20:24:32.182592 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-lib-modules\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182748 kubelet[1994]: I0212 20:24:32.182610 1994 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpf7\" (UniqueName: \"kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-kube-api-access-8kpf7\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182748 kubelet[1994]: I0212 20:24:32.182633 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-kernel\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182869 kubelet[1994]: I0212 20:24:32.182652 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-etc-cni-netd\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182869 kubelet[1994]: I0212 20:24:32.182668 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d322053-989d-410b-a547-d8808400e555-lib-modules\") pod \"kube-proxy-k7brr\" (UID: \"3d322053-989d-410b-a547-d8808400e555\") " pod="kube-system/kube-proxy-k7brr" Feb 12 20:24:32.182869 kubelet[1994]: I0212 20:24:32.182686 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d322053-989d-410b-a547-d8808400e555-kube-proxy\") pod \"kube-proxy-k7brr\" (UID: \"3d322053-989d-410b-a547-d8808400e555\") " pod="kube-system/kube-proxy-k7brr" Feb 12 20:24:32.182869 kubelet[1994]: I0212 20:24:32.182715 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-xtables-lock\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182869 kubelet[1994]: I0212 20:24:32.182740 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hubble-tls\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.182869 kubelet[1994]: I0212 20:24:32.182756 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cni-path\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.183000 kubelet[1994]: I0212 20:24:32.182774 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-clustermesh-secrets\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.183000 kubelet[1994]: I0212 20:24:32.182792 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-config-path\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.183000 kubelet[1994]: I0212 20:24:32.182808 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d322053-989d-410b-a547-d8808400e555-xtables-lock\") pod \"kube-proxy-k7brr\" (UID: 
\"3d322053-989d-410b-a547-d8808400e555\") " pod="kube-system/kube-proxy-k7brr" Feb 12 20:24:32.183000 kubelet[1994]: I0212 20:24:32.182824 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-bpf-maps\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.183000 kubelet[1994]: I0212 20:24:32.182842 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hostproc\") pod \"cilium-2wngv\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " pod="kube-system/cilium-2wngv" Feb 12 20:24:32.236268 update_engine[1108]: I0212 20:24:32.236218 1108 update_attempter.cc:509] Updating boot flags... Feb 12 20:24:32.255638 kubelet[1994]: I0212 20:24:32.255601 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:32.272736 systemd[1]: Created slice kubepods-besteffort-pode2e45038_92a3_42ab_bd1e_d9bc6c3f598d.slice. 
Feb 12 20:24:32.283340 kubelet[1994]: I0212 20:24:32.283319 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bwp\" (UniqueName: \"kubernetes.io/projected/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-kube-api-access-h2bwp\") pod \"cilium-operator-f59cbd8c6-x68jj\" (UID: \"e2e45038-92a3-42ab-bd1e-d9bc6c3f598d\") " pod="kube-system/cilium-operator-f59cbd8c6-x68jj" Feb 12 20:24:32.286800 kubelet[1994]: I0212 20:24:32.286346 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-x68jj\" (UID: \"e2e45038-92a3-42ab-bd1e-d9bc6c3f598d\") " pod="kube-system/cilium-operator-f59cbd8c6-x68jj" Feb 12 20:24:32.761467 kubelet[1994]: E0212 20:24:32.761438 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:32.762169 env[1114]: time="2024-02-12T20:24:32.762115888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7brr,Uid:3d322053-989d-410b-a547-d8808400e555,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:32.766404 kubelet[1994]: E0212 20:24:32.766366 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:32.767182 env[1114]: time="2024-02-12T20:24:32.767105760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wngv,Uid:b211d34d-d3bd-4b59-ac5c-e0c2e9372837,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:32.778022 env[1114]: time="2024-02-12T20:24:32.777954518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:32.778022 env[1114]: time="2024-02-12T20:24:32.777993281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:32.778022 env[1114]: time="2024-02-12T20:24:32.778003020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:32.778272 env[1114]: time="2024-02-12T20:24:32.778214322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b232bc028e69af6285b8940d39a01b1e45ffed23e21ffa0f40782b2ddc8bae6 pid=2121 runtime=io.containerd.runc.v2 Feb 12 20:24:32.789586 systemd[1]: Started cri-containerd-2b232bc028e69af6285b8940d39a01b1e45ffed23e21ffa0f40782b2ddc8bae6.scope. Feb 12 20:24:32.794118 env[1114]: time="2024-02-12T20:24:32.793252178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:32.794118 env[1114]: time="2024-02-12T20:24:32.793354373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:32.794118 env[1114]: time="2024-02-12T20:24:32.793398376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:32.794118 env[1114]: time="2024-02-12T20:24:32.793629035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa pid=2148 runtime=io.containerd.runc.v2 Feb 12 20:24:32.806854 systemd[1]: Started cri-containerd-988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa.scope. 
Feb 12 20:24:32.816873 env[1114]: time="2024-02-12T20:24:32.816834309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7brr,Uid:3d322053-989d-410b-a547-d8808400e555,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b232bc028e69af6285b8940d39a01b1e45ffed23e21ffa0f40782b2ddc8bae6\"" Feb 12 20:24:32.818347 kubelet[1994]: E0212 20:24:32.817936 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:32.819569 env[1114]: time="2024-02-12T20:24:32.819548071Z" level=info msg="CreateContainer within sandbox \"2b232bc028e69af6285b8940d39a01b1e45ffed23e21ffa0f40782b2ddc8bae6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:24:32.830547 env[1114]: time="2024-02-12T20:24:32.830510575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wngv,Uid:b211d34d-d3bd-4b59-ac5c-e0c2e9372837,Namespace:kube-system,Attempt:0,} returns sandbox id \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\"" Feb 12 20:24:32.831674 kubelet[1994]: E0212 20:24:32.831290 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:32.832353 env[1114]: time="2024-02-12T20:24:32.832334956Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:24:32.842524 env[1114]: time="2024-02-12T20:24:32.842497008Z" level=info msg="CreateContainer within sandbox \"2b232bc028e69af6285b8940d39a01b1e45ffed23e21ffa0f40782b2ddc8bae6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9b2de50b09ae7aca22c52ab296945a0a28a0f3d2e5cc10ac22c7e9a8358a074\"" Feb 12 20:24:32.843158 env[1114]: time="2024-02-12T20:24:32.843127387Z" level=info msg="StartContainer for 
\"c9b2de50b09ae7aca22c52ab296945a0a28a0f3d2e5cc10ac22c7e9a8358a074\"" Feb 12 20:24:32.857710 systemd[1]: Started cri-containerd-c9b2de50b09ae7aca22c52ab296945a0a28a0f3d2e5cc10ac22c7e9a8358a074.scope. Feb 12 20:24:32.880837 env[1114]: time="2024-02-12T20:24:32.880794301Z" level=info msg="StartContainer for \"c9b2de50b09ae7aca22c52ab296945a0a28a0f3d2e5cc10ac22c7e9a8358a074\" returns successfully" Feb 12 20:24:33.188968 kubelet[1994]: E0212 20:24:33.188934 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:33.189355 env[1114]: time="2024-02-12T20:24:33.189309511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-x68jj,Uid:e2e45038-92a3-42ab-bd1e-d9bc6c3f598d,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:33.202881 env[1114]: time="2024-02-12T20:24:33.202805051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:33.202881 env[1114]: time="2024-02-12T20:24:33.202867409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:33.202881 env[1114]: time="2024-02-12T20:24:33.202881416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:33.203135 env[1114]: time="2024-02-12T20:24:33.203082799Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c pid=2346 runtime=io.containerd.runc.v2 Feb 12 20:24:33.212797 systemd[1]: Started cri-containerd-eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c.scope. 
Feb 12 20:24:33.243292 env[1114]: time="2024-02-12T20:24:33.243242819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-x68jj,Uid:e2e45038-92a3-42ab-bd1e-d9bc6c3f598d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c\"" Feb 12 20:24:33.243950 kubelet[1994]: E0212 20:24:33.243921 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:33.565917 kubelet[1994]: E0212 20:24:33.565892 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:34.567679 kubelet[1994]: E0212 20:24:34.567611 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:40.885751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482662601.mount: Deactivated successfully. 
Feb 12 20:24:44.434257 env[1114]: time="2024-02-12T20:24:44.434205534Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.435849 env[1114]: time="2024-02-12T20:24:44.435795896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.437232 env[1114]: time="2024-02-12T20:24:44.437201851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.437728 env[1114]: time="2024-02-12T20:24:44.437698689Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:24:44.438198 env[1114]: time="2024-02-12T20:24:44.438118631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:24:44.439478 env[1114]: time="2024-02-12T20:24:44.439436369Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:24:44.449395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251634451.mount: Deactivated successfully. 
Feb 12 20:24:44.450937 env[1114]: time="2024-02-12T20:24:44.450891817Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\"" Feb 12 20:24:44.451365 env[1114]: time="2024-02-12T20:24:44.451342848Z" level=info msg="StartContainer for \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\"" Feb 12 20:24:44.470715 systemd[1]: Started cri-containerd-f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c.scope. Feb 12 20:24:44.494047 env[1114]: time="2024-02-12T20:24:44.494004261Z" level=info msg="StartContainer for \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\" returns successfully" Feb 12 20:24:44.501408 systemd[1]: cri-containerd-f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c.scope: Deactivated successfully. Feb 12 20:24:44.581952 kubelet[1994]: E0212 20:24:44.581924 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:44.605178 kubelet[1994]: I0212 20:24:44.605135 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k7brr" podStartSLOduration=12.60483609 pod.CreationTimestamp="2024-02-12 20:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:33.572070707 +0000 UTC m=+15.154421916" watchObservedRunningTime="2024-02-12 20:24:44.60483609 +0000 UTC m=+26.187187299" Feb 12 20:24:44.764717 env[1114]: time="2024-02-12T20:24:44.764563030Z" level=info msg="shim disconnected" id=f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c Feb 12 20:24:44.764717 env[1114]: time="2024-02-12T20:24:44.764642981Z" 
level=warning msg="cleaning up after shim disconnected" id=f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c namespace=k8s.io Feb 12 20:24:44.764717 env[1114]: time="2024-02-12T20:24:44.764659702Z" level=info msg="cleaning up dead shim" Feb 12 20:24:44.775025 env[1114]: time="2024-02-12T20:24:44.774981630Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2433 runtime=io.containerd.runc.v2\n" Feb 12 20:24:45.447655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c-rootfs.mount: Deactivated successfully. Feb 12 20:24:45.584017 kubelet[1994]: E0212 20:24:45.583988 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:45.588298 env[1114]: time="2024-02-12T20:24:45.588259424Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:24:45.604029 env[1114]: time="2024-02-12T20:24:45.603944756Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\"" Feb 12 20:24:45.604905 env[1114]: time="2024-02-12T20:24:45.604847780Z" level=info msg="StartContainer for \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\"" Feb 12 20:24:45.625856 systemd[1]: Started cri-containerd-d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae.scope. 
Feb 12 20:24:45.661254 env[1114]: time="2024-02-12T20:24:45.661197764Z" level=info msg="StartContainer for \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\" returns successfully" Feb 12 20:24:45.670761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:24:45.671023 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:24:45.671218 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:24:45.673235 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:24:45.676048 systemd[1]: cri-containerd-d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae.scope: Deactivated successfully. Feb 12 20:24:45.694045 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:24:45.709850 env[1114]: time="2024-02-12T20:24:45.709387859Z" level=info msg="shim disconnected" id=d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae Feb 12 20:24:45.709850 env[1114]: time="2024-02-12T20:24:45.709455697Z" level=warning msg="cleaning up after shim disconnected" id=d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae namespace=k8s.io Feb 12 20:24:45.709850 env[1114]: time="2024-02-12T20:24:45.709470024Z" level=info msg="cleaning up dead shim" Feb 12 20:24:45.722252 env[1114]: time="2024-02-12T20:24:45.722192865Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2496 runtime=io.containerd.runc.v2\n" Feb 12 20:24:46.448303 systemd[1]: run-containerd-runc-k8s.io-d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae-runc.i8D1JI.mount: Deactivated successfully. Feb 12 20:24:46.448454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae-rootfs.mount: Deactivated successfully. 
Feb 12 20:24:46.465847 env[1114]: time="2024-02-12T20:24:46.465794943Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:46.467735 env[1114]: time="2024-02-12T20:24:46.467689338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:46.469319 env[1114]: time="2024-02-12T20:24:46.469292752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:46.469823 env[1114]: time="2024-02-12T20:24:46.469788597Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:24:46.472671 env[1114]: time="2024-02-12T20:24:46.472642912Z" level=info msg="CreateContainer within sandbox \"eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:24:46.484027 env[1114]: time="2024-02-12T20:24:46.483977014Z" level=info msg="CreateContainer within sandbox \"eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\"" Feb 12 20:24:46.484473 env[1114]: time="2024-02-12T20:24:46.484445188Z" level=info msg="StartContainer for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\"" Feb 12 
20:24:46.502028 systemd[1]: Started cri-containerd-ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf.scope. Feb 12 20:24:46.524458 env[1114]: time="2024-02-12T20:24:46.524409219Z" level=info msg="StartContainer for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" returns successfully" Feb 12 20:24:46.586729 kubelet[1994]: E0212 20:24:46.586689 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:46.588584 kubelet[1994]: E0212 20:24:46.588553 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:46.589976 env[1114]: time="2024-02-12T20:24:46.589942377Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:24:46.594335 kubelet[1994]: I0212 20:24:46.594296 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-x68jj" podStartSLOduration=-9.223372022260523e+09 pod.CreationTimestamp="2024-02-12 20:24:32 +0000 UTC" firstStartedPulling="2024-02-12 20:24:33.244818414 +0000 UTC m=+14.827169623" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:46.593779936 +0000 UTC m=+28.176131155" watchObservedRunningTime="2024-02-12 20:24:46.594252838 +0000 UTC m=+28.176604067" Feb 12 20:24:46.605023 env[1114]: time="2024-02-12T20:24:46.604954137Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\"" Feb 12 20:24:46.605588 env[1114]: 
time="2024-02-12T20:24:46.605547896Z" level=info msg="StartContainer for \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\"" Feb 12 20:24:46.628324 systemd[1]: Started cri-containerd-2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd.scope. Feb 12 20:24:46.661766 env[1114]: time="2024-02-12T20:24:46.661713232Z" level=info msg="StartContainer for \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\" returns successfully" Feb 12 20:24:46.667709 systemd[1]: cri-containerd-2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd.scope: Deactivated successfully. Feb 12 20:24:46.922003 env[1114]: time="2024-02-12T20:24:46.921955476Z" level=info msg="shim disconnected" id=2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd Feb 12 20:24:46.922003 env[1114]: time="2024-02-12T20:24:46.921995491Z" level=warning msg="cleaning up after shim disconnected" id=2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd namespace=k8s.io Feb 12 20:24:46.922003 env[1114]: time="2024-02-12T20:24:46.922004068Z" level=info msg="cleaning up dead shim" Feb 12 20:24:46.930859 env[1114]: time="2024-02-12T20:24:46.930796916Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2588 runtime=io.containerd.runc.v2\n" Feb 12 20:24:47.447982 systemd[1]: run-containerd-runc-k8s.io-ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf-runc.g77G18.mount: Deactivated successfully. 
Feb 12 20:24:47.592514 kubelet[1994]: E0212 20:24:47.592381 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:47.592514 kubelet[1994]: E0212 20:24:47.592452 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:47.594185 env[1114]: time="2024-02-12T20:24:47.594124016Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:24:47.609452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190263675.mount: Deactivated successfully. Feb 12 20:24:47.611282 env[1114]: time="2024-02-12T20:24:47.611248360Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\"" Feb 12 20:24:47.611734 env[1114]: time="2024-02-12T20:24:47.611713216Z" level=info msg="StartContainer for \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\"" Feb 12 20:24:47.625086 systemd[1]: Started cri-containerd-4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22.scope. Feb 12 20:24:47.646914 systemd[1]: cri-containerd-4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22.scope: Deactivated successfully. 
Feb 12 20:24:47.648724 env[1114]: time="2024-02-12T20:24:47.648682023Z" level=info msg="StartContainer for \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\" returns successfully" Feb 12 20:24:47.665201 env[1114]: time="2024-02-12T20:24:47.665135090Z" level=info msg="shim disconnected" id=4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22 Feb 12 20:24:47.665201 env[1114]: time="2024-02-12T20:24:47.665200624Z" level=warning msg="cleaning up after shim disconnected" id=4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22 namespace=k8s.io Feb 12 20:24:47.665364 env[1114]: time="2024-02-12T20:24:47.665209010Z" level=info msg="cleaning up dead shim" Feb 12 20:24:47.670869 env[1114]: time="2024-02-12T20:24:47.670834308Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2640 runtime=io.containerd.runc.v2\n" Feb 12 20:24:48.447884 systemd[1]: run-containerd-runc-k8s.io-4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22-runc.5VrN9h.mount: Deactivated successfully. Feb 12 20:24:48.447968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22-rootfs.mount: Deactivated successfully. Feb 12 20:24:48.595386 kubelet[1994]: E0212 20:24:48.595280 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:48.597651 env[1114]: time="2024-02-12T20:24:48.597605844Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:24:48.616313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323015355.mount: Deactivated successfully. 
Feb 12 20:24:48.618607 env[1114]: time="2024-02-12T20:24:48.618567831Z" level=info msg="CreateContainer within sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\"" Feb 12 20:24:48.618923 env[1114]: time="2024-02-12T20:24:48.618900578Z" level=info msg="StartContainer for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\"" Feb 12 20:24:48.633418 systemd[1]: Started cri-containerd-e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518.scope. Feb 12 20:24:48.659050 env[1114]: time="2024-02-12T20:24:48.658997439Z" level=info msg="StartContainer for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" returns successfully" Feb 12 20:24:48.713232 kubelet[1994]: I0212 20:24:48.713092 1994 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:24:48.760226 kubelet[1994]: I0212 20:24:48.760177 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:48.760398 kubelet[1994]: I0212 20:24:48.760335 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:48.765171 systemd[1]: Created slice kubepods-burstable-podf6e4a0bf_328b_478c_96b1_b3a91aa7485a.slice. Feb 12 20:24:48.770362 systemd[1]: Created slice kubepods-burstable-pod0a41d39e_6dd5_443d_8377_e372da7a8887.slice. 
Feb 12 20:24:48.799599 kubelet[1994]: I0212 20:24:48.799505 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a41d39e-6dd5-443d-8377-e372da7a8887-config-volume\") pod \"coredns-787d4945fb-xslkq\" (UID: \"0a41d39e-6dd5-443d-8377-e372da7a8887\") " pod="kube-system/coredns-787d4945fb-xslkq" Feb 12 20:24:48.799599 kubelet[1994]: I0212 20:24:48.799605 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9wxd\" (UniqueName: \"kubernetes.io/projected/0a41d39e-6dd5-443d-8377-e372da7a8887-kube-api-access-l9wxd\") pod \"coredns-787d4945fb-xslkq\" (UID: \"0a41d39e-6dd5-443d-8377-e372da7a8887\") " pod="kube-system/coredns-787d4945fb-xslkq" Feb 12 20:24:48.799791 kubelet[1994]: I0212 20:24:48.799627 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxqp\" (UniqueName: \"kubernetes.io/projected/f6e4a0bf-328b-478c-96b1-b3a91aa7485a-kube-api-access-ppxqp\") pod \"coredns-787d4945fb-f6knf\" (UID: \"f6e4a0bf-328b-478c-96b1-b3a91aa7485a\") " pod="kube-system/coredns-787d4945fb-f6knf" Feb 12 20:24:48.799791 kubelet[1994]: I0212 20:24:48.799646 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e4a0bf-328b-478c-96b1-b3a91aa7485a-config-volume\") pod \"coredns-787d4945fb-f6knf\" (UID: \"f6e4a0bf-328b-478c-96b1-b3a91aa7485a\") " pod="kube-system/coredns-787d4945fb-f6knf" Feb 12 20:24:49.068317 kubelet[1994]: E0212 20:24:49.068287 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:49.068805 env[1114]: time="2024-02-12T20:24:49.068760478Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-f6knf,Uid:f6e4a0bf-328b-478c-96b1-b3a91aa7485a,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:49.073641 kubelet[1994]: E0212 20:24:49.073606 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:49.074010 env[1114]: time="2024-02-12T20:24:49.073965347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-xslkq,Uid:0a41d39e-6dd5-443d-8377-e372da7a8887,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:49.603932 kubelet[1994]: E0212 20:24:49.600584 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:50.583856 systemd-networkd[1028]: cilium_host: Link UP Feb 12 20:24:50.583973 systemd-networkd[1028]: cilium_net: Link UP Feb 12 20:24:50.583976 systemd-networkd[1028]: cilium_net: Gained carrier Feb 12 20:24:50.584653 systemd-networkd[1028]: cilium_host: Gained carrier Feb 12 20:24:50.585747 systemd-networkd[1028]: cilium_host: Gained IPv6LL Feb 12 20:24:50.586242 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:24:50.601392 kubelet[1994]: E0212 20:24:50.601370 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:50.659386 systemd-networkd[1028]: cilium_vxlan: Link UP Feb 12 20:24:50.659394 systemd-networkd[1028]: cilium_vxlan: Gained carrier Feb 12 20:24:50.844178 kernel: NET: Registered PF_ALG protocol family Feb 12 20:24:51.233278 systemd-networkd[1028]: cilium_net: Gained IPv6LL Feb 12 20:24:51.374935 systemd-networkd[1028]: lxc_health: Link UP Feb 12 20:24:51.387508 systemd-networkd[1028]: lxc_health: Gained carrier Feb 12 20:24:51.388180 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:24:51.603438 kubelet[1994]: E0212 20:24:51.603169 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:51.604677 systemd-networkd[1028]: lxc567030cb0918: Link UP Feb 12 20:24:51.614172 kernel: eth0: renamed from tmp6b47d Feb 12 20:24:51.622913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:24:51.623032 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc567030cb0918: link becomes ready Feb 12 20:24:51.623252 systemd-networkd[1028]: lxc567030cb0918: Gained carrier Feb 12 20:24:51.625072 systemd-networkd[1028]: lxce8541243eb8f: Link UP Feb 12 20:24:51.635178 kernel: eth0: renamed from tmpd0063 Feb 12 20:24:51.648939 systemd-networkd[1028]: lxce8541243eb8f: Gained carrier Feb 12 20:24:51.649277 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce8541243eb8f: link becomes ready Feb 12 20:24:51.872574 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL Feb 12 20:24:52.768484 kubelet[1994]: E0212 20:24:52.768422 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:52.780528 kubelet[1994]: I0212 20:24:52.780494 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2wngv" podStartSLOduration=-9.223372016074327e+09 pod.CreationTimestamp="2024-02-12 20:24:32 +0000 UTC" firstStartedPulling="2024-02-12 20:24:32.831997274 +0000 UTC m=+14.414348473" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:49.611921796 +0000 UTC m=+31.194273015" watchObservedRunningTime="2024-02-12 20:24:52.780449266 +0000 UTC m=+34.362800475" Feb 12 20:24:53.025275 systemd-networkd[1028]: lxc_health: Gained IPv6LL Feb 12 20:24:53.216349 systemd-networkd[1028]: lxce8541243eb8f: 
Gained IPv6LL Feb 12 20:24:53.408285 systemd-networkd[1028]: lxc567030cb0918: Gained IPv6LL Feb 12 20:24:53.606244 kubelet[1994]: E0212 20:24:53.606219 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:54.608124 kubelet[1994]: E0212 20:24:54.608098 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:55.322543 env[1114]: time="2024-02-12T20:24:55.322448041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:55.322870 env[1114]: time="2024-02-12T20:24:55.322524465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:55.322870 env[1114]: time="2024-02-12T20:24:55.322558098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:55.322999 env[1114]: time="2024-02-12T20:24:55.322921872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b47df507fb10346785a58b479c9a03b6c738a46b1a527884694166554d33cef pid=3206 runtime=io.containerd.runc.v2 Feb 12 20:24:55.329396 env[1114]: time="2024-02-12T20:24:55.329339579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:55.329538 env[1114]: time="2024-02-12T20:24:55.329380435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:55.329538 env[1114]: time="2024-02-12T20:24:55.329390315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:55.329653 env[1114]: time="2024-02-12T20:24:55.329601833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0063dc0f85b86146f41451723a6b95cb51c436cf20aeafae72f5401766731d5 pid=3232 runtime=io.containerd.runc.v2 Feb 12 20:24:55.337752 systemd[1]: Started cri-containerd-6b47df507fb10346785a58b479c9a03b6c738a46b1a527884694166554d33cef.scope. Feb 12 20:24:55.345328 systemd[1]: Started cri-containerd-d0063dc0f85b86146f41451723a6b95cb51c436cf20aeafae72f5401766731d5.scope. Feb 12 20:24:55.349311 systemd-resolved[1070]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:24:55.357048 systemd-resolved[1070]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:24:55.375453 env[1114]: time="2024-02-12T20:24:55.375382459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f6knf,Uid:f6e4a0bf-328b-478c-96b1-b3a91aa7485a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b47df507fb10346785a58b479c9a03b6c738a46b1a527884694166554d33cef\"" Feb 12 20:24:55.378571 kubelet[1994]: E0212 20:24:55.376458 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:55.382584 env[1114]: time="2024-02-12T20:24:55.382555358Z" level=info msg="CreateContainer within sandbox \"6b47df507fb10346785a58b479c9a03b6c738a46b1a527884694166554d33cef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:24:55.382994 env[1114]: time="2024-02-12T20:24:55.382774530Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-xslkq,Uid:0a41d39e-6dd5-443d-8377-e372da7a8887,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0063dc0f85b86146f41451723a6b95cb51c436cf20aeafae72f5401766731d5\"" Feb 12 20:24:55.385166 kubelet[1994]: E0212 20:24:55.383762 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:55.388280 env[1114]: time="2024-02-12T20:24:55.387518287Z" level=info msg="CreateContainer within sandbox \"d0063dc0f85b86146f41451723a6b95cb51c436cf20aeafae72f5401766731d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:24:55.408458 env[1114]: time="2024-02-12T20:24:55.408396885Z" level=info msg="CreateContainer within sandbox \"6b47df507fb10346785a58b479c9a03b6c738a46b1a527884694166554d33cef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0861706c72ba4e347c559d0728a336bb21e5808b93ea6a22471ae219c21d66e\"" Feb 12 20:24:55.409049 env[1114]: time="2024-02-12T20:24:55.409018334Z" level=info msg="StartContainer for \"e0861706c72ba4e347c559d0728a336bb21e5808b93ea6a22471ae219c21d66e\"" Feb 12 20:24:55.411648 env[1114]: time="2024-02-12T20:24:55.411602276Z" level=info msg="CreateContainer within sandbox \"d0063dc0f85b86146f41451723a6b95cb51c436cf20aeafae72f5401766731d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"680f50b9512a3cc28f52ee3387acd1412581e4ae4df356b81e10e6397ebc7c9e\"" Feb 12 20:24:55.412923 env[1114]: time="2024-02-12T20:24:55.412132944Z" level=info msg="StartContainer for \"680f50b9512a3cc28f52ee3387acd1412581e4ae4df356b81e10e6397ebc7c9e\"" Feb 12 20:24:55.427593 systemd[1]: Started cri-containerd-e0861706c72ba4e347c559d0728a336bb21e5808b93ea6a22471ae219c21d66e.scope. Feb 12 20:24:55.436837 systemd[1]: Started cri-containerd-680f50b9512a3cc28f52ee3387acd1412581e4ae4df356b81e10e6397ebc7c9e.scope. 
Feb 12 20:24:55.458914 env[1114]: time="2024-02-12T20:24:55.458869361Z" level=info msg="StartContainer for \"e0861706c72ba4e347c559d0728a336bb21e5808b93ea6a22471ae219c21d66e\" returns successfully" Feb 12 20:24:55.464745 env[1114]: time="2024-02-12T20:24:55.464690495Z" level=info msg="StartContainer for \"680f50b9512a3cc28f52ee3387acd1412581e4ae4df356b81e10e6397ebc7c9e\" returns successfully" Feb 12 20:24:55.611825 kubelet[1994]: E0212 20:24:55.611580 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:55.613436 kubelet[1994]: E0212 20:24:55.613338 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:55.626376 kubelet[1994]: I0212 20:24:55.626341 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xslkq" podStartSLOduration=23.62629369 pod.CreationTimestamp="2024-02-12 20:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:55.625822834 +0000 UTC m=+37.208174053" watchObservedRunningTime="2024-02-12 20:24:55.62629369 +0000 UTC m=+37.208644899" Feb 12 20:24:56.412625 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:58636.service. Feb 12 20:24:56.448935 sshd[3416]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:24:56.450238 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.453912 systemd-logind[1107]: New session 6 of user core. Feb 12 20:24:56.454648 systemd[1]: Started session-6.scope. 
Feb 12 20:24:56.580009 sshd[3416]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:56.582649 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:58636.service: Deactivated successfully. Feb 12 20:24:56.583392 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:24:56.584070 systemd-logind[1107]: Session 6 logged out. Waiting for processes to exit. Feb 12 20:24:56.584857 systemd-logind[1107]: Removed session 6. Feb 12 20:24:56.615158 kubelet[1994]: E0212 20:24:56.615121 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:56.615644 kubelet[1994]: E0212 20:24:56.615580 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:56.626206 kubelet[1994]: I0212 20:24:56.626168 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-f6knf" podStartSLOduration=24.6261147 pod.CreationTimestamp="2024-02-12 20:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:55.63637996 +0000 UTC m=+37.218731169" watchObservedRunningTime="2024-02-12 20:24:56.6261147 +0000 UTC m=+38.208465909" Feb 12 20:24:57.616791 kubelet[1994]: E0212 20:24:57.616768 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:24:57.617191 kubelet[1994]: E0212 20:24:57.616850 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:01.583973 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:58650.service. 
Feb 12 20:25:01.616002 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 58650 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:01.617324 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:01.620835 systemd-logind[1107]: New session 7 of user core. Feb 12 20:25:01.621891 systemd[1]: Started session-7.scope. Feb 12 20:25:01.723278 sshd[3486]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:01.725639 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:58650.service: Deactivated successfully. Feb 12 20:25:01.726386 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:25:01.727195 systemd-logind[1107]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:25:01.727915 systemd-logind[1107]: Removed session 7. Feb 12 20:25:06.727133 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:60946.service. Feb 12 20:25:06.761323 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 60946 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:06.762449 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:06.765409 systemd-logind[1107]: New session 8 of user core. Feb 12 20:25:06.766105 systemd[1]: Started session-8.scope. Feb 12 20:25:06.921707 sshd[3503]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:06.923937 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:60946.service: Deactivated successfully. Feb 12 20:25:06.924581 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:25:06.925037 systemd-logind[1107]: Session 8 logged out. Waiting for processes to exit. Feb 12 20:25:06.925613 systemd-logind[1107]: Removed session 8. Feb 12 20:25:11.925976 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:60952.service. 
Feb 12 20:25:11.966380 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 60952 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:11.967472 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:11.970677 systemd-logind[1107]: New session 9 of user core. Feb 12 20:25:11.971481 systemd[1]: Started session-9.scope. Feb 12 20:25:12.078702 sshd[3518]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:12.080895 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:60952.service: Deactivated successfully. Feb 12 20:25:12.081618 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 20:25:12.082370 systemd-logind[1107]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:25:12.083013 systemd-logind[1107]: Removed session 9. Feb 12 20:25:17.083454 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:47212.service. Feb 12 20:25:17.115403 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 47212 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:17.116386 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:17.119222 systemd-logind[1107]: New session 10 of user core. Feb 12 20:25:17.120002 systemd[1]: Started session-10.scope. Feb 12 20:25:17.228821 sshd[3532]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:17.232428 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:47212.service: Deactivated successfully. Feb 12 20:25:17.233102 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 20:25:17.233687 systemd-logind[1107]: Session 10 logged out. Waiting for processes to exit. Feb 12 20:25:17.235032 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:47224.service. Feb 12 20:25:17.235824 systemd-logind[1107]: Removed session 10. 
Feb 12 20:25:17.267163 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 47224 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:17.268368 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:17.271854 systemd-logind[1107]: New session 11 of user core. Feb 12 20:25:17.272788 systemd[1]: Started session-11.scope. Feb 12 20:25:17.966625 sshd[3547]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:17.971345 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:47232.service. Feb 12 20:25:17.974175 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:47224.service: Deactivated successfully. Feb 12 20:25:17.974776 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:25:17.977540 systemd-logind[1107]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:25:17.978988 systemd-logind[1107]: Removed session 11. Feb 12 20:25:18.007566 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 47232 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:18.008566 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:18.011653 systemd-logind[1107]: New session 12 of user core. Feb 12 20:25:18.012615 systemd[1]: Started session-12.scope. Feb 12 20:25:18.113430 sshd[3558]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:18.115418 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:47232.service: Deactivated successfully. Feb 12 20:25:18.116232 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 20:25:18.116769 systemd-logind[1107]: Session 12 logged out. Waiting for processes to exit. Feb 12 20:25:18.117399 systemd-logind[1107]: Removed session 12. Feb 12 20:25:23.117461 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:47242.service. 
Feb 12 20:25:23.149157 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:23.150034 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:23.152973 systemd-logind[1107]: New session 13 of user core. Feb 12 20:25:23.153833 systemd[1]: Started session-13.scope. Feb 12 20:25:23.251323 sshd[3574]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:23.253258 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:47242.service: Deactivated successfully. Feb 12 20:25:23.254012 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:25:23.254629 systemd-logind[1107]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:25:23.255316 systemd-logind[1107]: Removed session 13. Feb 12 20:25:28.256703 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:50668.service. Feb 12 20:25:28.289660 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 50668 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:28.290899 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.294705 systemd-logind[1107]: New session 14 of user core. Feb 12 20:25:28.295742 systemd[1]: Started session-14.scope. Feb 12 20:25:28.402603 sshd[3587]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:28.405761 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:50668.service: Deactivated successfully. Feb 12 20:25:28.406420 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:25:28.406950 systemd-logind[1107]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:25:28.408099 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:50676.service. Feb 12 20:25:28.408816 systemd-logind[1107]: Removed session 14. 
Feb 12 20:25:28.440637 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 50676 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:28.441747 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.445192 systemd-logind[1107]: New session 15 of user core. Feb 12 20:25:28.446242 systemd[1]: Started session-15.scope. Feb 12 20:25:28.611434 sshd[3600]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:28.614739 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:50680.service. Feb 12 20:25:28.616110 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:50676.service: Deactivated successfully. Feb 12 20:25:28.616691 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 20:25:28.617269 systemd-logind[1107]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:25:28.618218 systemd-logind[1107]: Removed session 15. Feb 12 20:25:28.652625 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 50680 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:28.654088 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.658440 systemd-logind[1107]: New session 16 of user core. Feb 12 20:25:28.658711 systemd[1]: Started session-16.scope. Feb 12 20:25:29.589809 sshd[3611]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:29.593007 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:50692.service. Feb 12 20:25:29.593658 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:50680.service: Deactivated successfully. Feb 12 20:25:29.594461 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:25:29.595128 systemd-logind[1107]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:25:29.596293 systemd-logind[1107]: Removed session 16. 
Feb 12 20:25:29.628992 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 50692 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:29.630598 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:29.635382 systemd[1]: Started session-17.scope. Feb 12 20:25:29.635680 systemd-logind[1107]: New session 17 of user core. Feb 12 20:25:29.846209 sshd[3646]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:29.849500 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:50704.service. Feb 12 20:25:29.849953 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:50692.service: Deactivated successfully. Feb 12 20:25:29.853270 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:25:29.854202 systemd-logind[1107]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:25:29.855036 systemd-logind[1107]: Removed session 17. Feb 12 20:25:29.885387 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 50704 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:29.886386 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:29.889564 systemd-logind[1107]: New session 18 of user core. Feb 12 20:25:29.890522 systemd[1]: Started session-18.scope. Feb 12 20:25:29.992284 sshd[3691]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:29.994629 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:50704.service: Deactivated successfully. Feb 12 20:25:29.995290 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:25:29.995794 systemd-logind[1107]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:25:29.996485 systemd-logind[1107]: Removed session 18. 
Feb 12 20:25:32.537723 kubelet[1994]: E0212 20:25:32.537690 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:34.996462 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:60654.service. Feb 12 20:25:35.029930 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 60654 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:35.031018 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:35.034905 systemd-logind[1107]: New session 19 of user core. Feb 12 20:25:35.036098 systemd[1]: Started session-19.scope. Feb 12 20:25:35.141692 sshd[3707]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:35.144103 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:60654.service: Deactivated successfully. Feb 12 20:25:35.144876 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:25:35.145419 systemd-logind[1107]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:25:35.146031 systemd-logind[1107]: Removed session 19. Feb 12 20:25:40.145840 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:60666.service. Feb 12 20:25:40.179931 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:40.180970 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:40.184425 systemd-logind[1107]: New session 20 of user core. Feb 12 20:25:40.185283 systemd[1]: Started session-20.scope. Feb 12 20:25:40.284313 sshd[3747]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:40.286482 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:60666.service: Deactivated successfully. Feb 12 20:25:40.287205 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:25:40.287675 systemd-logind[1107]: Session 20 logged out. 
Waiting for processes to exit. Feb 12 20:25:40.288319 systemd-logind[1107]: Removed session 20. Feb 12 20:25:40.537282 kubelet[1994]: E0212 20:25:40.537231 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:42.537352 kubelet[1994]: E0212 20:25:42.537312 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:45.288576 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:48742.service. Feb 12 20:25:45.320422 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 48742 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:45.321309 sshd[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:45.324048 systemd-logind[1107]: New session 21 of user core. Feb 12 20:25:45.324771 systemd[1]: Started session-21.scope. Feb 12 20:25:45.416801 sshd[3761]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:45.418556 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:48742.service: Deactivated successfully. Feb 12 20:25:45.419295 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:25:45.419728 systemd-logind[1107]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:25:45.420365 systemd-logind[1107]: Removed session 21. Feb 12 20:25:50.421466 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:48750.service. Feb 12 20:25:50.453439 sshd[3774]: Accepted publickey for core from 10.0.0.1 port 48750 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:50.501381 sshd[3774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:50.504418 systemd-logind[1107]: New session 22 of user core. Feb 12 20:25:50.505293 systemd[1]: Started session-22.scope. 
Feb 12 20:25:50.735933 sshd[3774]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:50.739799 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:48766.service. Feb 12 20:25:50.740536 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:48750.service: Deactivated successfully. Feb 12 20:25:50.741239 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:25:50.747570 systemd-logind[1107]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:25:50.748442 systemd-logind[1107]: Removed session 22. Feb 12 20:25:50.772932 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:50.774121 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:50.777516 systemd-logind[1107]: New session 23 of user core. Feb 12 20:25:50.778490 systemd[1]: Started session-23.scope. Feb 12 20:25:52.296243 env[1114]: time="2024-02-12T20:25:52.295247860Z" level=info msg="StopContainer for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" with timeout 30 (s)" Feb 12 20:25:52.296243 env[1114]: time="2024-02-12T20:25:52.295654987Z" level=info msg="Stop container \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" with signal terminated" Feb 12 20:25:52.304999 systemd[1]: cri-containerd-ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf.scope: Deactivated successfully. 
Feb 12 20:25:52.308867 env[1114]: time="2024-02-12T20:25:52.308808780Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:25:52.314286 env[1114]: time="2024-02-12T20:25:52.314237924Z" level=info msg="StopContainer for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" with timeout 1 (s)" Feb 12 20:25:52.314483 env[1114]: time="2024-02-12T20:25:52.314458134Z" level=info msg="Stop container \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" with signal terminated" Feb 12 20:25:52.320184 systemd-networkd[1028]: lxc_health: Link DOWN Feb 12 20:25:52.320190 systemd-networkd[1028]: lxc_health: Lost carrier Feb 12 20:25:52.320967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf-rootfs.mount: Deactivated successfully. Feb 12 20:25:52.355433 systemd[1]: cri-containerd-e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518.scope: Deactivated successfully. Feb 12 20:25:52.355653 systemd[1]: cri-containerd-e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518.scope: Consumed 6.606s CPU time. Feb 12 20:25:52.369267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518-rootfs.mount: Deactivated successfully. 
Feb 12 20:25:52.468673 env[1114]: time="2024-02-12T20:25:52.468620191Z" level=info msg="shim disconnected" id=ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf Feb 12 20:25:52.468673 env[1114]: time="2024-02-12T20:25:52.468668283Z" level=warning msg="cleaning up after shim disconnected" id=ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf namespace=k8s.io Feb 12 20:25:52.468673 env[1114]: time="2024-02-12T20:25:52.468676459Z" level=info msg="cleaning up dead shim" Feb 12 20:25:52.468899 env[1114]: time="2024-02-12T20:25:52.468654627Z" level=info msg="shim disconnected" id=e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518 Feb 12 20:25:52.468899 env[1114]: time="2024-02-12T20:25:52.468803311Z" level=warning msg="cleaning up after shim disconnected" id=e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518 namespace=k8s.io Feb 12 20:25:52.468899 env[1114]: time="2024-02-12T20:25:52.468811226Z" level=info msg="cleaning up dead shim" Feb 12 20:25:52.474741 env[1114]: time="2024-02-12T20:25:52.474703472Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3858 runtime=io.containerd.runc.v2\n" Feb 12 20:25:52.474944 env[1114]: time="2024-02-12T20:25:52.474920226Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3859 runtime=io.containerd.runc.v2\n" Feb 12 20:25:52.529696 env[1114]: time="2024-02-12T20:25:52.529659044Z" level=info msg="StopContainer for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" returns successfully" Feb 12 20:25:52.530277 env[1114]: time="2024-02-12T20:25:52.530256072Z" level=info msg="StopPodSandbox for \"eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c\"" Feb 12 20:25:52.530336 env[1114]: time="2024-02-12T20:25:52.530306718Z" level=info msg="Container to stop 
\"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.531793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c-shm.mount: Deactivated successfully. Feb 12 20:25:52.536456 systemd[1]: cri-containerd-eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c.scope: Deactivated successfully. Feb 12 20:25:52.553979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c-rootfs.mount: Deactivated successfully. Feb 12 20:25:52.603399 env[1114]: time="2024-02-12T20:25:52.603354031Z" level=info msg="StopContainer for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" returns successfully" Feb 12 20:25:52.603782 env[1114]: time="2024-02-12T20:25:52.603760836Z" level=info msg="StopPodSandbox for \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\"" Feb 12 20:25:52.603831 env[1114]: time="2024-02-12T20:25:52.603813837Z" level=info msg="Container to stop \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.603831 env[1114]: time="2024-02-12T20:25:52.603825690Z" level=info msg="Container to stop \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.603880 env[1114]: time="2024-02-12T20:25:52.603834728Z" level=info msg="Container to stop \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.603880 env[1114]: time="2024-02-12T20:25:52.603844195Z" level=info msg="Container to stop \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.603880 env[1114]: time="2024-02-12T20:25:52.603853182Z" level=info msg="Container to stop \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:52.608055 systemd[1]: cri-containerd-988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa.scope: Deactivated successfully. Feb 12 20:25:52.673299 env[1114]: time="2024-02-12T20:25:52.673256543Z" level=info msg="shim disconnected" id=eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c Feb 12 20:25:52.673299 env[1114]: time="2024-02-12T20:25:52.673297080Z" level=warning msg="cleaning up after shim disconnected" id=eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c namespace=k8s.io Feb 12 20:25:52.673299 env[1114]: time="2024-02-12T20:25:52.673304975Z" level=info msg="cleaning up dead shim" Feb 12 20:25:52.678794 env[1114]: time="2024-02-12T20:25:52.678760861Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n" Feb 12 20:25:52.679060 env[1114]: time="2024-02-12T20:25:52.679034703Z" level=info msg="TearDown network for sandbox \"eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c\" successfully" Feb 12 20:25:52.679060 env[1114]: time="2024-02-12T20:25:52.679056915Z" level=info msg="StopPodSandbox for \"eb1318bb50d801c77e8abff5d94f1ee86e125da7cbaeb8bb367ee3b34910266c\" returns successfully" Feb 12 20:25:52.680764 env[1114]: time="2024-02-12T20:25:52.680395255Z" level=info msg="shim disconnected" id=988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa Feb 12 20:25:52.680764 env[1114]: time="2024-02-12T20:25:52.680424150Z" level=warning msg="cleaning up after shim disconnected" id=988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa namespace=k8s.io Feb 12 20:25:52.680764 env[1114]: 
time="2024-02-12T20:25:52.680432115Z" level=info msg="cleaning up dead shim" Feb 12 20:25:52.686583 env[1114]: time="2024-02-12T20:25:52.686547007Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3933 runtime=io.containerd.runc.v2\n" Feb 12 20:25:52.686820 env[1114]: time="2024-02-12T20:25:52.686800570Z" level=info msg="TearDown network for sandbox \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" successfully" Feb 12 20:25:52.686848 env[1114]: time="2024-02-12T20:25:52.686820177Z" level=info msg="StopPodSandbox for \"988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa\" returns successfully" Feb 12 20:25:52.703254 kubelet[1994]: I0212 20:25:52.703235 1994 scope.go:115] "RemoveContainer" containerID="ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf" Feb 12 20:25:52.704244 env[1114]: time="2024-02-12T20:25:52.704196656Z" level=info msg="RemoveContainer for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\"" Feb 12 20:25:52.706932 env[1114]: time="2024-02-12T20:25:52.706899927Z" level=info msg="RemoveContainer for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" returns successfully" Feb 12 20:25:52.709154 kubelet[1994]: I0212 20:25:52.709099 1994 scope.go:115] "RemoveContainer" containerID="ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf" Feb 12 20:25:52.709382 env[1114]: time="2024-02-12T20:25:52.709304308Z" level=error msg="ContainerStatus for \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\": not found" Feb 12 20:25:52.710511 kubelet[1994]: E0212 20:25:52.709580 1994 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to 
find container \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\": not found" containerID="ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf" Feb 12 20:25:52.710511 kubelet[1994]: I0212 20:25:52.709626 1994 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf} err="failed to get container status \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee8a40b4ee21a9fcbd60b4e4904e59f5880414e8a41d1b5343a72b1e48638acf\": not found" Feb 12 20:25:52.710511 kubelet[1994]: I0212 20:25:52.710083 1994 scope.go:115] "RemoveContainer" containerID="e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518" Feb 12 20:25:52.711370 env[1114]: time="2024-02-12T20:25:52.711338233Z" level=info msg="RemoveContainer for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\"" Feb 12 20:25:52.714094 env[1114]: time="2024-02-12T20:25:52.714047787Z" level=info msg="RemoveContainer for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" returns successfully" Feb 12 20:25:52.714290 kubelet[1994]: I0212 20:25:52.714163 1994 scope.go:115] "RemoveContainer" containerID="4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22" Feb 12 20:25:52.714889 env[1114]: time="2024-02-12T20:25:52.714840687Z" level=info msg="RemoveContainer for \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\"" Feb 12 20:25:52.718183 env[1114]: time="2024-02-12T20:25:52.718133110Z" level=info msg="RemoveContainer for \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\" returns successfully" Feb 12 20:25:52.718350 kubelet[1994]: I0212 20:25:52.718278 1994 scope.go:115] "RemoveContainer" containerID="2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd" Feb 12 20:25:52.719558 env[1114]: 
time="2024-02-12T20:25:52.719527467Z" level=info msg="RemoveContainer for \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\"" Feb 12 20:25:52.723114 env[1114]: time="2024-02-12T20:25:52.723055279Z" level=info msg="RemoveContainer for \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\" returns successfully" Feb 12 20:25:52.723325 kubelet[1994]: I0212 20:25:52.723307 1994 scope.go:115] "RemoveContainer" containerID="d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae" Feb 12 20:25:52.724132 env[1114]: time="2024-02-12T20:25:52.724097374Z" level=info msg="RemoveContainer for \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\"" Feb 12 20:25:52.727053 env[1114]: time="2024-02-12T20:25:52.727024401Z" level=info msg="RemoveContainer for \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\" returns successfully" Feb 12 20:25:52.727150 kubelet[1994]: I0212 20:25:52.727126 1994 scope.go:115] "RemoveContainer" containerID="f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c" Feb 12 20:25:52.727875 env[1114]: time="2024-02-12T20:25:52.727854363Z" level=info msg="RemoveContainer for \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\"" Feb 12 20:25:52.730787 env[1114]: time="2024-02-12T20:25:52.730748658Z" level=info msg="RemoveContainer for \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\" returns successfully" Feb 12 20:25:52.730883 kubelet[1994]: I0212 20:25:52.730858 1994 scope.go:115] "RemoveContainer" containerID="e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518" Feb 12 20:25:52.731049 env[1114]: time="2024-02-12T20:25:52.730996810Z" level=error msg="ContainerStatus for \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\": not found" Feb 12 
20:25:52.731149 kubelet[1994]: E0212 20:25:52.731118 1994 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\": not found" containerID="e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518" Feb 12 20:25:52.731183 kubelet[1994]: I0212 20:25:52.731159 1994 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518} err="failed to get container status \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\": rpc error: code = NotFound desc = an error occurred when try to find container \"e74a2607b2ff857e5d7ae447032065583a8fc2ebd7828214f54cc2c994334518\": not found" Feb 12 20:25:52.731183 kubelet[1994]: I0212 20:25:52.731168 1994 scope.go:115] "RemoveContainer" containerID="4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22" Feb 12 20:25:52.731318 env[1114]: time="2024-02-12T20:25:52.731281724Z" level=error msg="ContainerStatus for \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\": not found" Feb 12 20:25:52.731438 kubelet[1994]: E0212 20:25:52.731419 1994 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\": not found" containerID="4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22" Feb 12 20:25:52.731478 kubelet[1994]: I0212 20:25:52.731455 1994 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22} 
err="failed to get container status \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\": rpc error: code = NotFound desc = an error occurred when try to find container \"4df9bb96c656115e4d4b3a579e04abba31035de77877e2ed1c0e93c088fe2d22\": not found" Feb 12 20:25:52.731478 kubelet[1994]: I0212 20:25:52.731469 1994 scope.go:115] "RemoveContainer" containerID="2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd" Feb 12 20:25:52.731628 env[1114]: time="2024-02-12T20:25:52.731591504Z" level=error msg="ContainerStatus for \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\": not found" Feb 12 20:25:52.731702 kubelet[1994]: E0212 20:25:52.731686 1994 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\": not found" containerID="2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd" Feb 12 20:25:52.731749 kubelet[1994]: I0212 20:25:52.731712 1994 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd} err="failed to get container status \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c24778d14bbf39f3b355022dca064a56addd4f6e11a44da7b048ddcd42191cd\": not found" Feb 12 20:25:52.731749 kubelet[1994]: I0212 20:25:52.731719 1994 scope.go:115] "RemoveContainer" containerID="d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae" Feb 12 20:25:52.731879 env[1114]: time="2024-02-12T20:25:52.731844486Z" level=error msg="ContainerStatus for 
\"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\": not found" Feb 12 20:25:52.732003 kubelet[1994]: E0212 20:25:52.731991 1994 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\": not found" containerID="d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae" Feb 12 20:25:52.732052 kubelet[1994]: I0212 20:25:52.732009 1994 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae} err="failed to get container status \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0f008088308e7398a6d38d5273d945cf5164f3f8906564b1782bd220be19dae\": not found" Feb 12 20:25:52.732052 kubelet[1994]: I0212 20:25:52.732016 1994 scope.go:115] "RemoveContainer" containerID="f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c" Feb 12 20:25:52.732287 env[1114]: time="2024-02-12T20:25:52.732228768Z" level=error msg="ContainerStatus for \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\": not found" Feb 12 20:25:52.732377 kubelet[1994]: E0212 20:25:52.732367 1994 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\": not found" 
containerID="f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c" Feb 12 20:25:52.732405 kubelet[1994]: I0212 20:25:52.732394 1994 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c} err="failed to get container status \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f931052d6e2d23f71e6af7ccc627408e92c3cb2caf88bfde48c3fc954fb7878c\": not found" Feb 12 20:25:52.836413 kubelet[1994]: I0212 20:25:52.833992 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-net\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836413 kubelet[1994]: I0212 20:25:52.834104 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-lib-modules\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836413 kubelet[1994]: I0212 20:25:52.834136 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hubble-tls\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836413 kubelet[1994]: I0212 20:25:52.834701 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2bwp\" (UniqueName: \"kubernetes.io/projected/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-kube-api-access-h2bwp\") pod \"e2e45038-92a3-42ab-bd1e-d9bc6c3f598d\" (UID: \"e2e45038-92a3-42ab-bd1e-d9bc6c3f598d\") " Feb 12 
20:25:52.836413 kubelet[1994]: I0212 20:25:52.834730 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-kernel\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836413 kubelet[1994]: I0212 20:25:52.834751 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-etc-cni-netd\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836685 kubelet[1994]: I0212 20:25:52.834777 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-cilium-config-path\") pod \"e2e45038-92a3-42ab-bd1e-d9bc6c3f598d\" (UID: \"e2e45038-92a3-42ab-bd1e-d9bc6c3f598d\") " Feb 12 20:25:52.836685 kubelet[1994]: I0212 20:25:52.834796 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-xtables-lock\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836685 kubelet[1994]: I0212 20:25:52.834818 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-run\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836685 kubelet[1994]: I0212 20:25:52.834840 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-clustermesh-secrets\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836685 kubelet[1994]: I0212 20:25:52.834879 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hostproc\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836685 kubelet[1994]: I0212 20:25:52.834907 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kpf7\" (UniqueName: \"kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-kube-api-access-8kpf7\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836856 kubelet[1994]: I0212 20:25:52.834928 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-config-path\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836856 kubelet[1994]: I0212 20:25:52.834952 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-cgroup\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836856 kubelet[1994]: I0212 20:25:52.834948 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.836856 kubelet[1994]: I0212 20:25:52.834974 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cni-path\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836856 kubelet[1994]: I0212 20:25:52.835043 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cni-path" (OuterVolumeSpecName: "cni-path") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.836856 kubelet[1994]: I0212 20:25:52.835078 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-bpf-maps\") pod \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\" (UID: \"b211d34d-d3bd-4b59-ac5c-e0c2e9372837\") " Feb 12 20:25:52.836995 kubelet[1994]: I0212 20:25:52.835132 1994 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.836995 kubelet[1994]: I0212 20:25:52.835163 1994 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.836995 kubelet[1994]: I0212 20:25:52.835182 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: 
"b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.836995 kubelet[1994]: I0212 20:25:52.834189 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.836995 kubelet[1994]: W0212 20:25:52.835302 1994 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:25:52.836995 kubelet[1994]: I0212 20:25:52.835515 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.837126 kubelet[1994]: I0212 20:25:52.834240 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.837126 kubelet[1994]: I0212 20:25:52.835548 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hostproc" (OuterVolumeSpecName: "hostproc") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.837126 kubelet[1994]: I0212 20:25:52.835564 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.837126 kubelet[1994]: I0212 20:25:52.835581 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.837126 kubelet[1994]: W0212 20:25:52.835848 1994 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b211d34d-d3bd-4b59-ac5c-e0c2e9372837/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:25:52.837784 kubelet[1994]: I0212 20:25:52.837755 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2e45038-92a3-42ab-bd1e-d9bc6c3f598d" (UID: "e2e45038-92a3-42ab-bd1e-d9bc6c3f598d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:25:52.837817 kubelet[1994]: I0212 20:25:52.837798 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:52.838310 kubelet[1994]: I0212 20:25:52.838281 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:25:52.838529 kubelet[1994]: I0212 20:25:52.838387 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:52.840388 kubelet[1994]: I0212 20:25:52.840339 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-kube-api-access-h2bwp" (OuterVolumeSpecName: "kube-api-access-h2bwp") pod "e2e45038-92a3-42ab-bd1e-d9bc6c3f598d" (UID: "e2e45038-92a3-42ab-bd1e-d9bc6c3f598d"). InnerVolumeSpecName "kube-api-access-h2bwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:52.840436 kubelet[1994]: I0212 20:25:52.840389 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-kube-api-access-8kpf7" (OuterVolumeSpecName: "kube-api-access-8kpf7") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "kube-api-access-8kpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:52.841422 kubelet[1994]: I0212 20:25:52.841397 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b211d34d-d3bd-4b59-ac5c-e0c2e9372837" (UID: "b211d34d-d3bd-4b59-ac5c-e0c2e9372837"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:25:52.935873 kubelet[1994]: I0212 20:25:52.935816 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.935873 kubelet[1994]: I0212 20:25:52.935855 1994 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.935873 kubelet[1994]: I0212 20:25:52.935865 1994 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.935873 kubelet[1994]: I0212 20:25:52.935873 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.935873 kubelet[1994]: I0212 20:25:52.935883 1994 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.935873 kubelet[1994]: I0212 20:25:52.935891 1994 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935902 1994 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8kpf7\" (UniqueName: \"kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-kube-api-access-8kpf7\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935911 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935936 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935971 1994 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935980 1994 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-h2bwp\" (UniqueName: \"kubernetes.io/projected/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d-kube-api-access-h2bwp\") on node 
\"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935988 1994 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.935996 1994 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:52.936231 kubelet[1994]: I0212 20:25:52.936004 1994 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b211d34d-d3bd-4b59-ac5c-e0c2e9372837-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:53.007015 systemd[1]: Removed slice kubepods-besteffort-pode2e45038_92a3_42ab_bd1e_d9bc6c3f598d.slice. Feb 12 20:25:53.013819 systemd[1]: Removed slice kubepods-burstable-podb211d34d_d3bd_4b59_ac5c_e0c2e9372837.slice. Feb 12 20:25:53.013906 systemd[1]: kubepods-burstable-podb211d34d_d3bd_4b59_ac5c_e0c2e9372837.slice: Consumed 6.697s CPU time. Feb 12 20:25:53.286753 systemd[1]: var-lib-kubelet-pods-e2e45038\x2d92a3\x2d42ab\x2dbd1e\x2dd9bc6c3f598d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2bwp.mount: Deactivated successfully. Feb 12 20:25:53.286852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa-rootfs.mount: Deactivated successfully. Feb 12 20:25:53.286899 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-988e391540f0e3b1c224528b255868a096d883786e62107d674d205e7557ddaa-shm.mount: Deactivated successfully. Feb 12 20:25:53.286955 systemd[1]: var-lib-kubelet-pods-b211d34d\x2dd3bd\x2d4b59\x2dac5c\x2de0c2e9372837-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8kpf7.mount: Deactivated successfully. 
Feb 12 20:25:53.287005 systemd[1]: var-lib-kubelet-pods-b211d34d\x2dd3bd\x2d4b59\x2dac5c\x2de0c2e9372837-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:25:53.287054 systemd[1]: var-lib-kubelet-pods-b211d34d\x2dd3bd\x2d4b59\x2dac5c\x2de0c2e9372837-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:25:53.574707 kubelet[1994]: E0212 20:25:53.574584 1994 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:25:54.217349 sshd[3786]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:54.220354 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:48766.service: Deactivated successfully. Feb 12 20:25:54.220941 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:25:54.221502 systemd-logind[1107]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:25:54.222616 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:48782.service. Feb 12 20:25:54.223325 systemd-logind[1107]: Removed session 23. Feb 12 20:25:54.257267 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 48782 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:54.258089 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:54.261124 systemd-logind[1107]: New session 24 of user core. Feb 12 20:25:54.261961 systemd[1]: Started session-24.scope. 
Feb 12 20:25:54.553075 kubelet[1994]: I0212 20:25:54.553007 1994 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b211d34d-d3bd-4b59-ac5c-e0c2e9372837 path="/var/lib/kubelet/pods/b211d34d-d3bd-4b59-ac5c-e0c2e9372837/volumes" Feb 12 20:25:54.553649 kubelet[1994]: I0212 20:25:54.553616 1994 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e2e45038-92a3-42ab-bd1e-d9bc6c3f598d path="/var/lib/kubelet/pods/e2e45038-92a3-42ab-bd1e-d9bc6c3f598d/volumes" Feb 12 20:25:55.421369 sshd[3951]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:55.423857 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:48782.service: Deactivated successfully. Feb 12 20:25:55.424516 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 20:25:55.425218 systemd-logind[1107]: Session 24 logged out. Waiting for processes to exit. Feb 12 20:25:55.426783 systemd[1]: Started sshd@24-10.0.0.70:22-10.0.0.1:58450.service. Feb 12 20:25:55.427615 systemd-logind[1107]: Removed session 24. Feb 12 20:25:55.462609 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:55.463843 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:55.467157 systemd-logind[1107]: New session 25 of user core. Feb 12 20:25:55.467962 systemd[1]: Started session-25.scope. 
Feb 12 20:25:55.631207 kubelet[1994]: I0212 20:25:55.631174 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:55.631631 kubelet[1994]: E0212 20:25:55.631617 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b211d34d-d3bd-4b59-ac5c-e0c2e9372837" containerName="apply-sysctl-overwrites" Feb 12 20:25:55.631705 kubelet[1994]: E0212 20:25:55.631691 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2e45038-92a3-42ab-bd1e-d9bc6c3f598d" containerName="cilium-operator" Feb 12 20:25:55.631808 kubelet[1994]: E0212 20:25:55.631792 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b211d34d-d3bd-4b59-ac5c-e0c2e9372837" containerName="cilium-agent" Feb 12 20:25:55.631877 kubelet[1994]: E0212 20:25:55.631864 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b211d34d-d3bd-4b59-ac5c-e0c2e9372837" containerName="mount-cgroup" Feb 12 20:25:55.631951 kubelet[1994]: E0212 20:25:55.631938 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b211d34d-d3bd-4b59-ac5c-e0c2e9372837" containerName="mount-bpf-fs" Feb 12 20:25:55.632025 kubelet[1994]: E0212 20:25:55.632011 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b211d34d-d3bd-4b59-ac5c-e0c2e9372837" containerName="clean-cilium-state" Feb 12 20:25:55.632118 kubelet[1994]: I0212 20:25:55.632104 1994 memory_manager.go:346] "RemoveStaleState removing state" podUID="e2e45038-92a3-42ab-bd1e-d9bc6c3f598d" containerName="cilium-operator" Feb 12 20:25:55.632211 kubelet[1994]: I0212 20:25:55.632197 1994 memory_manager.go:346] "RemoveStaleState removing state" podUID="b211d34d-d3bd-4b59-ac5c-e0c2e9372837" containerName="cilium-agent" Feb 12 20:25:55.637178 systemd[1]: Created slice kubepods-burstable-podd45ab8d0_7d84_4433_8984_d89d3efd0ca2.slice. 
Feb 12 20:25:55.763493 kubelet[1994]: I0212 20:25:55.763365 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-cgroup\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763642 kubelet[1994]: I0212 20:25:55.763518 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-run\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763642 kubelet[1994]: I0212 20:25:55.763553 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hostproc\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763642 kubelet[1994]: I0212 20:25:55.763571 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cni-path\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763642 kubelet[1994]: I0212 20:25:55.763590 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-net\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763742 kubelet[1994]: I0212 20:25:55.763683 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-lib-modules\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763742 kubelet[1994]: I0212 20:25:55.763711 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7f89\" (UniqueName: \"kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-kube-api-access-f7f89\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763742 kubelet[1994]: I0212 20:25:55.763741 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-xtables-lock\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763810 kubelet[1994]: I0212 20:25:55.763766 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-clustermesh-secrets\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763810 kubelet[1994]: I0212 20:25:55.763785 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-config-path\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763810 kubelet[1994]: I0212 20:25:55.763806 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-kernel\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763882 kubelet[1994]: I0212 20:25:55.763826 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-bpf-maps\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763882 kubelet[1994]: I0212 20:25:55.763853 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-etc-cni-netd\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.763882 kubelet[1994]: I0212 20:25:55.763872 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-ipsec-secrets\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.764055 kubelet[1994]: I0212 20:25:55.763889 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hubble-tls\") pod \"cilium-s5tfx\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " pod="kube-system/cilium-s5tfx" Feb 12 20:25:55.815888 sshd[3965]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:55.819318 systemd[1]: sshd@24-10.0.0.70:22-10.0.0.1:58450.service: Deactivated successfully. Feb 12 20:25:55.820033 systemd[1]: session-25.scope: Deactivated successfully. 
Feb 12 20:25:55.820656 systemd-logind[1107]: Session 25 logged out. Waiting for processes to exit. Feb 12 20:25:55.821987 systemd[1]: Started sshd@25-10.0.0.70:22-10.0.0.1:58462.service. Feb 12 20:25:55.823578 systemd-logind[1107]: Removed session 25. Feb 12 20:25:55.855269 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 58462 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:55.856463 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:55.860070 systemd-logind[1107]: New session 26 of user core. Feb 12 20:25:55.861074 systemd[1]: Started session-26.scope. Feb 12 20:25:55.940892 kubelet[1994]: E0212 20:25:55.940855 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:55.941637 env[1114]: time="2024-02-12T20:25:55.941331079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5tfx,Uid:d45ab8d0-7d84-4433-8984-d89d3efd0ca2,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:56.018671 env[1114]: time="2024-02-12T20:25:56.018527357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:56.018671 env[1114]: time="2024-02-12T20:25:56.018575679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:56.018671 env[1114]: time="2024-02-12T20:25:56.018589525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:56.018883 env[1114]: time="2024-02-12T20:25:56.018839090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8 pid=3998 runtime=io.containerd.runc.v2 Feb 12 20:25:56.031808 systemd[1]: Started cri-containerd-ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8.scope. Feb 12 20:25:56.052490 env[1114]: time="2024-02-12T20:25:56.052444801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5tfx,Uid:d45ab8d0-7d84-4433-8984-d89d3efd0ca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8\"" Feb 12 20:25:56.053223 kubelet[1994]: E0212 20:25:56.053203 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:56.055252 env[1114]: time="2024-02-12T20:25:56.055197909Z" level=info msg="CreateContainer within sandbox \"ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:25:56.264503 env[1114]: time="2024-02-12T20:25:56.264437905Z" level=info msg="CreateContainer within sandbox \"ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\"" Feb 12 20:25:56.265061 env[1114]: time="2024-02-12T20:25:56.265015072Z" level=info msg="StartContainer for \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\"" Feb 12 20:25:56.277795 systemd[1]: Started cri-containerd-b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b.scope. 
Feb 12 20:25:56.287100 systemd[1]: cri-containerd-b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b.scope: Deactivated successfully. Feb 12 20:25:56.287366 systemd[1]: Stopped cri-containerd-b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b.scope. Feb 12 20:25:56.407401 env[1114]: time="2024-02-12T20:25:56.407315344Z" level=info msg="shim disconnected" id=b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b Feb 12 20:25:56.407401 env[1114]: time="2024-02-12T20:25:56.407382030Z" level=warning msg="cleaning up after shim disconnected" id=b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b namespace=k8s.io Feb 12 20:25:56.407401 env[1114]: time="2024-02-12T20:25:56.407393422Z" level=info msg="cleaning up dead shim" Feb 12 20:25:56.414567 env[1114]: time="2024-02-12T20:25:56.414453424Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4056 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:25:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:25:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:25:56.414989 env[1114]: time="2024-02-12T20:25:56.414855159Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 12 20:25:56.415258 env[1114]: time="2024-02-12T20:25:56.415176871Z" level=error msg="Failed to pipe stderr of container \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\"" error="reading from a closed fifo" Feb 12 20:25:56.417685 env[1114]: time="2024-02-12T20:25:56.417629588Z" level=error msg="Failed to pipe stdout of container 
\"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\"" error="reading from a closed fifo" Feb 12 20:25:56.483936 env[1114]: time="2024-02-12T20:25:56.483847268Z" level=error msg="StartContainer for \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:25:56.484255 kubelet[1994]: E0212 20:25:56.484212 1994 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b" Feb 12 20:25:56.484377 kubelet[1994]: E0212 20:25:56.484366 1994 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:25:56.484377 kubelet[1994]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:25:56.484377 kubelet[1994]: rm /hostbin/cilium-mount Feb 12 20:25:56.484377 kubelet[1994]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f7f89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s5tfx_kube-system(d45ab8d0-7d84-4433-8984-d89d3efd0ca2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:25:56.484542 kubelet[1994]: E0212 20:25:56.484406 1994 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s5tfx" podUID=d45ab8d0-7d84-4433-8984-d89d3efd0ca2 Feb 12 20:25:56.724990 env[1114]: time="2024-02-12T20:25:56.724877580Z" level=info msg="StopPodSandbox for \"ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8\"" Feb 12 20:25:56.724990 env[1114]: time="2024-02-12T20:25:56.724933055Z" level=info msg="Container to stop \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:25:56.730455 systemd[1]: cri-containerd-ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8.scope: Deactivated successfully. Feb 12 20:25:56.830899 env[1114]: time="2024-02-12T20:25:56.830826386Z" level=info msg="shim disconnected" id=ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8 Feb 12 20:25:56.830899 env[1114]: time="2024-02-12T20:25:56.830878765Z" level=warning msg="cleaning up after shim disconnected" id=ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8 namespace=k8s.io Feb 12 20:25:56.830899 env[1114]: time="2024-02-12T20:25:56.830886932Z" level=info msg="cleaning up dead shim" Feb 12 20:25:56.837338 env[1114]: time="2024-02-12T20:25:56.837297118Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n" Feb 12 20:25:56.837645 env[1114]: time="2024-02-12T20:25:56.837610644Z" level=info msg="TearDown network for sandbox \"ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8\" successfully" Feb 12 20:25:56.837675 env[1114]: time="2024-02-12T20:25:56.837639639Z" level=info msg="StopPodSandbox for \"ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8\" returns successfully" Feb 12 20:25:56.869352 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8-rootfs.mount: Deactivated successfully. Feb 12 20:25:56.869474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba32326a0fe7802b7b6aa86308c12fe81a4d4747908832cb372e684b23c5a7a8-shm.mount: Deactivated successfully. Feb 12 20:25:56.871178 kubelet[1994]: I0212 20:25:56.871136 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-xtables-lock\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871178 kubelet[1994]: I0212 20:25:56.871186 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-clustermesh-secrets\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871572 kubelet[1994]: I0212 20:25:56.871205 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-cgroup\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871572 kubelet[1994]: I0212 20:25:56.871222 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cni-path\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871572 kubelet[1994]: I0212 20:25:56.871246 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-net\") 
pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871572 kubelet[1994]: I0212 20:25:56.871241 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.871572 kubelet[1994]: I0212 20:25:56.871266 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7f89\" (UniqueName: \"kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-kube-api-access-f7f89\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871572 kubelet[1994]: I0212 20:25:56.871342 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hostproc\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871726 kubelet[1994]: I0212 20:25:56.871377 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-run\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871726 kubelet[1994]: I0212 20:25:56.871399 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-lib-modules\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871726 kubelet[1994]: I0212 20:25:56.871420 1994 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-etc-cni-netd\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871726 kubelet[1994]: I0212 20:25:56.871442 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-bpf-maps\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871726 kubelet[1994]: I0212 20:25:56.871476 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-config-path\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871726 kubelet[1994]: I0212 20:25:56.871501 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-kernel\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871870 kubelet[1994]: I0212 20:25:56.871537 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hubble-tls\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" (UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871870 kubelet[1994]: I0212 20:25:56.871565 1994 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-ipsec-secrets\") pod \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\" 
(UID: \"d45ab8d0-7d84-4433-8984-d89d3efd0ca2\") " Feb 12 20:25:56.871870 kubelet[1994]: I0212 20:25:56.871560 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.871870 kubelet[1994]: I0212 20:25:56.871596 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hostproc" (OuterVolumeSpecName: "hostproc") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.871870 kubelet[1994]: I0212 20:25:56.871613 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.871870 kubelet[1994]: I0212 20:25:56.871625 1994 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.872007 kubelet[1994]: I0212 20:25:56.871643 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.872007 kubelet[1994]: I0212 20:25:56.871653 1994 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.872007 kubelet[1994]: I0212 20:25:56.871672 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.872007 kubelet[1994]: I0212 20:25:56.871689 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cni-path" (OuterVolumeSpecName: "cni-path") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.872007 kubelet[1994]: I0212 20:25:56.871705 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.872119 kubelet[1994]: I0212 20:25:56.871720 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.872119 kubelet[1994]: W0212 20:25:56.871804 1994 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d45ab8d0-7d84-4433-8984-d89d3efd0ca2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:25:56.872119 kubelet[1994]: I0212 20:25:56.871914 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:25:56.873636 kubelet[1994]: I0212 20:25:56.873506 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-kube-api-access-f7f89" (OuterVolumeSpecName: "kube-api-access-f7f89") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "kube-api-access-f7f89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:56.873636 kubelet[1994]: I0212 20:25:56.873592 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:25:56.874540 systemd[1]: var-lib-kubelet-pods-d45ab8d0\x2d7d84\x2d4433\x2d8984\x2dd89d3efd0ca2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df7f89.mount: Deactivated successfully. 
Feb 12 20:25:56.877834 kubelet[1994]: I0212 20:25:56.875715 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:25:56.877834 kubelet[1994]: I0212 20:25:56.876369 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:25:56.876541 systemd[1]: var-lib-kubelet-pods-d45ab8d0\x2d7d84\x2d4433\x2d8984\x2dd89d3efd0ca2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:25:56.876648 systemd[1]: var-lib-kubelet-pods-d45ab8d0\x2d7d84\x2d4433\x2d8984\x2dd89d3efd0ca2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:25:56.878358 systemd[1]: var-lib-kubelet-pods-d45ab8d0\x2d7d84\x2d4433\x2d8984\x2dd89d3efd0ca2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:25:56.878544 kubelet[1994]: I0212 20:25:56.878506 1994 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d45ab8d0-7d84-4433-8984-d89d3efd0ca2" (UID: "d45ab8d0-7d84-4433-8984-d89d3efd0ca2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:25:56.971866 kubelet[1994]: I0212 20:25:56.971815 1994 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.971866 kubelet[1994]: I0212 20:25:56.971854 1994 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-f7f89\" (UniqueName: \"kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-kube-api-access-f7f89\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.971866 kubelet[1994]: I0212 20:25:56.971864 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.971866 kubelet[1994]: I0212 20:25:56.971872 1994 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.971866 kubelet[1994]: I0212 20:25:56.971881 1994 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971889 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971897 1994 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971905 1994 
reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971913 1994 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971920 1994 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971928 1994 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971938 1994 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:56.972224 kubelet[1994]: I0212 20:25:56.971946 1994 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d45ab8d0-7d84-4433-8984-d89d3efd0ca2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 20:25:57.727317 kubelet[1994]: I0212 20:25:57.727286 1994 scope.go:115] "RemoveContainer" containerID="b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b" Feb 12 20:25:57.728714 env[1114]: time="2024-02-12T20:25:57.728138592Z" level=info msg="RemoveContainer for \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\"" Feb 12 20:25:57.730752 systemd[1]: Removed slice 
kubepods-burstable-podd45ab8d0_7d84_4433_8984_d89d3efd0ca2.slice. Feb 12 20:25:57.764653 env[1114]: time="2024-02-12T20:25:57.764581428Z" level=info msg="RemoveContainer for \"b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b\" returns successfully" Feb 12 20:25:57.791856 kubelet[1994]: I0212 20:25:57.791815 1994 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:57.792105 kubelet[1994]: E0212 20:25:57.792089 1994 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d45ab8d0-7d84-4433-8984-d89d3efd0ca2" containerName="mount-cgroup" Feb 12 20:25:57.792224 kubelet[1994]: I0212 20:25:57.792208 1994 memory_manager.go:346] "RemoveStaleState removing state" podUID="d45ab8d0-7d84-4433-8984-d89d3efd0ca2" containerName="mount-cgroup" Feb 12 20:25:57.797595 systemd[1]: Created slice kubepods-burstable-pod01f0a580_9dc7_40cf_8f9f_07aa0fca85fc.slice. Feb 12 20:25:57.978498 kubelet[1994]: I0212 20:25:57.978380 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-cilium-cgroup\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978498 kubelet[1994]: I0212 20:25:57.978426 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-clustermesh-secrets\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978498 kubelet[1994]: I0212 20:25:57.978448 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-hostproc\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " 
pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978958 kubelet[1994]: I0212 20:25:57.978506 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-bpf-maps\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978958 kubelet[1994]: I0212 20:25:57.978545 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-lib-modules\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978958 kubelet[1994]: I0212 20:25:57.978612 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-cilium-config-path\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978958 kubelet[1994]: I0212 20:25:57.978655 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlg5\" (UniqueName: \"kubernetes.io/projected/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-kube-api-access-ddlg5\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978958 kubelet[1994]: I0212 20:25:57.978708 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-cni-path\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.978958 kubelet[1994]: I0212 20:25:57.978755 1994 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-cilium-run\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.979235 kubelet[1994]: I0212 20:25:57.978783 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-xtables-lock\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.979235 kubelet[1994]: I0212 20:25:57.978821 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-etc-cni-netd\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.979235 kubelet[1994]: I0212 20:25:57.978855 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-cilium-ipsec-secrets\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.979235 kubelet[1994]: I0212 20:25:57.978882 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-host-proc-sys-net\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.979235 kubelet[1994]: I0212 20:25:57.978905 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-hubble-tls\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:57.979235 kubelet[1994]: I0212 20:25:57.978957 1994 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01f0a580-9dc7-40cf-8f9f-07aa0fca85fc-host-proc-sys-kernel\") pod \"cilium-sdl7l\" (UID: \"01f0a580-9dc7-40cf-8f9f-07aa0fca85fc\") " pod="kube-system/cilium-sdl7l" Feb 12 20:25:58.099716 kubelet[1994]: E0212 20:25:58.099674 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:58.100198 env[1114]: time="2024-02-12T20:25:58.100134680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdl7l,Uid:01f0a580-9dc7-40cf-8f9f-07aa0fca85fc,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:58.200119 env[1114]: time="2024-02-12T20:25:58.200051326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:58.200119 env[1114]: time="2024-02-12T20:25:58.200094859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:58.200119 env[1114]: time="2024-02-12T20:25:58.200108234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:58.200339 env[1114]: time="2024-02-12T20:25:58.200261776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2 pid=4115 runtime=io.containerd.runc.v2 Feb 12 20:25:58.209781 systemd[1]: Started cri-containerd-37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2.scope. Feb 12 20:25:58.230611 env[1114]: time="2024-02-12T20:25:58.230510492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdl7l,Uid:01f0a580-9dc7-40cf-8f9f-07aa0fca85fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\"" Feb 12 20:25:58.231737 kubelet[1994]: E0212 20:25:58.231721 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:25:58.234476 env[1114]: time="2024-02-12T20:25:58.234431858Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:25:58.461544 env[1114]: time="2024-02-12T20:25:58.460848617Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d\"" Feb 12 20:25:58.465592 env[1114]: time="2024-02-12T20:25:58.465529466Z" level=info msg="StartContainer for \"9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d\"" Feb 12 20:25:58.477036 systemd[1]: Started cri-containerd-9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d.scope. 
Feb 12 20:25:58.505599 systemd[1]: cri-containerd-9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d.scope: Deactivated successfully.
Feb 12 20:25:58.537647 kubelet[1994]: E0212 20:25:58.537619 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:25:58.575736 kubelet[1994]: E0212 20:25:58.575705 1994 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:25:58.601778 env[1114]: time="2024-02-12T20:25:58.601683144Z" level=info msg="StartContainer for \"9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d\" returns successfully"
Feb 12 20:25:58.602928 kubelet[1994]: I0212 20:25:58.602896 1994 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d45ab8d0-7d84-4433-8984-d89d3efd0ca2 path="/var/lib/kubelet/pods/d45ab8d0-7d84-4433-8984-d89d3efd0ca2/volumes"
Feb 12 20:25:58.701900 env[1114]: time="2024-02-12T20:25:58.701849715Z" level=info msg="shim disconnected" id=9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d
Feb 12 20:25:58.701900 env[1114]: time="2024-02-12T20:25:58.701898648Z" level=warning msg="cleaning up after shim disconnected" id=9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d namespace=k8s.io
Feb 12 20:25:58.702160 env[1114]: time="2024-02-12T20:25:58.701909800Z" level=info msg="cleaning up dead shim"
Feb 12 20:25:58.707365 env[1114]: time="2024-02-12T20:25:58.707326118Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4197 runtime=io.containerd.runc.v2\n"
Feb 12 20:25:58.768853 kubelet[1994]: E0212 20:25:58.768708 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:25:58.770387 env[1114]: time="2024-02-12T20:25:58.770088828Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:25:58.930504 env[1114]: time="2024-02-12T20:25:58.930434496Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e\""
Feb 12 20:25:58.930960 env[1114]: time="2024-02-12T20:25:58.930920309Z" level=info msg="StartContainer for \"3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e\""
Feb 12 20:25:58.944582 systemd[1]: Started cri-containerd-3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e.scope.
Feb 12 20:25:58.970775 systemd[1]: cri-containerd-3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e.scope: Deactivated successfully.
Feb 12 20:25:59.053365 env[1114]: time="2024-02-12T20:25:59.053305346Z" level=info msg="StartContainer for \"3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e\" returns successfully"
Feb 12 20:25:59.225280 env[1114]: time="2024-02-12T20:25:59.225236983Z" level=info msg="shim disconnected" id=3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e
Feb 12 20:25:59.225571 env[1114]: time="2024-02-12T20:25:59.225543045Z" level=warning msg="cleaning up after shim disconnected" id=3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e namespace=k8s.io
Feb 12 20:25:59.225571 env[1114]: time="2024-02-12T20:25:59.225561419Z" level=info msg="cleaning up dead shim"
Feb 12 20:25:59.231664 env[1114]: time="2024-02-12T20:25:59.231604907Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4258 runtime=io.containerd.runc.v2\n"
Feb 12 20:25:59.513167 kubelet[1994]: W0212 20:25:59.513020 1994 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd45ab8d0_7d84_4433_8984_d89d3efd0ca2.slice/cri-containerd-b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b.scope WatchSource:0}: container "b7ed54104d46480761466d390284092571f709152c571634de45d101f39f4c0b" in namespace "k8s.io": not found
Feb 12 20:25:59.772012 kubelet[1994]: E0212 20:25:59.771727 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:25:59.773389 env[1114]: time="2024-02-12T20:25:59.773318985Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:25:59.962752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973903297.mount: Deactivated successfully.
Feb 12 20:26:00.153999 env[1114]: time="2024-02-12T20:26:00.153937852Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71\""
Feb 12 20:26:00.154795 env[1114]: time="2024-02-12T20:26:00.154762809Z" level=info msg="StartContainer for \"0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71\""
Feb 12 20:26:00.172278 systemd[1]: Started cri-containerd-0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71.scope.
Feb 12 20:26:00.197058 systemd[1]: cri-containerd-0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71.scope: Deactivated successfully.
Feb 12 20:26:00.253106 env[1114]: time="2024-02-12T20:26:00.253042498Z" level=info msg="StartContainer for \"0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71\" returns successfully"
Feb 12 20:26:00.267930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71-rootfs.mount: Deactivated successfully.
Feb 12 20:26:00.329335 env[1114]: time="2024-02-12T20:26:00.329280222Z" level=info msg="shim disconnected" id=0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71
Feb 12 20:26:00.329335 env[1114]: time="2024-02-12T20:26:00.329332390Z" level=warning msg="cleaning up after shim disconnected" id=0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71 namespace=k8s.io
Feb 12 20:26:00.329640 env[1114]: time="2024-02-12T20:26:00.329344042Z" level=info msg="cleaning up dead shim"
Feb 12 20:26:00.335022 env[1114]: time="2024-02-12T20:26:00.334987426Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4315 runtime=io.containerd.runc.v2\n"
Feb 12 20:26:00.775479 kubelet[1994]: E0212 20:26:00.775439 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:00.779088 env[1114]: time="2024-02-12T20:26:00.779041215Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:26:00.889568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356572149.mount: Deactivated successfully.
Feb 12 20:26:00.984055 env[1114]: time="2024-02-12T20:26:00.983985708Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456\""
Feb 12 20:26:00.984686 env[1114]: time="2024-02-12T20:26:00.984606447Z" level=info msg="StartContainer for \"fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456\""
Feb 12 20:26:00.997684 systemd[1]: Started cri-containerd-fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456.scope.
Feb 12 20:26:01.019598 systemd[1]: cri-containerd-fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456.scope: Deactivated successfully.
Feb 12 20:26:01.090412 env[1114]: time="2024-02-12T20:26:01.090341985Z" level=info msg="StartContainer for \"fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456\" returns successfully"
Feb 12 20:26:01.112953 kubelet[1994]: I0212 20:26:01.112931 1994 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:26:01.112886102 +0000 UTC m=+102.695237311 LastTransitionTime:2024-02-12 20:26:01.112886102 +0000 UTC m=+102.695237311 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 20:26:01.179723 env[1114]: time="2024-02-12T20:26:01.179666667Z" level=info msg="shim disconnected" id=fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456
Feb 12 20:26:01.179723 env[1114]: time="2024-02-12T20:26:01.179710431Z" level=warning msg="cleaning up after shim disconnected" id=fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456 namespace=k8s.io
Feb 12 20:26:01.179723 env[1114]: time="2024-02-12T20:26:01.179720269Z" level=info msg="cleaning up dead shim"
Feb 12 20:26:01.186713 env[1114]: time="2024-02-12T20:26:01.186658770Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4368 runtime=io.containerd.runc.v2\n"
Feb 12 20:26:01.779737 kubelet[1994]: E0212 20:26:01.779696 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:01.782548 env[1114]: time="2024-02-12T20:26:01.781735106Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:26:02.038242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170079905.mount: Deactivated successfully.
Feb 12 20:26:02.130861 env[1114]: time="2024-02-12T20:26:02.130816108Z" level=info msg="CreateContainer within sandbox \"37c7560e29ff16790382a4e1e167a9cb6912a11113fc8d277bb5e79d7e063da2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d\""
Feb 12 20:26:02.131719 env[1114]: time="2024-02-12T20:26:02.131673506Z" level=info msg="StartContainer for \"cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d\""
Feb 12 20:26:02.148669 systemd[1]: Started cri-containerd-cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d.scope.
Feb 12 20:26:02.277490 env[1114]: time="2024-02-12T20:26:02.277432971Z" level=info msg="StartContainer for \"cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d\" returns successfully"
Feb 12 20:26:02.291394 systemd[1]: run-containerd-runc-k8s.io-cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d-runc.gLUNz1.mount: Deactivated successfully.
Feb 12 20:26:02.410186 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:26:02.621512 kubelet[1994]: W0212 20:26:02.621392 1994 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f0a580_9dc7_40cf_8f9f_07aa0fca85fc.slice/cri-containerd-9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d.scope WatchSource:0}: task 9dc720065bb7ff28203f3debfbac54c3957952b3ce7803918384f9d4b804ba0d not found: not found
Feb 12 20:26:02.784367 kubelet[1994]: E0212 20:26:02.784339 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:03.786594 kubelet[1994]: E0212 20:26:03.786563 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:04.788329 kubelet[1994]: E0212 20:26:04.788296 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:04.876038 systemd-networkd[1028]: lxc_health: Link UP
Feb 12 20:26:04.892958 systemd-networkd[1028]: lxc_health: Gained carrier
Feb 12 20:26:04.893229 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:26:05.728573 kubelet[1994]: W0212 20:26:05.728520 1994 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f0a580_9dc7_40cf_8f9f_07aa0fca85fc.slice/cri-containerd-3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e.scope WatchSource:0}: task 3c22c950a7e9a1b3e55dfdbb994f62495c44c5f1d06edcd2379b8833795f723e not found: not found
Feb 12 20:26:06.101949 kubelet[1994]: E0212 20:26:06.101925 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:06.112413 kubelet[1994]: I0212 20:26:06.112375 1994 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-sdl7l" podStartSLOduration=9.112338112 pod.CreationTimestamp="2024-02-12 20:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:02.937842239 +0000 UTC m=+104.520193448" watchObservedRunningTime="2024-02-12 20:26:06.112338112 +0000 UTC m=+107.694689321"
Feb 12 20:26:06.176384 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Feb 12 20:26:06.419223 systemd[1]: run-containerd-runc-k8s.io-cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d-runc.lP8Sdr.mount: Deactivated successfully.
Feb 12 20:26:06.537543 kubelet[1994]: E0212 20:26:06.537512 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:06.791418 kubelet[1994]: E0212 20:26:06.791397 1994 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:08.838785 kubelet[1994]: W0212 20:26:08.838743 1994 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f0a580_9dc7_40cf_8f9f_07aa0fca85fc.slice/cri-containerd-0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71.scope WatchSource:0}: task 0bd52a8c8dad0529a20a752d05c8f235dcedfec881d5fa0a22f3c7cdb2227b71 not found: not found
Feb 12 20:26:10.586507 systemd[1]: run-containerd-runc-k8s.io-cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d-runc.eNbmym.mount: Deactivated successfully.
Feb 12 20:26:11.944726 kubelet[1994]: W0212 20:26:11.944682 1994 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f0a580_9dc7_40cf_8f9f_07aa0fca85fc.slice/cri-containerd-fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456.scope WatchSource:0}: task fe7d4e86f7df68a71c97d1600eb3564f167e91ddd17a25a346a1dff743120456 not found: not found
Feb 12 20:26:12.670201 systemd[1]: run-containerd-runc-k8s.io-cfb753c756e872da2b9440411b0d8d65cc3522b7ee35dc0b02c0c2f5d69b3b6d-runc.3wUENI.mount: Deactivated successfully.
Feb 12 20:26:12.714490 sshd[3978]: pam_unix(sshd:session): session closed for user core
Feb 12 20:26:12.717006 systemd[1]: sshd@25-10.0.0.70:22-10.0.0.1:58462.service: Deactivated successfully.
Feb 12 20:26:12.717841 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 20:26:12.718481 systemd-logind[1107]: Session 26 logged out. Waiting for processes to exit.
Feb 12 20:26:12.719252 systemd-logind[1107]: Removed session 26.