Feb 12 20:25:25.844471 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:25:25.844490 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:25:25.844498 kernel: BIOS-provided physical RAM map:
Feb 12 20:25:25.844504 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:25:25.844518 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:25:25.844524 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:25:25.844567 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 12 20:25:25.844573 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 12 20:25:25.844580 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:25:25.844586 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:25:25.844591 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 12 20:25:25.844597 kernel: NX (Execute Disable) protection: active
Feb 12 20:25:25.844602 kernel: SMBIOS 2.8 present.
Feb 12 20:25:25.844608 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 12 20:25:25.844617 kernel: Hypervisor detected: KVM
Feb 12 20:25:25.844623 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:25:25.844629 kernel: kvm-clock: cpu 0, msr 84faa001, primary cpu clock
Feb 12 20:25:25.844635 kernel: kvm-clock: using sched offset of 2161177451 cycles
Feb 12 20:25:25.844641 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:25:25.844647 kernel: tsc: Detected 2794.748 MHz processor
Feb 12 20:25:25.844654 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:25:25.844660 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:25:25.844666 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 12 20:25:25.844674 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:25:25.844680 kernel: Using GB pages for direct mapping
Feb 12 20:25:25.844686 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:25:25.844692 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 12 20:25:25.844698 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:25.844705 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:25.844711 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:25.844717 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 12 20:25:25.844723 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:25.844730 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:25.844737 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:25:25.844743 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 12 20:25:25.844749 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 12 20:25:25.844755 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 12 20:25:25.844761 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 12 20:25:25.844767 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 12 20:25:25.844773 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 12 20:25:25.844783 kernel: No NUMA configuration found
Feb 12 20:25:25.844790 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 12 20:25:25.844796 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 12 20:25:25.844803 kernel: Zone ranges:
Feb 12 20:25:25.844809 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:25:25.844816 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 12 20:25:25.844824 kernel: Normal empty
Feb 12 20:25:25.844830 kernel: Movable zone start for each node
Feb 12 20:25:25.844837 kernel: Early memory node ranges
Feb 12 20:25:25.844843 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:25:25.844850 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 12 20:25:25.844856 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 12 20:25:25.844863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:25:25.844869 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:25:25.844876 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 12 20:25:25.844884 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:25:25.844891 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:25:25.844897 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:25:25.844904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:25:25.844911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:25:25.844917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:25:25.844924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:25:25.844931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:25:25.844937 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:25:25.844945 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 20:25:25.844951 kernel: TSC deadline timer available
Feb 12 20:25:25.844958 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 12 20:25:25.844964 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 12 20:25:25.844971 kernel: kvm-guest: setup PV sched yield
Feb 12 20:25:25.844978 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 12 20:25:25.844985 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:25:25.844991 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:25:25.844998 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 12 20:25:25.845005 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 12 20:25:25.845013 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 12 20:25:25.845019 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 12 20:25:25.845026 kernel: kvm-guest: setup async PF for cpu 0
Feb 12 20:25:25.845032 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 12 20:25:25.845039 kernel: kvm-guest: PV spinlocks enabled
Feb 12 20:25:25.845045 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 20:25:25.845052 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 12 20:25:25.845058 kernel: Policy zone: DMA32
Feb 12 20:25:25.845066 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:25:25.845074 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:25:25.845081 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:25:25.845088 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:25:25.845094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:25:25.845102 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 12 20:25:25.845108 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 20:25:25.845115 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:25:25.845121 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:25:25.845130 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:25:25.845137 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:25:25.845144 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 20:25:25.845151 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:25:25.845176 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:25:25.845183 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:25:25.845190 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 20:25:25.845197 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 12 20:25:25.845203 kernel: random: crng init done
Feb 12 20:25:25.845211 kernel: Console: colour VGA+ 80x25
Feb 12 20:25:25.845218 kernel: printk: console [ttyS0] enabled
Feb 12 20:25:25.845224 kernel: ACPI: Core revision 20210730
Feb 12 20:25:25.845231 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 20:25:25.845238 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:25:25.845244 kernel: x2apic enabled
Feb 12 20:25:25.845251 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:25:25.845258 kernel: kvm-guest: setup PV IPIs
Feb 12 20:25:25.845264 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:25:25.845272 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:25:25.845279 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 12 20:25:25.845285 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 12 20:25:25.845292 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 12 20:25:25.845299 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 12 20:25:25.845305 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:25:25.845312 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:25:25.845319 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:25:25.845325 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:25:25.845338 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 12 20:25:25.845345 kernel: RETBleed: Mitigation: untrained return thunk
Feb 12 20:25:25.845352 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 20:25:25.845360 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 20:25:25.845367 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 20:25:25.845374 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 20:25:25.845381 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 20:25:25.845388 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 20:25:25.845396 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 20:25:25.845404 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:25:25.845411 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:25:25.845418 kernel: LSM: Security Framework initializing
Feb 12 20:25:25.845424 kernel: SELinux: Initializing.
Feb 12 20:25:25.845432 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:25:25.845439 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:25:25.845446 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 12 20:25:25.845454 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 12 20:25:25.845461 kernel: ... version: 0
Feb 12 20:25:25.845468 kernel: ... bit width: 48
Feb 12 20:25:25.845474 kernel: ... generic registers: 6
Feb 12 20:25:25.845481 kernel: ... value mask: 0000ffffffffffff
Feb 12 20:25:25.845488 kernel: ... max period: 00007fffffffffff
Feb 12 20:25:25.845495 kernel: ... fixed-purpose events: 0
Feb 12 20:25:25.845502 kernel: ... event mask: 000000000000003f
Feb 12 20:25:25.845518 kernel: signal: max sigframe size: 1776
Feb 12 20:25:25.845526 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:25:25.845534 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:25:25.845541 kernel: x86: Booting SMP configuration:
Feb 12 20:25:25.845548 kernel: .... node #0, CPUs: #1
Feb 12 20:25:25.845555 kernel: kvm-clock: cpu 1, msr 84faa041, secondary cpu clock
Feb 12 20:25:25.845562 kernel: kvm-guest: setup async PF for cpu 1
Feb 12 20:25:25.845568 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 12 20:25:25.845575 kernel: #2
Feb 12 20:25:25.845583 kernel: kvm-clock: cpu 2, msr 84faa081, secondary cpu clock
Feb 12 20:25:25.845589 kernel: kvm-guest: setup async PF for cpu 2
Feb 12 20:25:25.845597 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 12 20:25:25.845604 kernel: #3
Feb 12 20:25:25.845611 kernel: kvm-clock: cpu 3, msr 84faa0c1, secondary cpu clock
Feb 12 20:25:25.845618 kernel: kvm-guest: setup async PF for cpu 3
Feb 12 20:25:25.845625 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 12 20:25:25.845632 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 20:25:25.845639 kernel: smpboot: Max logical packages: 1
Feb 12 20:25:25.845646 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 12 20:25:25.845653 kernel: devtmpfs: initialized
Feb 12 20:25:25.845662 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:25:25.845669 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:25:25.845676 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 20:25:25.845683 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:25:25.845690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:25:25.845697 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:25:25.845704 kernel: audit: type=2000 audit(1707769525.692:1): state=initialized audit_enabled=0 res=1
Feb 12 20:25:25.845711 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:25:25.845718 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:25:25.845726 kernel: cpuidle: using governor menu
Feb 12 20:25:25.845733 kernel: ACPI: bus type PCI registered
Feb 12 20:25:25.845740 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:25:25.845747 kernel: dca service started, version 1.12.1
Feb 12 20:25:25.845753 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:25:25.845760 kernel: PCI: Using configuration type 1 for extended access
Feb 12 20:25:25.845768 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:25:25.845775 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:25:25.845782 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:25:25.845790 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:25:25.845797 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:25:25.845804 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:25:25.845811 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:25:25.845818 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:25:25.845825 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:25:25.845832 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:25:25.845839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:25:25.845846 kernel: ACPI: Interpreter enabled
Feb 12 20:25:25.845854 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:25:25.845861 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:25:25.845868 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:25:25.845875 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:25:25.845882 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:25:25.845998 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:25:25.846009 kernel: acpiphp: Slot [3] registered
Feb 12 20:25:25.846016 kernel: acpiphp: Slot [4] registered
Feb 12 20:25:25.846025 kernel: acpiphp: Slot [5] registered
Feb 12 20:25:25.846032 kernel: acpiphp: Slot [6] registered
Feb 12 20:25:25.846039 kernel: acpiphp: Slot [7] registered
Feb 12 20:25:25.846046 kernel: acpiphp: Slot [8] registered
Feb 12 20:25:25.846053 kernel: acpiphp: Slot [9] registered
Feb 12 20:25:25.846060 kernel: acpiphp: Slot [10] registered
Feb 12 20:25:25.846067 kernel: acpiphp: Slot [11] registered
Feb 12 20:25:25.846074 kernel: acpiphp: Slot [12] registered
Feb 12 20:25:25.846081 kernel: acpiphp: Slot [13] registered
Feb 12 20:25:25.846087 kernel: acpiphp: Slot [14] registered
Feb 12 20:25:25.846096 kernel: acpiphp: Slot [15] registered
Feb 12 20:25:25.846105 kernel: acpiphp: Slot [16] registered
Feb 12 20:25:25.846113 kernel: acpiphp: Slot [17] registered
Feb 12 20:25:25.846122 kernel: acpiphp: Slot [18] registered
Feb 12 20:25:25.846130 kernel: acpiphp: Slot [19] registered
Feb 12 20:25:25.846137 kernel: acpiphp: Slot [20] registered
Feb 12 20:25:25.846144 kernel: acpiphp: Slot [21] registered
Feb 12 20:25:25.846151 kernel: acpiphp: Slot [22] registered
Feb 12 20:25:25.846166 kernel: acpiphp: Slot [23] registered
Feb 12 20:25:25.846175 kernel: acpiphp: Slot [24] registered
Feb 12 20:25:25.846182 kernel: acpiphp: Slot [25] registered
Feb 12 20:25:25.846189 kernel: acpiphp: Slot [26] registered
Feb 12 20:25:25.846196 kernel: acpiphp: Slot [27] registered
Feb 12 20:25:25.846202 kernel: acpiphp: Slot [28] registered
Feb 12 20:25:25.846209 kernel: acpiphp: Slot [29] registered
Feb 12 20:25:25.846216 kernel: acpiphp: Slot [30] registered
Feb 12 20:25:25.846223 kernel: acpiphp: Slot [31] registered
Feb 12 20:25:25.846230 kernel: PCI host bridge to bus 0000:00
Feb 12 20:25:25.846310 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:25:25.846376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:25:25.846437 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:25:25.846497 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 12 20:25:25.846571 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:25:25.846632 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:25:25.846712 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:25:25.846791 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:25:25.846871 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:25:25.846940 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 12 20:25:25.847008 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:25:25.847074 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:25:25.847140 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:25:25.847229 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:25:25.847308 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:25:25.847377 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:25:25.847445 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:25:25.847535 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 12 20:25:25.847607 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 12 20:25:25.847676 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 12 20:25:25.847747 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 12 20:25:25.847815 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:25:25.847898 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:25:25.847969 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 20:25:25.848041 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 12 20:25:25.848110 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 12 20:25:25.848198 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:25:25.848272 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:25:25.848341 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 12 20:25:25.848410 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 12 20:25:25.848492 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:25:25.848583 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 12 20:25:25.848653 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 12 20:25:25.848727 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 12 20:25:25.848801 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 12 20:25:25.848810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:25:25.848817 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:25:25.848824 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:25:25.848831 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:25:25.848839 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:25:25.848846 kernel: iommu: Default domain type: Translated
Feb 12 20:25:25.848853 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:25:25.848921 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:25:25.848992 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:25:25.849061 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:25:25.849070 kernel: vgaarb: loaded
Feb 12 20:25:25.849077 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:25:25.849085 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:25:25.849092 kernel: PTP clock support registered
Feb 12 20:25:25.849099 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:25:25.849106 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:25:25.849115 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:25:25.849122 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 12 20:25:25.849129 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 20:25:25.849136 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 20:25:25.849143 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:25:25.849150 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:25:25.849165 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:25:25.849172 kernel: pnp: PnP ACPI init
Feb 12 20:25:25.849248 kernel: pnp 00:02: [dma 2]
Feb 12 20:25:25.849260 kernel: pnp: PnP ACPI: found 6 devices
Feb 12 20:25:25.849268 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:25:25.849275 kernel: NET: Registered PF_INET protocol family
Feb 12 20:25:25.849282 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:25:25.849289 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:25:25.849296 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:25:25.849303 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:25:25.849310 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:25:25.849319 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:25:25.849326 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:25:25.849333 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:25:25.849340 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:25:25.849347 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:25:25.849410 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:25:25.849476 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:25:25.849549 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:25:25.849612 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 12 20:25:25.849676 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:25:25.849745 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:25:25.849814 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:25:25.849882 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:25:25.849891 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:25:25.849898 kernel: Initialise system trusted keyrings
Feb 12 20:25:25.849905 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:25:25.849912 kernel: Key type asymmetric registered
Feb 12 20:25:25.849921 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:25:25.849928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:25:25.849935 kernel: io scheduler mq-deadline registered
Feb 12 20:25:25.849942 kernel: io scheduler kyber registered
Feb 12 20:25:25.849950 kernel: io scheduler bfq registered
Feb 12 20:25:25.849957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:25:25.849964 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:25:25.849971 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 20:25:25.849978 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:25:25.849986 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:25:25.849993 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:25:25.850000 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:25:25.850007 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:25:25.850014 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:25:25.850022 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:25:25.850097 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 12 20:25:25.850171 kernel: rtc_cmos 00:05: registered as rtc0
Feb 12 20:25:25.850238 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:25:25 UTC (1707769525)
Feb 12 20:25:25.850301 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 12 20:25:25.850309 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:25:25.850316 kernel: Segment Routing with IPv6
Feb 12 20:25:25.850324 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:25:25.850331 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:25:25.850338 kernel: Key type dns_resolver registered
Feb 12 20:25:25.850344 kernel: IPI shorthand broadcast: enabled
Feb 12 20:25:25.850351 kernel: sched_clock: Marking stable (339169168, 70981412)->(434264747, -24114167)
Feb 12 20:25:25.850360 kernel: registered taskstats version 1
Feb 12 20:25:25.850367 kernel: Loading compiled-in X.509 certificates
Feb 12 20:25:25.850374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:25:25.850381 kernel: Key type .fscrypt registered
Feb 12 20:25:25.850388 kernel: Key type fscrypt-provisioning registered
Feb 12 20:25:25.850395 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:25:25.850402 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:25:25.850409 kernel: ima: No architecture policies found
Feb 12 20:25:25.850417 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:25:25.850424 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:25:25.850431 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:25:25.850438 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:25:25.850445 kernel: Run /init as init process
Feb 12 20:25:25.850452 kernel: with arguments:
Feb 12 20:25:25.850459 kernel: /init
Feb 12 20:25:25.850466 kernel: with environment:
Feb 12 20:25:25.850481 kernel: HOME=/
Feb 12 20:25:25.850489 kernel: TERM=linux
Feb 12 20:25:25.850497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:25:25.850507 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:25:25.850525 systemd[1]: Detected virtualization kvm.
Feb 12 20:25:25.850533 systemd[1]: Detected architecture x86-64.
Feb 12 20:25:25.850541 systemd[1]: Running in initrd.
Feb 12 20:25:25.850548 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:25:25.850555 systemd[1]: Hostname set to .
Feb 12 20:25:25.850565 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:25:25.850573 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:25:25.850580 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:25:25.850588 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:25:25.850595 systemd[1]: Reached target paths.target.
Feb 12 20:25:25.850603 systemd[1]: Reached target slices.target.
Feb 12 20:25:25.850610 systemd[1]: Reached target swap.target.
Feb 12 20:25:25.850618 systemd[1]: Reached target timers.target.
Feb 12 20:25:25.850627 systemd[1]: Listening on iscsid.socket.
Feb 12 20:25:25.850635 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:25:25.850642 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:25:25.850650 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:25:25.850658 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:25:25.850665 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:25:25.850673 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:25:25.850680 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:25:25.850689 systemd[1]: Reached target sockets.target.
Feb 12 20:25:25.850697 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:25:25.850705 systemd[1]: Finished network-cleanup.service.
Feb 12 20:25:25.850713 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:25:25.850720 systemd[1]: Starting systemd-journald.service...
Feb 12 20:25:25.850728 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:25:25.850738 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:25:25.850746 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:25:25.850753 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:25:25.850761 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:25:25.850769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:25:25.850779 systemd-journald[199]: Journal started
Feb 12 20:25:25.850816 systemd-journald[199]: Runtime Journal (/run/log/journal/b9b5de3a2e3041cd82fa00a096c73581) is 6.0M, max 48.5M, 42.5M free.
Feb 12 20:25:25.855344 systemd-modules-load[200]: Inserted module 'overlay'
Feb 12 20:25:25.869923 systemd[1]: Started systemd-journald.service.
Feb 12 20:25:25.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:25.870273 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:25:25.870713 systemd-resolved[201]: Positive Trust Anchors:
Feb 12 20:25:25.870721 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:25:25.870750 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:25:25.872539 kernel: audit: type=1130 audit(1707769525.869:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:25.872964 systemd-resolved[201]: Defaulting to hostname 'linux'.
Feb 12 20:25:25.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:25.878568 systemd[1]: Started systemd-resolved.service.
Feb 12 20:25:25.881778 kernel: audit: type=1130 audit(1707769525.877:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:25.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:25.881900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:25:25.886427 kernel: audit: type=1130 audit(1707769525.881:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.886440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 20:25:25.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.887101 systemd[1]: Reached target nss-lookup.target. Feb 12 20:25:25.890782 kernel: audit: type=1130 audit(1707769525.886:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.890795 kernel: Bridge firewalling registered Feb 12 20:25:25.890790 systemd-modules-load[200]: Inserted module 'br_netfilter' Feb 12 20:25:25.891871 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 20:25:25.908286 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:25:25.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.910032 systemd[1]: Starting dracut-cmdline.service... Feb 12 20:25:25.912736 kernel: audit: type=1130 audit(1707769525.908:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.914531 kernel: SCSI subsystem initialized Feb 12 20:25:25.922772 dracut-cmdline[216]: dracut-dracut-053 Feb 12 20:25:25.927819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 20:25:25.927845 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:25:25.927858 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:25:25.932913 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:25:25.936604 systemd-modules-load[200]: Inserted module 'dm_multipath' Feb 12 20:25:25.937433 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:25:25.942135 kernel: audit: type=1130 audit(1707769525.937:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.938923 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:25:25.948712 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:25:25.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.953541 kernel: audit: type=1130 audit(1707769525.950:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:25.992539 kernel: Loading iSCSI transport class v2.0-870. 
Feb 12 20:25:26.002533 kernel: iscsi: registered transport (tcp)
Feb 12 20:25:26.021552 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:25:26.021622 kernel: QLogic iSCSI HBA Driver
Feb 12 20:25:26.046613 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:25:26.049745 kernel: audit: type=1130 audit(1707769526.046:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.050474 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:25:26.097552 kernel: raid6: avx2x4 gen() 26364 MB/s
Feb 12 20:25:26.114546 kernel: raid6: avx2x4 xor() 7317 MB/s
Feb 12 20:25:26.131550 kernel: raid6: avx2x2 gen() 23507 MB/s
Feb 12 20:25:26.148547 kernel: raid6: avx2x2 xor() 15361 MB/s
Feb 12 20:25:26.165534 kernel: raid6: avx2x1 gen() 19145 MB/s
Feb 12 20:25:26.182548 kernel: raid6: avx2x1 xor() 13177 MB/s
Feb 12 20:25:26.199568 kernel: raid6: sse2x4 gen() 14685 MB/s
Feb 12 20:25:26.216528 kernel: raid6: sse2x4 xor() 7289 MB/s
Feb 12 20:25:26.233527 kernel: raid6: sse2x2 gen() 16500 MB/s
Feb 12 20:25:26.250530 kernel: raid6: sse2x2 xor() 9867 MB/s
Feb 12 20:25:26.267527 kernel: raid6: sse2x1 gen() 12212 MB/s
Feb 12 20:25:26.284572 kernel: raid6: sse2x1 xor() 7680 MB/s
Feb 12 20:25:26.284593 kernel: raid6: using algorithm avx2x4 gen() 26364 MB/s
Feb 12 20:25:26.284612 kernel: raid6: .... xor() 7317 MB/s, rmw enabled
Feb 12 20:25:26.285528 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 20:25:26.296530 kernel: xor: automatically using best checksumming function avx
Feb 12 20:25:26.384545 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 20:25:26.391670 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:25:26.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.394000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:25:26.394000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:25:26.395314 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:25:26.396342 kernel: audit: type=1130 audit(1707769526.392:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.406239 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb 12 20:25:26.410089 systemd[1]: Started systemd-udevd.service.
Feb 12 20:25:26.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.412996 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:25:26.422059 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Feb 12 20:25:26.445028 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:25:26.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.447155 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:25:26.484010 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:25:26.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:26.510666 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 20:25:26.521531 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:25:26.529534 kernel: libata version 3.00 loaded.
Feb 12 20:25:26.532748 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:25:26.532777 kernel: GPT:9289727 != 19775487
Feb 12 20:25:26.532789 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:25:26.533836 kernel: GPT:9289727 != 19775487
Feb 12 20:25:26.533854 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:25:26.535531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:25:26.538531 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 12 20:25:26.546534 kernel: scsi host0: ata_piix
Feb 12 20:25:26.546662 kernel: scsi host1: ata_piix
Feb 12 20:25:26.546857 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 12 20:25:26.546873 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 12 20:25:26.555527 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 20:25:26.555554 kernel: AES CTR mode by8 optimization enabled
Feb 12 20:25:26.563368 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Feb 12 20:25:26.563343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:25:26.579930 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:25:26.585011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:25:26.590753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:25:26.596871 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:25:26.598420 systemd[1]: Starting disk-uuid.service...
Feb 12 20:25:26.629400 disk-uuid[514]: Primary Header is updated.
Feb 12 20:25:26.629400 disk-uuid[514]: Secondary Entries is updated.
Feb 12 20:25:26.629400 disk-uuid[514]: Secondary Header is updated.
Feb 12 20:25:26.632447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:25:26.634537 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:25:26.703811 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 12 20:25:26.705537 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 12 20:25:26.739539 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 12 20:25:26.739782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 20:25:26.756531 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 12 20:25:27.635556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:25:27.636231 disk-uuid[515]: The operation has completed successfully.
Feb 12 20:25:27.658242 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:25:27.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.658353 systemd[1]: Finished disk-uuid.service.
Feb 12 20:25:27.672723 systemd[1]: Starting verity-setup.service...
Feb 12 20:25:27.686554 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 12 20:25:27.706535 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:25:27.708064 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:25:27.710572 systemd[1]: Finished verity-setup.service.
Feb 12 20:25:27.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.780547 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:25:27.781027 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:25:27.781725 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:25:27.782336 systemd[1]: Starting ignition-setup.service...
Feb 12 20:25:27.784225 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:25:27.791125 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:25:27.791201 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:25:27.791216 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:25:27.799503 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:25:27.806973 systemd[1]: Finished ignition-setup.service.
Feb 12 20:25:27.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.809025 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:25:27.848155 ignition[619]: Ignition 2.14.0
Feb 12 20:25:27.848552 ignition[619]: Stage: fetch-offline
Feb 12 20:25:27.848601 ignition[619]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:25:27.848611 ignition[619]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:25:27.848891 ignition[619]: parsed url from cmdline: ""
Feb 12 20:25:27.848895 ignition[619]: no config URL provided
Feb 12 20:25:27.848901 ignition[619]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:25:27.848910 ignition[619]: no config at "/usr/lib/ignition/user.ign"
Feb 12 20:25:27.848929 ignition[619]: op(1): [started] loading QEMU firmware config module
Feb 12 20:25:27.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.867000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:25:27.863590 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:25:27.848934 ignition[619]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 20:25:27.868087 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:25:27.852996 ignition[619]: op(1): [finished] loading QEMU firmware config module
Feb 12 20:25:27.916194 ignition[619]: parsing config with SHA512: aa8e3a7ff86de3de980871f31148fbb36027aab6d0df4520bd47dd9badd9193c7fd126537d12d7479850ed161dbeb2f4c8cab42292cfa1146b7f1eb0e52f812d
Feb 12 20:25:27.933277 systemd-networkd[707]: lo: Link UP
Feb 12 20:25:27.933287 systemd-networkd[707]: lo: Gained carrier
Feb 12 20:25:27.933678 systemd-networkd[707]: Enumeration completed
Feb 12 20:25:27.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.933852 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:25:27.934873 systemd[1]: Started systemd-networkd.service.
Feb 12 20:25:27.935608 systemd-networkd[707]: eth0: Link UP
Feb 12 20:25:27.935611 systemd-networkd[707]: eth0: Gained carrier
Feb 12 20:25:27.936281 systemd[1]: Reached target network.target.
Feb 12 20:25:27.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.938070 systemd[1]: Starting iscsiuio.service...
Feb 12 20:25:27.944767 systemd[1]: Started iscsiuio.service.
Feb 12 20:25:27.946055 systemd[1]: Starting iscsid.service...
Feb 12 20:25:27.949756 iscsid[712]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:25:27.949756 iscsid[712]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 20:25:27.949756 iscsid[712]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:25:27.949756 iscsid[712]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:25:27.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.956619 iscsid[712]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:25:27.956619 iscsid[712]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:25:27.951063 systemd[1]: Started iscsid.service.
Feb 12 20:25:27.958471 ignition[619]: fetch-offline: fetch-offline passed
Feb 12 20:25:27.954576 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 20:25:27.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.958581 ignition[619]: Ignition finished successfully
Feb 12 20:25:27.955329 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:25:27.957225 unknown[619]: fetched base config from "system"
Feb 12 20:25:27.957235 unknown[619]: fetched user config from "qemu"
Feb 12 20:25:27.959845 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:25:27.961224 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 20:25:27.961827 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:25:27.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.964790 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:25:27.966306 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:25:27.967353 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:25:27.970565 ignition[717]: Ignition 2.14.0
Feb 12 20:25:27.967955 systemd[1]: Reached target remote-fs.target.
Feb 12 20:25:27.970571 ignition[717]: Stage: kargs
Feb 12 20:25:27.969112 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:25:27.970652 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:25:27.970660 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:25:27.971865 ignition[717]: kargs: kargs passed
Feb 12 20:25:27.971901 ignition[717]: Ignition finished successfully
Feb 12 20:25:27.975432 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:25:27.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.976832 systemd[1]: Starting ignition-disks.service...
Feb 12 20:25:27.982623 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:25:27.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.983317 ignition[729]: Ignition 2.14.0
Feb 12 20:25:27.983323 ignition[729]: Stage: disks
Feb 12 20:25:27.983406 ignition[729]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:25:27.983414 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:25:27.984499 ignition[729]: disks: disks passed
Feb 12 20:25:27.984542 ignition[729]: Ignition finished successfully
Feb 12 20:25:27.987555 systemd[1]: Finished ignition-disks.service.
Feb 12 20:25:27.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:27.988807 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:25:27.988878 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:25:27.989962 systemd[1]: Reached target local-fs.target.
Feb 12 20:25:27.990180 systemd[1]: Reached target sysinit.target.
Feb 12 20:25:27.990384 systemd[1]: Reached target basic.target.
Feb 12 20:25:27.991625 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:25:28.000203 systemd-fsck[741]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 20:25:28.004846 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:25:28.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:28.006734 systemd[1]: Mounting sysroot.mount...
Feb 12 20:25:28.012529 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:25:28.012789 systemd[1]: Mounted sysroot.mount.
Feb 12 20:25:28.012909 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:25:28.014010 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:25:28.014966 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:25:28.015004 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:25:28.015026 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:25:28.016621 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:25:28.018383 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:25:28.021812 initrd-setup-root[751]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:25:28.024484 initrd-setup-root[759]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:25:28.027160 initrd-setup-root[767]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:25:28.029499 initrd-setup-root[775]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:25:28.055375 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:25:28.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:28.056214 systemd[1]: Starting ignition-mount.service...
Feb 12 20:25:28.057539 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:25:28.061047 bash[792]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 20:25:28.068540 ignition[793]: INFO : Ignition 2.14.0
Feb 12 20:25:28.068540 ignition[793]: INFO : Stage: mount
Feb 12 20:25:28.069895 ignition[793]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:25:28.069895 ignition[793]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:25:28.069895 ignition[793]: INFO : mount: mount passed
Feb 12 20:25:28.069895 ignition[793]: INFO : Ignition finished successfully
Feb 12 20:25:28.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:28.070543 systemd[1]: Finished ignition-mount.service.
Feb 12 20:25:28.074795 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:25:28.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:28.717868 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:25:28.723527 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Feb 12 20:25:28.725586 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:25:28.725607 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:25:28.725617 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:25:28.728341 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:25:28.730170 systemd[1]: Starting ignition-files.service...
Feb 12 20:25:28.743269 ignition[822]: INFO : Ignition 2.14.0
Feb 12 20:25:28.743269 ignition[822]: INFO : Stage: files
Feb 12 20:25:28.744499 ignition[822]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:25:28.744499 ignition[822]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:25:28.747362 ignition[822]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 20:25:28.748303 ignition[822]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 20:25:28.748303 ignition[822]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:25:28.751108 ignition[822]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:25:28.752116 ignition[822]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 20:25:28.753065 ignition[822]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:25:28.752246 unknown[822]: wrote ssh authorized keys file for user: core
Feb 12 20:25:28.755011 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 20:25:28.755011 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 20:25:28.784559 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 20:25:28.853956 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 20:25:28.855458 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 20:25:28.855458 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 20:25:29.133691 systemd-networkd[707]: eth0: Gained IPv6LL
Feb 12 20:25:29.212421 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 20:25:29.320007 ignition[822]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 20:25:29.322170 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 20:25:29.322170 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 20:25:29.322170 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 20:25:29.608563 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 20:25:29.676992 ignition[822]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 20:25:29.679695 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 20:25:29.679695 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:25:29.679695 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:25:29.679695 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:25:29.679695 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 20:25:29.746620 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 20:25:29.976382 ignition[822]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 20:25:29.976382 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:25:29.979840 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:25:29.979840 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 20:25:30.122505 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 20:25:30.340313 ignition[822]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 20:25:30.340313 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:25:30.344165 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:25:30.344165 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 20:25:30.387740 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 20:25:31.030264 ignition[822]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 20:25:31.040424 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:25:31.040424 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:25:31.040424 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:25:31.040424 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:25:31.040424 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 20:25:31.318473 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 12 20:25:31.411980 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:25:31.411980 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(13): [started] processing unit "prepare-critools.service"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:25:31.415360 ignition[822]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:25:31.415360
ignition[822]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(17): [started] processing unit "coreos-metadata.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(17): op(18): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(17): op(18): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(17): [finished] processing unit "coreos-metadata.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(19): [started] processing unit "containerd.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(19): op(1a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(19): op(1a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(19): [finished] processing unit "containerd.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:25:31.434713 
ignition[822]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(1d): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 20:25:31.434713 ignition[822]: INFO : files: op(1d): op(1e): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:25:31.489257 ignition[822]: INFO : files: op(1d): op(1e): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:25:31.490540 ignition[822]: INFO : files: op(1d): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 20:25:31.490540 ignition[822]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:25:31.490540 ignition[822]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:25:31.490540 ignition[822]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:25:31.490540 ignition[822]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:25:31.490540 ignition[822]: INFO : files: files passed Feb 12 20:25:31.490540 ignition[822]: INFO : Ignition finished successfully Feb 12 20:25:31.524018 systemd[1]: Finished ignition-files.service. Feb 12 20:25:31.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:31.525490 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
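The Ignition records above each report "file matches expected sum of:" followed by a sha512 digest before a download is considered finished. The same verification can be reproduced offline; a minimal sketch (the constant below is the kubeadm digest quoted in the log, the function names are our own):

```python
import hashlib

# sha512 digest that the log reports for /sysroot/opt/bin/kubeadm (op(7)).
EXPECTED_KUBEADM_SHA512 = (
    "1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
    "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660"
)

def sha512_of(path, chunk=1 << 20):
    """Hash a file in 1 MiB chunks so large binaries (kubelet) fit in memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def verify(path, expected):
    """Return True iff the file's sha512 matches the expected hex digest."""
    return sha512_of(path) == expected
```

For example, `verify("/opt/bin/kubeadm", EXPECTED_KUBEADM_SHA512)` checks an already-downloaded copy of the kubeadm binary fetched above.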
Feb 12 20:25:31.529057 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 20:25:31.529078 kernel: audit: type=1130 audit(1707769531.524:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.527898 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 20:25:31.528439 systemd[1]: Starting ignition-quench.service...
Feb 12 20:25:31.542607 kernel: audit: type=1130 audit(1707769531.530:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.542634 kernel: audit: type=1131 audit(1707769531.530:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.530339 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 20:25:31.530405 systemd[1]: Finished ignition-quench.service.
Feb 12 20:25:31.544295 initrd-setup-root-after-ignition[848]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 20:25:31.546982 initrd-setup-root-after-ignition[850]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 20:25:31.547649 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 20:25:31.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.549318 systemd[1]: Reached target ignition-complete.target.
Feb 12 20:25:31.553675 kernel: audit: type=1130 audit(1707769531.549:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.553092 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 20:25:31.565271 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 20:25:31.565373 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 20:25:31.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.566801 systemd[1]: Reached target initrd-fs.target.
Feb 12 20:25:31.572344 kernel: audit: type=1130 audit(1707769531.566:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.572368 kernel: audit: type=1131 audit(1707769531.566:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.571408 systemd[1]: Reached target initrd.target.
Feb 12 20:25:31.572949 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 20:25:31.573907 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 20:25:31.583267 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 20:25:31.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.584564 systemd[1]: Starting initrd-cleanup.service...
Feb 12 20:25:31.587594 kernel: audit: type=1130 audit(1707769531.583:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.592383 systemd[1]: Stopped target network.target.
Feb 12 20:25:31.593023 systemd[1]: Stopped target nss-lookup.target.
Feb 12 20:25:31.594066 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 20:25:31.595315 systemd[1]: Stopped target timers.target.
Feb 12 20:25:31.596382 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 20:25:31.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.596477 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 20:25:31.597505 systemd[1]: Stopped target initrd.target.
Feb 12 20:25:31.601852 kernel: audit: type=1131 audit(1707769531.596:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.600070 systemd[1]: Stopped target basic.target.
Feb 12 20:25:31.601414 systemd[1]: Stopped target ignition-complete.target.
Feb 12 20:25:31.602643 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 20:25:31.603207 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 20:25:31.603483 systemd[1]: Stopped target remote-fs.target.
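The kernel audit records interleaved above carry their own timestamp, e.g. `audit(1707769531.583:40)`: epoch seconds, milliseconds, and a serial number. A small sketch (function and regex names are our own) extracts that stamp and shows it agrees with journald's wall-clock prefix on the same lines:

```python
import re
from datetime import datetime, timezone

# Matches the "audit(<epoch>.<msec>:<serial>)" stamp in kernel audit records.
AUDIT_STAMP = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

def audit_time(line):
    """Return (UTC datetime, serial) for an audit record line, or None."""
    m = AUDIT_STAMP.search(line)
    if not m:
        return None
    secs, _msecs, serial = m.groups()
    return datetime.fromtimestamp(int(secs), tz=timezone.utc), int(serial)

line = 'kernel: audit: type=1130 audit(1707769531.524:34): pid=1 uid=0'
when, serial = audit_time(line)
# when is 2024-02-12 20:25:31 UTC, matching the journald "Feb 12 20:25:31" prefix
```

The serial (`:34`, `:35`, ...) is what lets the delayed `kernel: audit:` printk lines be paired with the earlier `audit[1]:` records they duplicate.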
Feb 12 20:25:31.603900 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 20:25:31.604195 systemd[1]: Stopped target sysinit.target.
Feb 12 20:25:31.604465 systemd[1]: Stopped target local-fs.target.
Feb 12 20:25:31.604870 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 20:25:31.609950 systemd[1]: Stopped target swap.target.
Feb 12 20:25:31.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.610522 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 20:25:31.615537 kernel: audit: type=1131 audit(1707769531.610:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.610661 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 20:25:31.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.611051 systemd[1]: Stopped target cryptsetup.target.
Feb 12 20:25:31.620039 kernel: audit: type=1131 audit(1707769531.615:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.614429 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 20:25:31.614566 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 20:25:31.616259 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 20:25:31.616390 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 20:25:31.619120 systemd[1]: Stopped target paths.target.
Feb 12 20:25:31.620633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 20:25:31.624556 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 20:25:31.625754 systemd[1]: Stopped target slices.target.
Feb 12 20:25:31.625861 systemd[1]: Stopped target sockets.target.
Feb 12 20:25:31.627297 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 20:25:31.627368 systemd[1]: Closed iscsid.socket.
Feb 12 20:25:31.628225 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 20:25:31.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.628289 systemd[1]: Closed iscsiuio.socket.
Feb 12 20:25:31.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.629251 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 20:25:31.629341 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 20:25:31.630320 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 20:25:31.630399 systemd[1]: Stopped ignition-files.service.
Feb 12 20:25:31.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.632707 systemd[1]: Stopping ignition-mount.service...
Feb 12 20:25:31.633320 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 20:25:31.633418 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 20:25:31.635035 systemd[1]: Stopping sysroot-boot.service...
Feb 12 20:25:31.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.635939 systemd[1]: Stopping systemd-networkd.service...
Feb 12 20:25:31.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.637056 systemd[1]: Stopping systemd-resolved.service...
Feb 12 20:25:31.637985 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 20:25:31.638109 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 20:25:31.639182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 20:25:31.639268 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 20:25:31.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.642566 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 20:25:31.642641 systemd[1]: Finished initrd-cleanup.service.
Feb 12 20:25:31.646466 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 20:25:31.646555 systemd[1]: Stopped systemd-resolved.service.
Feb 12 20:25:31.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.650574 systemd-networkd[707]: eth0: DHCPv6 lease lost
Feb 12 20:25:31.652082 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 20:25:31.652000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 20:25:31.653032 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 20:25:31.653125 systemd[1]: Stopped systemd-networkd.service.
Feb 12 20:25:31.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.654080 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 20:25:31.654105 systemd[1]: Closed systemd-networkd.socket.
Feb 12 20:25:31.657000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 20:25:31.656707 systemd[1]: Stopping network-cleanup.service...
Feb 12 20:25:31.658555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 20:25:31.659251 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 20:25:31.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.660722 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:25:31.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.663833 ignition[863]: INFO : Ignition 2.14.0
Feb 12 20:25:31.663833 ignition[863]: INFO : Stage: umount
Feb 12 20:25:31.663833 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 20:25:31.663833 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:25:31.663833 ignition[863]: INFO : umount: umount passed
Feb 12 20:25:31.663833 ignition[863]: INFO : Ignition finished successfully
Feb 12 20:25:31.660758 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:25:31.662120 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 20:25:31.662151 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 20:25:31.669732 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 20:25:31.670985 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 20:25:31.671695 systemd[1]: Stopped ignition-mount.service.
Feb 12 20:25:31.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.672951 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 20:25:31.673636 systemd[1]: Stopped sysroot-boot.service.
Feb 12 20:25:31.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.675279 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 20:25:31.675322 systemd[1]: Stopped ignition-disks.service.
Feb 12 20:25:31.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.677094 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 20:25:31.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.677126 systemd[1]: Stopped ignition-kargs.service.
Feb 12 20:25:31.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.678358 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 20:25:31.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.678387 systemd[1]: Stopped ignition-setup.service.
Feb 12 20:25:31.679471 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 20:25:31.679499 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 20:25:31.680814 systemd[1]: Stopping systemd-udevd.service...
Feb 12 20:25:31.684770 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 20:25:31.685435 systemd[1]: Stopped network-cleanup.service.
Feb 12 20:25:31.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.690341 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 20:25:31.691086 systemd[1]: Stopped systemd-udevd.service.
Feb 12 20:25:31.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.692464 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 20:25:31.692498 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 20:25:31.694317 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 20:25:31.694345 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 20:25:31.695650 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 20:25:31.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.695680 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 20:25:31.696891 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 20:25:31.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.697540 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 20:25:31.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.699267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 20:25:31.699301 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 20:25:31.702123 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 20:25:31.703401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 20:25:31.703441 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 20:25:31.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.707105 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 20:25:31.707891 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 20:25:31.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:25:31.709211 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 20:25:31.711049 systemd[1]: Starting initrd-switch-root.service...
Feb 12 20:25:31.716644 systemd[1]: Switching root.
Feb 12 20:25:31.717000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:25:31.717000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:25:31.717000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:25:31.719000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:25:31.719000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:25:31.735531 systemd-journald[199]: Received SIGTERM from PID 1 (n/a).
Feb 12 20:25:31.735585 iscsid[712]: iscsid shutting down.
Feb 12 20:25:31.736146 systemd-journald[199]: Journal stopped
Feb 12 20:25:36.171210 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 20:25:36.171256 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 20:25:36.171266 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 20:25:36.171276 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 20:25:36.171290 kernel: SELinux: policy capability open_perms=1
Feb 12 20:25:36.171299 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 20:25:36.171308 kernel: SELinux: policy capability always_check_network=0
Feb 12 20:25:36.171317 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 20:25:36.171329 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 20:25:36.171341 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 20:25:36.171350 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 20:25:36.171361 systemd[1]: Successfully loaded SELinux policy in 35.691ms.
Feb 12 20:25:36.171378 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.344ms.
Feb 12 20:25:36.171389 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:25:36.171401 systemd[1]: Detected virtualization kvm.
Feb 12 20:25:36.171411 systemd[1]: Detected architecture x86-64.
Feb 12 20:25:36.171423 systemd[1]: Detected first boot.
Feb 12 20:25:36.171433 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:25:36.171443 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 20:25:36.171914 systemd[1]: Populated /etc with preset unit settings.
Feb 12 20:25:36.171932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:25:36.171945 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:25:36.171958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:25:36.171973 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:25:36.171983 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 20:25:36.171993 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 20:25:36.172003 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 20:25:36.172014 systemd[1]: Created slice system-getty.slice.
Feb 12 20:25:36.172024 systemd[1]: Created slice system-modprobe.slice.
Feb 12 20:25:36.172034 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:25:36.172043 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:25:36.172055 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:25:36.172066 systemd[1]: Created slice user.slice.
Feb 12 20:25:36.172075 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:25:36.172085 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:25:36.172095 systemd[1]: Set up automount boot.automount.
Feb 12 20:25:36.172105 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:25:36.172114 systemd[1]: Reached target integritysetup.target.
Feb 12 20:25:36.172124 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:25:36.172134 systemd[1]: Reached target remote-fs.target.
Feb 12 20:25:36.172144 systemd[1]: Reached target slices.target.
Feb 12 20:25:36.172155 systemd[1]: Reached target swap.target.
Feb 12 20:25:36.172165 systemd[1]: Reached target torcx.target.
Feb 12 20:25:36.172174 systemd[1]: Reached target veritysetup.target.
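The warnings above about locksmithd.service:8-9 flag directives removed in cgroup v2. They could be addressed with a drop-in; a hypothetical sketch (the drop-in path and values are assumptions, not part of this boot; the old default CPUShares=1024 corresponds to CPUWeight=100):

```ini
# Hypothetical drop-in: /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
# Migrates the deprecated directives flagged at locksmithd.service:8-9.
[Service]
# Clear the legacy settings inherited from the shipped unit...
CPUShares=
MemoryLimit=
# ...and set their cgroup-v2 successors.
CPUWeight=100
MemoryMax=infinity
```

After writing such a drop-in, `systemctl daemon-reload` would make systemd re-read the unit and drop the warning.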
Feb 12 20:25:36.172185 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:25:36.172194 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:25:36.172204 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:25:36.172214 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:25:36.172223 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:25:36.172233 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:25:36.172244 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:25:36.172254 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:25:36.172265 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:25:36.172274 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:25:36.172285 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:25:36.172297 systemd[1]: Mounting media.mount... Feb 12 20:25:36.172306 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:25:36.172317 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:25:36.172326 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:25:36.172337 systemd[1]: Mounting tmp.mount... Feb 12 20:25:36.172347 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:25:36.172357 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:25:36.172367 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:25:36.172377 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:25:36.172386 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:25:36.172396 systemd[1]: Starting modprobe@drm.service... Feb 12 20:25:36.172406 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:25:36.172416 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:25:36.172426 systemd[1]: Starting modprobe@loop.service... 
Feb 12 20:25:36.172436 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:25:36.172447 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 20:25:36.172456 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 20:25:36.172466 systemd[1]: Starting systemd-journald.service... Feb 12 20:25:36.172476 kernel: loop: module loaded Feb 12 20:25:36.172485 kernel: fuse: init (API version 7.34) Feb 12 20:25:36.172495 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:25:36.172505 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:25:36.172527 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:25:36.172537 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:25:36.172548 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:25:36.172558 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:25:36.172570 systemd-journald[1002]: Journal started Feb 12 20:25:36.172609 systemd-journald[1002]: Runtime Journal (/run/log/journal/b9b5de3a2e3041cd82fa00a096c73581) is 6.0M, max 48.5M, 42.5M free. 
Feb 12 20:25:36.097000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:25:36.097000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 20:25:36.169000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:25:36.169000 audit[1002]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc1132b730 a2=4000 a3=7ffc1132b7cc items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:36.169000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:25:36.177753 systemd[1]: Started systemd-journald.service. Feb 12 20:25:36.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.178860 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:25:36.179744 systemd[1]: Mounted media.mount. Feb 12 20:25:36.180584 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:25:36.181435 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:25:36.182392 systemd[1]: Mounted tmp.mount. Feb 12 20:25:36.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.183547 systemd[1]: Finished kmod-static-nodes.service. 
Feb 12 20:25:36.184661 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:25:36.184875 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:25:36.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.186248 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:25:36.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.187259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:25:36.187435 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:25:36.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.188322 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:25:36.188483 systemd[1]: Finished modprobe@drm.service. Feb 12 20:25:36.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:36.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.189414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:25:36.189584 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:25:36.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.190383 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:25:36.190605 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:25:36.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.191414 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:25:36.191580 systemd[1]: Finished modprobe@loop.service. Feb 12 20:25:36.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:36.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.192736 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:25:36.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.193900 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:25:36.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.195157 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:25:36.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.196358 systemd[1]: Reached target network-pre.target. Feb 12 20:25:36.198139 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:25:36.200361 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:25:36.201127 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:25:36.203466 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:25:36.205988 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:25:36.207027 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:25:36.208288 systemd[1]: Starting systemd-random-seed.service... 
Feb 12 20:25:36.209466 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:25:36.210732 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:25:36.212735 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:25:36.213657 systemd-journald[1002]: Time spent on flushing to /var/log/journal/b9b5de3a2e3041cd82fa00a096c73581 is 14.032ms for 1061 entries. Feb 12 20:25:36.213657 systemd-journald[1002]: System Journal (/var/log/journal/b9b5de3a2e3041cd82fa00a096c73581) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:25:36.237611 systemd-journald[1002]: Received client request to flush runtime journal. Feb 12 20:25:36.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.217094 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:25:36.218243 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:25:36.224940 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:25:36.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.225986 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:25:36.231149 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 12 20:25:36.232943 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:25:36.233983 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:25:36.238446 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:25:36.242931 udevadm[1051]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 20:25:36.247678 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:25:36.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.249674 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:25:36.265942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:25:36.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.066667 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:25:37.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.068127 kernel: kauditd_printk_skb: 74 callbacks suppressed Feb 12 20:25:37.068169 kernel: audit: type=1130 audit(1707769537.066:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.068546 systemd[1]: Starting systemd-udevd.service... Feb 12 20:25:37.118120 systemd-udevd[1060]: Using default interface naming scheme 'v252'. Feb 12 20:25:37.129750 systemd[1]: Started systemd-udevd.service. 
Feb 12 20:25:37.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.144303 systemd[1]: Found device dev-ttyS0.device. Feb 12 20:25:37.144529 kernel: audit: type=1130 audit(1707769537.141:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.168747 systemd[1]: Starting systemd-networkd.service... Feb 12 20:25:37.195542 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:25:37.205037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:25:37.201000 audit[1069]: AVC avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:25:37.210524 kernel: audit: type=1400 audit(1707769537.201:111): avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:25:37.220968 kernel: audit: type=1300 audit(1707769537.201:111): arch=c000003e syscall=175 success=yes exit=0 a0=56229a2de840 a1=32194 a2=7f7f4be38bc5 a3=5 items=108 ppid=1060 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:37.221003 kernel: audit: type=1307 audit(1707769537.201:111): cwd="/" Feb 12 20:25:37.221017 kernel: audit: type=1302 audit(1707769537.201:111): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.221030 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:25:37.221043 kernel: audit: type=1302 audit(1707769537.201:111): item=1 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.221055 kernel: audit: type=1302 audit(1707769537.201:111): item=2 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit[1069]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56229a2de840 a1=32194 a2=7f7f4be38bc5 a3=5 items=108 ppid=1060 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:37.201000 audit: CWD cwd="/" Feb 12 20:25:37.201000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=1 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=2 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=3 name=(null) inode=11824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.227341 kernel: audit: type=1302 audit(1707769537.201:111): item=3 name=(null) 
inode=11824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.227375 kernel: audit: type=1302 audit(1707769537.201:111): item=4 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=4 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=5 name=(null) inode=11825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=6 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=7 name=(null) inode=11826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=8 name=(null) inode=11826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=9 name=(null) inode=11827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=10 name=(null) inode=11826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=11 name=(null) 
inode=11828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=12 name=(null) inode=11826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=13 name=(null) inode=11829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=14 name=(null) inode=11826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=15 name=(null) inode=11830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=16 name=(null) inode=11826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=17 name=(null) inode=11831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=18 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=19 name=(null) inode=11832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=20 name=(null) inode=11832 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=21 name=(null) inode=11833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=22 name=(null) inode=11832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=23 name=(null) inode=11834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=24 name=(null) inode=11832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=25 name=(null) inode=11835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=26 name=(null) inode=11832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=27 name=(null) inode=11836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=28 name=(null) inode=11832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=29 name=(null) inode=11837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=30 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=31 name=(null) inode=11838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=32 name=(null) inode=11838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=33 name=(null) inode=11839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=34 name=(null) inode=11838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=35 name=(null) inode=11840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=36 name=(null) inode=11838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=37 name=(null) inode=11841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=38 name=(null) inode=11838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=39 name=(null) inode=11842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=40 name=(null) inode=11838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=41 name=(null) inode=11843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=42 name=(null) inode=11823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=43 name=(null) inode=11844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=44 name=(null) inode=11844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=45 name=(null) inode=11845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=46 name=(null) inode=11844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=47 name=(null) inode=11846 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=48 name=(null) inode=11844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=49 name=(null) inode=11847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=50 name=(null) inode=11844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=51 name=(null) inode=11848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=52 name=(null) inode=11844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=53 name=(null) inode=11849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=55 name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=56 name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:25:37.201000 audit: PATH item=57 name=(null) inode=11851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=58 name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=59 name=(null) inode=11852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=60 name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=61 name=(null) inode=11853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=62 name=(null) inode=11853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=63 name=(null) inode=11854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=64 name=(null) inode=11853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=65 name=(null) inode=11855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.233225 systemd[1]: Starting 
systemd-userdbd.service... Feb 12 20:25:37.201000 audit: PATH item=66 name=(null) inode=11853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=67 name=(null) inode=11856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=68 name=(null) inode=11853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=69 name=(null) inode=11857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=70 name=(null) inode=11853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=71 name=(null) inode=11858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=72 name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=73 name=(null) inode=11859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=74 name=(null) inode=11859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:25:37.201000 audit: PATH item=75 name=(null) inode=11860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=76 name=(null) inode=11859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=77 name=(null) inode=11861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=78 name=(null) inode=11859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=79 name=(null) inode=11862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=80 name=(null) inode=11859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=81 name=(null) inode=11863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=82 name=(null) inode=11859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=83 name=(null) inode=11864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=84 
name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=85 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=86 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=87 name=(null) inode=11866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=88 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=89 name=(null) inode=11867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=90 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=91 name=(null) inode=11868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=92 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=93 name=(null) inode=11869 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=94 name=(null) inode=11865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=95 name=(null) inode=11870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=96 name=(null) inode=11850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=97 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=98 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=99 name=(null) inode=11872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=100 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=101 name=(null) inode=11873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=102 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=103 name=(null) inode=11874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=104 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=105 name=(null) inode=11875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=106 name=(null) inode=11871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PATH item=107 name=(null) inode=11876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:37.201000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:25:37.263535 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:25:37.264533 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:25:37.265801 systemd[1]: Started systemd-userdbd.service. Feb 12 20:25:37.281532 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:25:37.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:37.332628 kernel: kvm: Nested Virtualization enabled Feb 12 20:25:37.332714 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:25:37.333637 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:25:37.333672 kernel: SVM: Virtual GIF supported Feb 12 20:25:37.349541 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:25:37.351010 systemd-networkd[1084]: lo: Link UP Feb 12 20:25:37.351020 systemd-networkd[1084]: lo: Gained carrier Feb 12 20:25:37.351373 systemd-networkd[1084]: Enumeration completed Feb 12 20:25:37.351462 systemd-networkd[1084]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:25:37.351553 systemd[1]: Started systemd-networkd.service. Feb 12 20:25:37.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.361956 systemd-networkd[1084]: eth0: Link UP Feb 12 20:25:37.361964 systemd-networkd[1084]: eth0: Gained carrier Feb 12 20:25:37.378951 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:25:37.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.380834 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:25:37.381865 systemd-networkd[1084]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:25:37.389400 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:25:37.414773 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:25:37.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:37.415821 systemd[1]: Reached target cryptsetup.target. Feb 12 20:25:37.417711 systemd[1]: Starting lvm2-activation.service... Feb 12 20:25:37.422244 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:25:37.478574 systemd[1]: Finished lvm2-activation.service. Feb 12 20:25:37.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.479533 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:25:37.480246 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:25:37.480266 systemd[1]: Reached target local-fs.target. Feb 12 20:25:37.480972 systemd[1]: Reached target machines.target. Feb 12 20:25:37.483432 systemd[1]: Starting ldconfig.service... Feb 12 20:25:37.484359 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:25:37.484409 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:37.485561 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:25:37.487399 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:25:37.489376 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:25:37.490102 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:25:37.490139 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:25:37.491266 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 12 20:25:37.492190 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1102 (bootctl) Feb 12 20:25:37.493399 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:25:37.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.499854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:25:37.507761 systemd-tmpfiles[1105]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:25:37.508479 systemd-tmpfiles[1105]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:25:37.510206 systemd-tmpfiles[1105]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:25:37.551872 systemd-fsck[1111]: fsck.fat 4.2 (2021-01-31) Feb 12 20:25:37.551872 systemd-fsck[1111]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:25:37.553980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:25:37.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:37.556557 systemd[1]: Mounting boot.mount... Feb 12 20:25:37.590342 systemd[1]: Mounted boot.mount. Feb 12 20:25:37.640735 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:25:37.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:38.093217 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 12 20:25:38.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:38.096038 systemd[1]: Starting audit-rules.service... Feb 12 20:25:38.112435 ldconfig[1101]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:25:38.131000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:25:38.131000 audit[1137]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9d03c780 a2=420 a3=0 items=0 ppid=1119 pid=1137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.131000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:25:38.132677 augenrules[1137]: No rules Feb 12 20:25:38.135565 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:25:38.138001 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:25:38.139835 systemd[1]: Starting systemd-resolved.service... Feb 12 20:25:38.141724 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:25:38.143581 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:25:38.147372 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:25:38.148067 systemd[1]: Finished ldconfig.service. Feb 12 20:25:38.149055 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:25:38.150676 systemd[1]: Finished audit-rules.service. Feb 12 20:25:38.151650 systemd[1]: Finished clean-ca-certificates.service. 
Feb 12 20:25:38.156143 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:25:38.157478 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:25:38.164311 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:25:38.166542 systemd[1]: Starting systemd-update-done.service... Feb 12 20:25:38.205664 systemd[1]: Finished systemd-update-done.service. Feb 12 20:25:38.223920 systemd-resolved[1144]: Positive Trust Anchors: Feb 12 20:25:38.223939 systemd-resolved[1144]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:25:38.223977 systemd-resolved[1144]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:25:38.224597 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:25:38.225537 systemd[1]: Reached target time-set.target. Feb 12 20:25:38.773314 systemd-timesyncd[1145]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:25:38.773354 systemd-timesyncd[1145]: Initial clock synchronization to Mon 2024-02-12 20:25:38.773242 UTC. Feb 12 20:25:38.782146 systemd-resolved[1144]: Defaulting to hostname 'linux'. Feb 12 20:25:38.783814 systemd[1]: Started systemd-resolved.service. Feb 12 20:25:38.784464 systemd[1]: Reached target network.target. Feb 12 20:25:38.785004 systemd[1]: Reached target nss-lookup.target. Feb 12 20:25:38.785605 systemd[1]: Reached target sysinit.target. Feb 12 20:25:38.786197 systemd[1]: Started motdgen.path. 
Feb 12 20:25:38.786691 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:25:38.787537 systemd[1]: Started logrotate.timer. Feb 12 20:25:38.788099 systemd[1]: Started mdadm.timer. Feb 12 20:25:38.788567 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:25:38.789133 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:25:38.789155 systemd[1]: Reached target paths.target. Feb 12 20:25:38.789689 systemd[1]: Reached target timers.target. Feb 12 20:25:38.790453 systemd[1]: Listening on dbus.socket. Feb 12 20:25:38.792267 systemd[1]: Starting docker.socket... Feb 12 20:25:38.793652 systemd[1]: Listening on sshd.socket. Feb 12 20:25:38.794330 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:38.794655 systemd[1]: Listening on docker.socket. Feb 12 20:25:38.795232 systemd[1]: Reached target sockets.target. Feb 12 20:25:38.795952 systemd[1]: Reached target basic.target. Feb 12 20:25:38.796648 systemd[1]: System is tainted: cgroupsv1 Feb 12 20:25:38.796692 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:25:38.796713 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:25:38.797841 systemd[1]: Starting containerd.service... Feb 12 20:25:38.799693 systemd[1]: Starting dbus.service... Feb 12 20:25:38.801245 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:25:38.803608 systemd[1]: Starting extend-filesystems.service... Feb 12 20:25:38.804373 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Feb 12 20:25:38.805743 systemd[1]: Starting motdgen.service... Feb 12 20:25:38.806695 jq[1158]: false Feb 12 20:25:38.807479 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:25:38.808969 systemd[1]: Starting prepare-critools.service... Feb 12 20:25:38.810428 systemd[1]: Starting prepare-helm.service... Feb 12 20:25:38.811999 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:25:38.813731 systemd[1]: Starting sshd-keygen.service... Feb 12 20:25:38.816048 systemd[1]: Starting systemd-logind.service... Feb 12 20:25:38.817151 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:25:38.817215 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:25:38.818236 systemd[1]: Starting update-engine.service... Feb 12 20:25:38.819734 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:25:38.823000 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:25:38.823228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:25:38.829377 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:25:38.838388 tar[1179]: ./ Feb 12 20:25:38.838388 tar[1179]: ./macvlan Feb 12 20:25:38.829766 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:25:38.837771 dbus-daemon[1156]: [system] SELinux support is enabled Feb 12 20:25:38.838399 systemd[1]: Started dbus.service. Feb 12 20:25:38.839288 jq[1176]: true Feb 12 20:25:38.840919 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:25:38.840945 systemd[1]: Reached target system-config.target. 
Feb 12 20:25:38.841620 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:25:38.841632 systemd[1]: Reached target user-config.target. Feb 12 20:25:38.851736 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:25:38.852055 systemd[1]: Finished motdgen.service. Feb 12 20:25:38.853901 tar[1182]: linux-amd64/helm Feb 12 20:25:38.858365 tar[1180]: crictl Feb 12 20:25:38.861241 jq[1190]: true Feb 12 20:25:38.863929 extend-filesystems[1159]: Found sr0 Feb 12 20:25:38.866360 extend-filesystems[1159]: Found vda Feb 12 20:25:38.866901 extend-filesystems[1159]: Found vda1 Feb 12 20:25:38.866901 extend-filesystems[1159]: Found vda2 Feb 12 20:25:38.867899 extend-filesystems[1159]: Found vda3 Feb 12 20:25:38.867899 extend-filesystems[1159]: Found usr Feb 12 20:25:38.867899 extend-filesystems[1159]: Found vda4 Feb 12 20:25:38.869506 extend-filesystems[1159]: Found vda6 Feb 12 20:25:38.869506 extend-filesystems[1159]: Found vda7 Feb 12 20:25:38.869506 extend-filesystems[1159]: Found vda9 Feb 12 20:25:38.869506 extend-filesystems[1159]: Checking size of /dev/vda9 Feb 12 20:25:38.884343 extend-filesystems[1159]: Resized partition /dev/vda9 Feb 12 20:25:38.892037 extend-filesystems[1215]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:25:38.896226 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:25:38.912211 update_engine[1175]: I0212 20:25:38.912024 1175 main.cc:92] Flatcar Update Engine starting Feb 12 20:25:38.928299 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 20:25:38.950738 systemd[1]: Started update-engine.service. Feb 12 20:25:38.951640 update_engine[1175]: I0212 20:25:38.950845 1175 update_check_scheduler.cc:74] Next update check in 11m16s Feb 12 20:25:38.953102 systemd[1]: Started locksmithd.service. 
Feb 12 20:25:38.967980 extend-filesystems[1215]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:25:38.967980 extend-filesystems[1215]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:25:38.967980 extend-filesystems[1215]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 20:25:38.983409 extend-filesystems[1159]: Resized filesystem in /dev/vda9 Feb 12 20:25:38.986092 bash[1223]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:25:38.968502 systemd-logind[1172]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:25:38.986392 tar[1179]: ./static Feb 12 20:25:38.968521 systemd-logind[1172]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:25:38.968711 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:25:38.968980 systemd[1]: Finished extend-filesystems.service. Feb 12 20:25:38.969051 systemd-logind[1172]: New seat seat0. Feb 12 20:25:38.973717 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:25:38.981471 systemd[1]: Started systemd-logind.service. Feb 12 20:25:39.006788 env[1189]: time="2024-02-12T20:25:39.006725102Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:25:39.017189 tar[1179]: ./vlan Feb 12 20:25:39.055641 tar[1179]: ./portmap Feb 12 20:25:39.075313 env[1189]: time="2024-02-12T20:25:39.075229006Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:25:39.075517 env[1189]: time="2024-02-12T20:25:39.075469557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:39.086708 env[1189]: time="2024-02-12T20:25:39.086638617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:25:39.086708 env[1189]: time="2024-02-12T20:25:39.086706524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087159 env[1189]: time="2024-02-12T20:25:39.087126923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087159 env[1189]: time="2024-02-12T20:25:39.087156157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087252 env[1189]: time="2024-02-12T20:25:39.087190692Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:25:39.087252 env[1189]: time="2024-02-12T20:25:39.087203917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087326 env[1189]: time="2024-02-12T20:25:39.087306429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087638 env[1189]: time="2024-02-12T20:25:39.087612323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087847 env[1189]: time="2024-02-12T20:25:39.087818830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:25:39.087847 env[1189]: time="2024-02-12T20:25:39.087844318Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:25:39.087939 env[1189]: time="2024-02-12T20:25:39.087913438Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:25:39.087981 env[1189]: time="2024-02-12T20:25:39.087935489Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:25:39.094701 tar[1179]: ./host-local Feb 12 20:25:39.095297 env[1189]: time="2024-02-12T20:25:39.095259076Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:25:39.095346 env[1189]: time="2024-02-12T20:25:39.095306615Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:25:39.095346 env[1189]: time="2024-02-12T20:25:39.095326392Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:25:39.095385 env[1189]: time="2024-02-12T20:25:39.095363562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095385 env[1189]: time="2024-02-12T20:25:39.095381085Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095440 env[1189]: time="2024-02-12T20:25:39.095397085Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095440 env[1189]: time="2024-02-12T20:25:39.095412704Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 12 20:25:39.095440 env[1189]: time="2024-02-12T20:25:39.095429356Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095505 env[1189]: time="2024-02-12T20:25:39.095445907Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095505 env[1189]: time="2024-02-12T20:25:39.095464111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095505 env[1189]: time="2024-02-12T20:25:39.095489749Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.095562 env[1189]: time="2024-02-12T20:25:39.095506009Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:25:39.095645 env[1189]: time="2024-02-12T20:25:39.095620274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:25:39.095733 env[1189]: time="2024-02-12T20:25:39.095707056Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:25:39.096101 env[1189]: time="2024-02-12T20:25:39.096075287Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:25:39.096145 env[1189]: time="2024-02-12T20:25:39.096109772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096145 env[1189]: time="2024-02-12T20:25:39.096126714Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:25:39.096203 env[1189]: time="2024-02-12T20:25:39.096194120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 20:25:39.096226 env[1189]: time="2024-02-12T20:25:39.096210791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096249 env[1189]: time="2024-02-12T20:25:39.096227873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096270 env[1189]: time="2024-02-12T20:25:39.096245276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096270 env[1189]: time="2024-02-12T20:25:39.096262458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096309 env[1189]: time="2024-02-12T20:25:39.096278047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096309 env[1189]: time="2024-02-12T20:25:39.096292615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096352 env[1189]: time="2024-02-12T20:25:39.096307383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096352 env[1189]: time="2024-02-12T20:25:39.096324314Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:25:39.096470 env[1189]: time="2024-02-12T20:25:39.096445602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096526 env[1189]: time="2024-02-12T20:25:39.096483874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096526 env[1189]: time="2024-02-12T20:25:39.096502839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 12 20:25:39.096526 env[1189]: time="2024-02-12T20:25:39.096520092Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:25:39.096583 env[1189]: time="2024-02-12T20:25:39.096540840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:25:39.096583 env[1189]: time="2024-02-12T20:25:39.096554696Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:25:39.096583 env[1189]: time="2024-02-12T20:25:39.096577810Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:25:39.096646 env[1189]: time="2024-02-12T20:25:39.096618506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:25:39.096906 env[1189]: time="2024-02-12T20:25:39.096834882Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:25:39.097507 env[1189]: time="2024-02-12T20:25:39.096911566Z" level=info msg="Connect containerd service" Feb 12 20:25:39.097507 env[1189]: time="2024-02-12T20:25:39.096950709Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:25:39.097507 env[1189]: time="2024-02-12T20:25:39.097497745Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:25:39.097904 env[1189]: time="2024-02-12T20:25:39.097880503Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:25:39.097954 env[1189]: time="2024-02-12T20:25:39.097929756Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 12 20:25:39.097991 env[1189]: time="2024-02-12T20:25:39.097973939Z" level=info msg="containerd successfully booted in 0.092054s" Feb 12 20:25:39.098095 systemd[1]: Started containerd.service. Feb 12 20:25:39.102765 env[1189]: time="2024-02-12T20:25:39.102692649Z" level=info msg="Start subscribing containerd event" Feb 12 20:25:39.103211 env[1189]: time="2024-02-12T20:25:39.103188750Z" level=info msg="Start recovering state" Feb 12 20:25:39.103322 env[1189]: time="2024-02-12T20:25:39.103298676Z" level=info msg="Start event monitor" Feb 12 20:25:39.103376 env[1189]: time="2024-02-12T20:25:39.103322651Z" level=info msg="Start snapshots syncer" Feb 12 20:25:39.103376 env[1189]: time="2024-02-12T20:25:39.103346776Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:25:39.103376 env[1189]: time="2024-02-12T20:25:39.103361775Z" level=info msg="Start streaming server" Feb 12 20:25:39.132018 tar[1179]: ./vrf Feb 12 20:25:39.167653 tar[1179]: ./bridge Feb 12 20:25:39.210224 tar[1179]: ./tuning Feb 12 20:25:39.243687 tar[1179]: ./firewall Feb 12 20:25:39.292934 tar[1179]: ./host-device Feb 12 20:25:39.372625 tar[1179]: ./sbr Feb 12 20:25:39.406166 tar[1179]: ./loopback Feb 12 20:25:39.437758 tar[1179]: ./dhcp Feb 12 20:25:39.539116 systemd[1]: Finished prepare-critools.service. Feb 12 20:25:39.542139 tar[1179]: ./ptp Feb 12 20:25:39.555206 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:25:39.574622 tar[1179]: ./ipvlan Feb 12 20:25:39.618137 tar[1182]: linux-amd64/LICENSE Feb 12 20:25:39.618310 tar[1182]: linux-amd64/README.md Feb 12 20:25:39.622215 systemd[1]: Finished prepare-helm.service. Feb 12 20:25:39.625270 tar[1179]: ./bandwidth Feb 12 20:25:39.662564 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 12 20:25:39.683072 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:25:39.700621 systemd[1]: Finished sshd-keygen.service. Feb 12 20:25:39.702633 systemd[1]: Starting issuegen.service... Feb 12 20:25:39.706933 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:25:39.707085 systemd[1]: Finished issuegen.service. Feb 12 20:25:39.708734 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:25:39.713148 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:25:39.714871 systemd[1]: Started getty@tty1.service. Feb 12 20:25:39.716349 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:25:39.717137 systemd[1]: Reached target getty.target. Feb 12 20:25:39.717816 systemd[1]: Reached target multi-user.target. Feb 12 20:25:39.719502 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:25:39.726901 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:25:39.727079 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:25:39.728015 systemd[1]: Startup finished in 6.607s (kernel) + 7.407s (userspace) = 14.014s. Feb 12 20:25:39.857363 systemd-networkd[1084]: eth0: Gained IPv6LL Feb 12 20:25:48.432073 systemd[1]: Created slice system-sshd.slice. Feb 12 20:25:48.433345 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:50210.service. Feb 12 20:25:48.469816 sshd[1269]: Accepted publickey for core from 10.0.0.1 port 50210 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:48.471107 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:48.479007 systemd[1]: Created slice user-500.slice. Feb 12 20:25:48.479884 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:25:48.481527 systemd-logind[1172]: New session 1 of user core. Feb 12 20:25:48.488079 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:25:48.489210 systemd[1]: Starting user@500.service... 
Feb 12 20:25:48.492687 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:48.565066 systemd[1274]: Queued start job for default target default.target. Feb 12 20:25:48.565327 systemd[1274]: Reached target paths.target. Feb 12 20:25:48.565342 systemd[1274]: Reached target sockets.target. Feb 12 20:25:48.565353 systemd[1274]: Reached target timers.target. Feb 12 20:25:48.565363 systemd[1274]: Reached target basic.target. Feb 12 20:25:48.565419 systemd[1274]: Reached target default.target. Feb 12 20:25:48.565450 systemd[1274]: Startup finished in 67ms. Feb 12 20:25:48.565529 systemd[1]: Started user@500.service. Feb 12 20:25:48.566471 systemd[1]: Started session-1.scope. Feb 12 20:25:48.616638 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:50220.service. Feb 12 20:25:48.648303 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 50220 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:48.649400 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:48.653200 systemd-logind[1172]: New session 2 of user core. Feb 12 20:25:48.653998 systemd[1]: Started session-2.scope. Feb 12 20:25:48.707570 sshd[1283]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:48.709838 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:50228.service. Feb 12 20:25:48.710405 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:50220.service: Deactivated successfully. Feb 12 20:25:48.711300 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:25:48.711318 systemd-logind[1172]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:25:48.712014 systemd-logind[1172]: Removed session 2. 
Feb 12 20:25:48.737954 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 50228 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:48.738701 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:48.741854 systemd-logind[1172]: New session 3 of user core. Feb 12 20:25:48.742605 systemd[1]: Started session-3.scope. Feb 12 20:25:48.791726 sshd[1289]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:48.793717 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:50242.service. Feb 12 20:25:48.794502 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:50228.service: Deactivated successfully. Feb 12 20:25:48.795188 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:25:48.795234 systemd-logind[1172]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:25:48.795968 systemd-logind[1172]: Removed session 3. Feb 12 20:25:48.822922 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:48.823817 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:48.826796 systemd-logind[1172]: New session 4 of user core. Feb 12 20:25:48.827653 systemd[1]: Started session-4.scope. Feb 12 20:25:48.880627 sshd[1295]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:48.882753 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:50248.service. Feb 12 20:25:48.883415 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:50242.service: Deactivated successfully. Feb 12 20:25:48.884227 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:25:48.884574 systemd-logind[1172]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:25:48.885210 systemd-logind[1172]: Removed session 4. 
Feb 12 20:25:48.912477 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 50248 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:25:48.913320 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:48.916630 systemd-logind[1172]: New session 5 of user core. Feb 12 20:25:48.917516 systemd[1]: Started session-5.scope. Feb 12 20:25:48.971071 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:25:48.971248 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:25:49.488403 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:25:49.493128 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:25:49.493432 systemd[1]: Reached target network-online.target. Feb 12 20:25:49.494759 systemd[1]: Starting docker.service... Feb 12 20:25:49.524492 env[1327]: time="2024-02-12T20:25:49.524437524Z" level=info msg="Starting up" Feb 12 20:25:49.525520 env[1327]: time="2024-02-12T20:25:49.525495538Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:25:49.525566 env[1327]: time="2024-02-12T20:25:49.525522990Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:25:49.525566 env[1327]: time="2024-02-12T20:25:49.525539451Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:25:49.525566 env[1327]: time="2024-02-12T20:25:49.525547997Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:25:49.526930 env[1327]: time="2024-02-12T20:25:49.526907076Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:25:49.526930 env[1327]: time="2024-02-12T20:25:49.526923918Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:25:49.526993 env[1327]: 
time="2024-02-12T20:25:49.526936782Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:25:49.526993 env[1327]: time="2024-02-12T20:25:49.526946129Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:25:49.532598 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2195825585-merged.mount: Deactivated successfully. Feb 12 20:25:50.734126 env[1327]: time="2024-02-12T20:25:50.734065641Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 20:25:50.734126 env[1327]: time="2024-02-12T20:25:50.734100706Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 20:25:50.734775 env[1327]: time="2024-02-12T20:25:50.734270795Z" level=info msg="Loading containers: start." Feb 12 20:25:50.831205 kernel: Initializing XFRM netlink socket Feb 12 20:25:50.857691 env[1327]: time="2024-02-12T20:25:50.857640031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 20:25:50.902414 systemd-networkd[1084]: docker0: Link UP Feb 12 20:25:50.916184 env[1327]: time="2024-02-12T20:25:50.916121145Z" level=info msg="Loading containers: done." Feb 12 20:25:50.927743 env[1327]: time="2024-02-12T20:25:50.927697348Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:25:50.927897 env[1327]: time="2024-02-12T20:25:50.927877085Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:25:50.927992 env[1327]: time="2024-02-12T20:25:50.927973697Z" level=info msg="Daemon has completed initialization" Feb 12 20:25:50.951101 systemd[1]: Started docker.service. 
Feb 12 20:25:50.955322 env[1327]: time="2024-02-12T20:25:50.955253716Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:25:50.975296 systemd[1]: Reloading. Feb 12 20:25:51.042386 /usr/lib/systemd/system-generators/torcx-generator[1469]: time="2024-02-12T20:25:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:25:51.042420 /usr/lib/systemd/system-generators/torcx-generator[1469]: time="2024-02-12T20:25:51Z" level=info msg="torcx already run" Feb 12 20:25:51.115651 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:25:51.115670 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:25:51.135495 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:25:51.210890 systemd[1]: Started kubelet.service. Feb 12 20:25:51.265582 kubelet[1515]: E0212 20:25:51.265514 1515 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:25:51.267744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:25:51.267905 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 20:25:51.599842 env[1189]: time="2024-02-12T20:25:51.599785333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 20:25:52.910121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084583397.mount: Deactivated successfully. Feb 12 20:25:57.732397 env[1189]: time="2024-02-12T20:25:57.732330455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:57.764444 env[1189]: time="2024-02-12T20:25:57.764378809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:57.784989 env[1189]: time="2024-02-12T20:25:57.784929115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:57.813769 env[1189]: time="2024-02-12T20:25:57.813718756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:57.814756 env[1189]: time="2024-02-12T20:25:57.814718141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 20:25:57.953253 env[1189]: time="2024-02-12T20:25:57.953208639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 20:26:01.518767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:26:01.518962 systemd[1]: Stopped kubelet.service. Feb 12 20:26:01.520300 systemd[1]: Started kubelet.service. 
Feb 12 20:26:01.583116 kubelet[1547]: E0212 20:26:01.583062 1547 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:26:01.586669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:26:01.586841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:26:03.263891 env[1189]: time="2024-02-12T20:26:03.263819047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:03.302544 env[1189]: time="2024-02-12T20:26:03.302474413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:03.316885 env[1189]: time="2024-02-12T20:26:03.316803049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:03.334082 env[1189]: time="2024-02-12T20:26:03.334020244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:03.334941 env[1189]: time="2024-02-12T20:26:03.334881600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 20:26:03.349627 env[1189]: time="2024-02-12T20:26:03.349589848Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 20:26:06.241062 env[1189]: 
time="2024-02-12T20:26:06.240991792Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:06.295841 env[1189]: time="2024-02-12T20:26:06.295769178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:06.337282 env[1189]: time="2024-02-12T20:26:06.337225298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:06.372893 env[1189]: time="2024-02-12T20:26:06.372825564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:06.373734 env[1189]: time="2024-02-12T20:26:06.373658977Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 20:26:06.385804 env[1189]: time="2024-02-12T20:26:06.385732684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:26:08.742037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824505474.mount: Deactivated successfully. 
Feb 12 20:26:09.679873 env[1189]: time="2024-02-12T20:26:09.679812336Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:09.820661 env[1189]: time="2024-02-12T20:26:09.820577441Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:09.863335 env[1189]: time="2024-02-12T20:26:09.863278486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:09.895973 env[1189]: time="2024-02-12T20:26:09.895918960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:09.896509 env[1189]: time="2024-02-12T20:26:09.896474673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:26:09.905502 env[1189]: time="2024-02-12T20:26:09.905469554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:26:11.778324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 20:26:11.778538 systemd[1]: Stopped kubelet.service. Feb 12 20:26:11.779993 systemd[1]: Started kubelet.service. 
Feb 12 20:26:11.831309 kubelet[1574]: E0212 20:26:11.831252 1574 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:26:11.832855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:26:11.833027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:26:12.043334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24489407.mount: Deactivated successfully. Feb 12 20:26:12.262460 env[1189]: time="2024-02-12T20:26:12.262404860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:12.304139 env[1189]: time="2024-02-12T20:26:12.304023794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:12.373058 env[1189]: time="2024-02-12T20:26:12.372989049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:12.411537 env[1189]: time="2024-02-12T20:26:12.411498425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:12.412095 env[1189]: time="2024-02-12T20:26:12.412069986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:26:12.421597 env[1189]: time="2024-02-12T20:26:12.421556754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 20:26:13.124450 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2632162190.mount: Deactivated successfully. Feb 12 20:26:18.316338 env[1189]: time="2024-02-12T20:26:18.316273381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:18.318195 env[1189]: time="2024-02-12T20:26:18.318137392Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:18.319628 env[1189]: time="2024-02-12T20:26:18.319592941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:18.321197 env[1189]: time="2024-02-12T20:26:18.321154523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:18.321863 env[1189]: time="2024-02-12T20:26:18.321832037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 20:26:18.330493 env[1189]: time="2024-02-12T20:26:18.330458510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 20:26:18.829690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780040110.mount: Deactivated successfully. 
Feb 12 20:26:19.514777 env[1189]: time="2024-02-12T20:26:19.514702159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:19.516642 env[1189]: time="2024-02-12T20:26:19.516613164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:19.518199 env[1189]: time="2024-02-12T20:26:19.518147431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:19.519523 env[1189]: time="2024-02-12T20:26:19.519499049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:19.520047 env[1189]: time="2024-02-12T20:26:19.520015865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 20:26:21.394323 systemd[1]: Stopped kubelet.service. Feb 12 20:26:21.404774 systemd[1]: Reloading. 
Feb 12 20:26:21.464691 /usr/lib/systemd/system-generators/torcx-generator[1685]: time="2024-02-12T20:26:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:26:21.464726 /usr/lib/systemd/system-generators/torcx-generator[1685]: time="2024-02-12T20:26:21Z" level=info msg="torcx already run" Feb 12 20:26:21.522005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:26:21.522019 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:26:21.538086 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:26:21.608831 systemd[1]: Started kubelet.service. Feb 12 20:26:21.644764 kubelet[1733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:26:21.644764 kubelet[1733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:26:21.645196 kubelet[1733]: I0212 20:26:21.644903 1733 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:26:21.646041 kubelet[1733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 12 20:26:21.646041 kubelet[1733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:26:21.976015 kubelet[1733]: I0212 20:26:21.975913 1733 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:26:21.976015 kubelet[1733]: I0212 20:26:21.975941 1733 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:26:21.976210 kubelet[1733]: I0212 20:26:21.976158 1733 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:26:21.978637 kubelet[1733]: I0212 20:26:21.978603 1733 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:26:21.979183 kubelet[1733]: E0212 20:26:21.979157 1733 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.982311 kubelet[1733]: I0212 20:26:21.982292 1733 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:26:21.982567 kubelet[1733]: I0212 20:26:21.982548 1733 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:26:21.982616 kubelet[1733]: I0212 20:26:21.982605 1733 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:26:21.982691 kubelet[1733]: I0212 20:26:21.982624 1733 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:26:21.982691 kubelet[1733]: I0212 20:26:21.982635 1733 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:26:21.982737 kubelet[1733]: I0212 20:26:21.982709 1733 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 20:26:21.985025 kubelet[1733]: I0212 20:26:21.985010 1733 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:26:21.985025 kubelet[1733]: I0212 20:26:21.985028 1733 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:26:21.985097 kubelet[1733]: I0212 20:26:21.985044 1733 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:26:21.985097 kubelet[1733]: I0212 20:26:21.985056 1733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:26:21.985730 kubelet[1733]: W0212 20:26:21.985684 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.985730 kubelet[1733]: E0212 20:26:21.985732 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.985825 kubelet[1733]: I0212 20:26:21.985798 1733 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:26:21.986008 kubelet[1733]: W0212 20:26:21.985989 1733 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 20:26:21.986336 kubelet[1733]: I0212 20:26:21.986319 1733 server.go:1186] "Started kubelet" Feb 12 20:26:21.987303 kubelet[1733]: E0212 20:26:21.987278 1733 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:26:21.987389 kubelet[1733]: E0212 20:26:21.987375 1733 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:26:21.987742 kubelet[1733]: W0212 20:26:21.987700 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.987809 kubelet[1733]: E0212 20:26:21.987745 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.987902 kubelet[1733]: I0212 20:26:21.987881 1733 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:26:21.988644 kubelet[1733]: E0212 20:26:21.988550 1733 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b33762fc82d051", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 26, 21, 986304081, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 26, 21, 986304081, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.91:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.91:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:26:21.988757 kubelet[1733]: I0212 20:26:21.988669 1733 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:26:21.990205 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 20:26:21.990283 kubelet[1733]: I0212 20:26:21.990266 1733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:26:21.991295 kubelet[1733]: I0212 20:26:21.991281 1733 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:26:21.991637 kubelet[1733]: I0212 20:26:21.991614 1733 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:26:21.992795 kubelet[1733]: W0212 20:26:21.992753 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.992795 kubelet[1733]: E0212 20:26:21.992797 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:21.992873 kubelet[1733]: E0212 20:26:21.992849 1733 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.019627 kubelet[1733]: I0212 20:26:22.019594 1733 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 20:26:22.026794 kubelet[1733]: I0212 20:26:22.026757 1733 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:26:22.026794 kubelet[1733]: I0212 20:26:22.026787 1733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:26:22.026890 kubelet[1733]: I0212 20:26:22.026802 1733 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:26:22.029568 kubelet[1733]: I0212 20:26:22.029548 1733 policy_none.go:49] "None policy: Start" Feb 12 20:26:22.030066 kubelet[1733]: I0212 20:26:22.030042 1733 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:26:22.030066 kubelet[1733]: I0212 20:26:22.030068 1733 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:26:22.036727 kubelet[1733]: I0212 20:26:22.036704 1733 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:26:22.036937 kubelet[1733]: I0212 20:26:22.036917 1733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:26:22.037933 kubelet[1733]: E0212 20:26:22.037915 1733 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 20:26:22.039723 kubelet[1733]: I0212 20:26:22.039698 1733 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:26:22.039723 kubelet[1733]: I0212 20:26:22.039721 1733 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:26:22.039805 kubelet[1733]: I0212 20:26:22.039735 1733 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:26:22.039805 kubelet[1733]: E0212 20:26:22.039797 1733 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:26:22.040333 kubelet[1733]: W0212 20:26:22.040282 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.040384 kubelet[1733]: E0212 20:26:22.040340 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.093275 kubelet[1733]: I0212 20:26:22.093246 1733 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:26:22.093655 kubelet[1733]: E0212 20:26:22.093623 1733 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Feb 12 20:26:22.140829 kubelet[1733]: I0212 20:26:22.140781 1733 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:22.141961 kubelet[1733]: I0212 20:26:22.141948 1733 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:22.142796 kubelet[1733]: I0212 20:26:22.142760 1733 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:22.143377 kubelet[1733]: I0212 20:26:22.143317 1733 status_manager.go:698] "Failed to get status for pod" podUID=e61bfceaf3bc6dadf1999fc1fa06c931 
pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.91:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.91:6443: connect: connection refused" Feb 12 20:26:22.143683 kubelet[1733]: I0212 20:26:22.143652 1733 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.91:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.91:6443: connect: connection refused" Feb 12 20:26:22.144239 kubelet[1733]: I0212 20:26:22.144225 1733 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.91:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.91:6443: connect: connection refused" Feb 12 20:26:22.193851 kubelet[1733]: E0212 20:26:22.193814 1733 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.293150 kubelet[1733]: I0212 20:26:22.293127 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e61bfceaf3bc6dadf1999fc1fa06c931-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e61bfceaf3bc6dadf1999fc1fa06c931\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:26:22.293273 kubelet[1733]: I0212 20:26:22.293161 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 12 20:26:22.293273 kubelet[1733]: I0212 20:26:22.293201 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:26:22.293273 kubelet[1733]: I0212 20:26:22.293230 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:26:22.293354 kubelet[1733]: I0212 20:26:22.293274 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:26:22.293354 kubelet[1733]: I0212 20:26:22.293308 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e61bfceaf3bc6dadf1999fc1fa06c931-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61bfceaf3bc6dadf1999fc1fa06c931\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:26:22.293354 kubelet[1733]: I0212 20:26:22.293326 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e61bfceaf3bc6dadf1999fc1fa06c931-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61bfceaf3bc6dadf1999fc1fa06c931\") " 
pod="kube-system/kube-apiserver-localhost" Feb 12 20:26:22.293354 kubelet[1733]: I0212 20:26:22.293345 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:26:22.293440 kubelet[1733]: I0212 20:26:22.293369 1733 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:26:22.295045 kubelet[1733]: I0212 20:26:22.295021 1733 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:26:22.295348 kubelet[1733]: E0212 20:26:22.295333 1733 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Feb 12 20:26:22.446099 kubelet[1733]: E0212 20:26:22.446056 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:22.446761 env[1189]: time="2024-02-12T20:26:22.446720584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e61bfceaf3bc6dadf1999fc1fa06c931,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:22.448980 kubelet[1733]: E0212 20:26:22.448954 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:22.449050 kubelet[1733]: E0212 20:26:22.448961 1733 
dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:22.449412 env[1189]: time="2024-02-12T20:26:22.449386795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:22.449511 env[1189]: time="2024-02-12T20:26:22.449473700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:22.594830 kubelet[1733]: E0212 20:26:22.594709 1733 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.697111 kubelet[1733]: I0212 20:26:22.697072 1733 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:26:22.697462 kubelet[1733]: E0212 20:26:22.697317 1733 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Feb 12 20:26:22.839466 kubelet[1733]: W0212 20:26:22.839402 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.839466 kubelet[1733]: E0212 20:26:22.839471 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.979289 kubelet[1733]: W0212 
20:26:22.979158 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.979289 kubelet[1733]: E0212 20:26:22.979210 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:22.994837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299745154.mount: Deactivated successfully. Feb 12 20:26:23.000189 env[1189]: time="2024-02-12T20:26:23.000139767Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.003828 env[1189]: time="2024-02-12T20:26:23.003774515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.004526 env[1189]: time="2024-02-12T20:26:23.004498311Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.005272 env[1189]: time="2024-02-12T20:26:23.005224902Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.007647 env[1189]: time="2024-02-12T20:26:23.007609042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.008941 env[1189]: 
time="2024-02-12T20:26:23.008917738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.010122 env[1189]: time="2024-02-12T20:26:23.010073275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.012244 env[1189]: time="2024-02-12T20:26:23.012217720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.014478 env[1189]: time="2024-02-12T20:26:23.014415025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.015789 env[1189]: time="2024-02-12T20:26:23.015751044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.017401 env[1189]: time="2024-02-12T20:26:23.017360492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.018115 env[1189]: time="2024-02-12T20:26:23.018092864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:23.038631 env[1189]: time="2024-02-12T20:26:23.034748254Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:23.038631 env[1189]: time="2024-02-12T20:26:23.034824589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:23.038631 env[1189]: time="2024-02-12T20:26:23.034845488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:23.040050 env[1189]: time="2024-02-12T20:26:23.038942454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccf08a9ab90221f77ad2532e2f572067bf27f4fe6ec0e69bfe85940d9538e7de pid=1811 runtime=io.containerd.runc.v2 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.043707860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.043746534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.043759969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.043893172Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe9f568bd5b1504f69a99db9a54bb6e836909f461be029580cbd41e9185a15f1 pid=1841 runtime=io.containerd.runc.v2 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.042659267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.042690447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.042699814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:23.044233 env[1189]: time="2024-02-12T20:26:23.042849199Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53efff59b8b85d936b3c2a878702b4f752c8f6bfbbf5bf697a820afbe768dd94 pid=1834 runtime=io.containerd.runc.v2 Feb 12 20:26:23.069614 kubelet[1733]: W0212 20:26:23.069573 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:23.069614 kubelet[1733]: E0212 20:26:23.069625 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 12 20:26:23.087686 env[1189]: time="2024-02-12T20:26:23.087636657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccf08a9ab90221f77ad2532e2f572067bf27f4fe6ec0e69bfe85940d9538e7de\"" Feb 12 20:26:23.088621 kubelet[1733]: E0212 20:26:23.088591 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:23.098085 env[1189]: 
time="2024-02-12T20:26:23.098044444Z" level=info msg="CreateContainer within sandbox \"ccf08a9ab90221f77ad2532e2f572067bf27f4fe6ec0e69bfe85940d9538e7de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 20:26:23.099152 env[1189]: time="2024-02-12T20:26:23.099123184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"53efff59b8b85d936b3c2a878702b4f752c8f6bfbbf5bf697a820afbe768dd94\""
Feb 12 20:26:23.099554 kubelet[1733]: E0212 20:26:23.099527 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:23.101321 env[1189]: time="2024-02-12T20:26:23.101292928Z" level=info msg="CreateContainer within sandbox \"53efff59b8b85d936b3c2a878702b4f752c8f6bfbbf5bf697a820afbe768dd94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 20:26:23.103022 env[1189]: time="2024-02-12T20:26:23.101975605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e61bfceaf3bc6dadf1999fc1fa06c931,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe9f568bd5b1504f69a99db9a54bb6e836909f461be029580cbd41e9185a15f1\""
Feb 12 20:26:23.103433 kubelet[1733]: E0212 20:26:23.103346 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:23.105040 env[1189]: time="2024-02-12T20:26:23.105008738Z" level=info msg="CreateContainer within sandbox \"fe9f568bd5b1504f69a99db9a54bb6e836909f461be029580cbd41e9185a15f1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 20:26:23.186653 kubelet[1733]: W0212 20:26:23.186581 1733 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 12 20:26:23.186653 kubelet[1733]: E0212 20:26:23.186646 1733 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 12 20:26:23.197669 env[1189]: time="2024-02-12T20:26:23.197598435Z" level=info msg="CreateContainer within sandbox \"ccf08a9ab90221f77ad2532e2f572067bf27f4fe6ec0e69bfe85940d9538e7de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8cc3e61d01f54e3c85c45b0490f482f70f0f80ab67ce65626c17b924eed80886\""
Feb 12 20:26:23.198267 env[1189]: time="2024-02-12T20:26:23.198241657Z" level=info msg="StartContainer for \"8cc3e61d01f54e3c85c45b0490f482f70f0f80ab67ce65626c17b924eed80886\""
Feb 12 20:26:23.221448 env[1189]: time="2024-02-12T20:26:23.221362393Z" level=info msg="CreateContainer within sandbox \"53efff59b8b85d936b3c2a878702b4f752c8f6bfbbf5bf697a820afbe768dd94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1316f7ebc81d381093dd3544b596a3bfd3ca0fafaefed9480f07701903268d28\""
Feb 12 20:26:23.222128 env[1189]: time="2024-02-12T20:26:23.222103461Z" level=info msg="StartContainer for \"1316f7ebc81d381093dd3544b596a3bfd3ca0fafaefed9480f07701903268d28\""
Feb 12 20:26:23.223165 env[1189]: time="2024-02-12T20:26:23.223124942Z" level=info msg="CreateContainer within sandbox \"fe9f568bd5b1504f69a99db9a54bb6e836909f461be029580cbd41e9185a15f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9f9875b3fa1166e0a576f6c62e5605cdb449dad944ab5aeb00b047a5423426be\""
Feb 12 20:26:23.223527 env[1189]: time="2024-02-12T20:26:23.223416716Z" level=info msg="StartContainer for \"9f9875b3fa1166e0a576f6c62e5605cdb449dad944ab5aeb00b047a5423426be\""
Feb 12 20:26:23.252558 env[1189]: time="2024-02-12T20:26:23.250169464Z" level=info msg="StartContainer for \"8cc3e61d01f54e3c85c45b0490f482f70f0f80ab67ce65626c17b924eed80886\" returns successfully"
Feb 12 20:26:23.289903 env[1189]: time="2024-02-12T20:26:23.289846421Z" level=info msg="StartContainer for \"1316f7ebc81d381093dd3544b596a3bfd3ca0fafaefed9480f07701903268d28\" returns successfully"
Feb 12 20:26:23.298545 env[1189]: time="2024-02-12T20:26:23.298497401Z" level=info msg="StartContainer for \"9f9875b3fa1166e0a576f6c62e5605cdb449dad944ab5aeb00b047a5423426be\" returns successfully"
Feb 12 20:26:23.498673 kubelet[1733]: I0212 20:26:23.498646 1733 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 20:26:23.740987 update_engine[1175]: I0212 20:26:23.740923 1175 update_attempter.cc:509] Updating boot flags...
Feb 12 20:26:24.045741 kubelet[1733]: E0212 20:26:24.045400 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:24.047662 kubelet[1733]: E0212 20:26:24.047650 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:24.049205 kubelet[1733]: E0212 20:26:24.049194 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:24.745453 kubelet[1733]: E0212 20:26:24.745393 1733 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 12 20:26:24.833348 kubelet[1733]: I0212 20:26:24.833308 1733 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 20:26:24.840830 kubelet[1733]: E0212 20:26:24.840801 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:24.941341 kubelet[1733]: E0212 20:26:24.941299 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.042447 kubelet[1733]: E0212 20:26:25.042416 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.050360 kubelet[1733]: E0212 20:26:25.050344 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:25.050690 kubelet[1733]: E0212 20:26:25.050678 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:25.050742 kubelet[1733]: E0212 20:26:25.050719 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:25.142824 kubelet[1733]: E0212 20:26:25.142783 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.243661 kubelet[1733]: E0212 20:26:25.243606 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.344224 kubelet[1733]: E0212 20:26:25.344089 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.444562 kubelet[1733]: E0212 20:26:25.444505 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.545165 kubelet[1733]: E0212 20:26:25.545113 1733 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 12 20:26:25.988312 kubelet[1733]: I0212 20:26:25.988274 1733 apiserver.go:52] "Watching apiserver"
Feb 12 20:26:26.192250 kubelet[1733]: I0212 20:26:26.192209 1733 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 20:26:26.216341 kubelet[1733]: I0212 20:26:26.216292 1733 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:26:26.392033 kubelet[1733]: E0212 20:26:26.391987 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:27.052314 kubelet[1733]: E0212 20:26:27.052276 1733 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:27.191582 systemd[1]: Reloading.
Feb 12 20:26:27.247688 /usr/lib/systemd/system-generators/torcx-generator[2079]: time="2024-02-12T20:26:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:26:27.248109 /usr/lib/systemd/system-generators/torcx-generator[2079]: time="2024-02-12T20:26:27Z" level=info msg="torcx already run"
Feb 12 20:26:27.308780 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:26:27.308795 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:26:27.325388 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:26:27.407587 systemd[1]: Stopping kubelet.service...
Feb 12 20:26:27.426601 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 20:26:27.426979 systemd[1]: Stopped kubelet.service.
Feb 12 20:26:27.428943 systemd[1]: Started kubelet.service.
Feb 12 20:26:27.489498 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:26:27.489498 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:26:27.489905 kubelet[2127]: I0212 20:26:27.489540 2127 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:26:27.491273 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:26:27.491273 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:26:27.494041 kubelet[2127]: I0212 20:26:27.493991 2127 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 20:26:27.494041 kubelet[2127]: I0212 20:26:27.494017 2127 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:26:27.494323 kubelet[2127]: I0212 20:26:27.494271 2127 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 20:26:27.495483 kubelet[2127]: I0212 20:26:27.495462 2127 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 20:26:27.496249 kubelet[2127]: I0212 20:26:27.496216 2127 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:26:27.505274 kubelet[2127]: I0212 20:26:27.505230 2127 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 20:26:27.505605 kubelet[2127]: I0212 20:26:27.505586 2127 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:26:27.505671 kubelet[2127]: I0212 20:26:27.505658 2127 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 20:26:27.505784 kubelet[2127]: I0212 20:26:27.505678 2127 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 20:26:27.505784 kubelet[2127]: I0212 20:26:27.505690 2127 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 20:26:27.505784 kubelet[2127]: I0212 20:26:27.505723 2127 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:26:27.508868 kubelet[2127]: I0212 20:26:27.508836 2127 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 20:26:27.508868 kubelet[2127]: I0212 20:26:27.508867 2127 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:26:27.508955 kubelet[2127]: I0212 20:26:27.508889 2127 kubelet.go:297] "Adding apiserver pod source"
Feb 12 20:26:27.508955 kubelet[2127]: I0212 20:26:27.508903 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:26:27.516378 kubelet[2127]: I0212 20:26:27.516351 2127 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:26:27.516898 kubelet[2127]: I0212 20:26:27.516860 2127 server.go:1186] "Started kubelet"
Feb 12 20:26:27.517144 kubelet[2127]: I0212 20:26:27.517128 2127 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:26:27.517916 kubelet[2127]: I0212 20:26:27.517899 2127 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 20:26:27.518573 kubelet[2127]: I0212 20:26:27.518542 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:26:27.526415 kubelet[2127]: I0212 20:26:27.526391 2127 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 20:26:27.526687 kubelet[2127]: I0212 20:26:27.526671 2127 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 20:26:27.529622 kubelet[2127]: E0212 20:26:27.529607 2127 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:26:27.529727 kubelet[2127]: E0212 20:26:27.529714 2127 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:26:27.555197 kubelet[2127]: I0212 20:26:27.555162 2127 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 20:26:27.571608 kubelet[2127]: I0212 20:26:27.571265 2127 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 20:26:27.571608 kubelet[2127]: I0212 20:26:27.571299 2127 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 20:26:27.571608 kubelet[2127]: I0212 20:26:27.571319 2127 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 20:26:27.571608 kubelet[2127]: E0212 20:26:27.571380 2127 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 20:26:27.623444 sudo[2180]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 20:26:27.623657 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 20:26:27.635712 kubelet[2127]: I0212 20:26:27.635679 2127 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:26:27.635712 kubelet[2127]: I0212 20:26:27.635699 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:26:27.635712 kubelet[2127]: I0212 20:26:27.635713 2127 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:26:27.635941 kubelet[2127]: I0212 20:26:27.635857 2127 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 20:26:27.635941 kubelet[2127]: I0212 20:26:27.635870 2127 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 20:26:27.635941 kubelet[2127]: I0212 20:26:27.635875 2127 policy_none.go:49] "None policy: Start"
Feb 12 20:26:27.636331 kubelet[2127]: I0212 20:26:27.636312 2127 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:26:27.636331 kubelet[2127]: I0212 20:26:27.636331 2127 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:26:27.636456 kubelet[2127]: I0212 20:26:27.636437 2127 state_mem.go:75] "Updated machine memory state"
Feb 12 20:26:27.644941 kubelet[2127]: I0212 20:26:27.644903 2127 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:26:27.645147 kubelet[2127]: I0212 20:26:27.645126 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:26:27.652741 kubelet[2127]: I0212 20:26:27.652705 2127 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 12 20:26:27.662228 kubelet[2127]: I0212 20:26:27.662199 2127 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb 12 20:26:27.662396 kubelet[2127]: I0212 20:26:27.662260 2127 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb 12 20:26:27.672223 kubelet[2127]: I0212 20:26:27.672187 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:27.672522 kubelet[2127]: I0212 20:26:27.672267 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:27.672522 kubelet[2127]: I0212 20:26:27.672294 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:27.827954 kubelet[2127]: I0212 20:26:27.827838 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 20:26:27.827954 kubelet[2127]: I0212 20:26:27.827905 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e61bfceaf3bc6dadf1999fc1fa06c931-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e61bfceaf3bc6dadf1999fc1fa06c931\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 20:26:27.828114 kubelet[2127]: I0212 20:26:27.827988 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e61bfceaf3bc6dadf1999fc1fa06c931-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61bfceaf3bc6dadf1999fc1fa06c931\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 20:26:27.828114 kubelet[2127]: I0212 20:26:27.828029 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e61bfceaf3bc6dadf1999fc1fa06c931-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e61bfceaf3bc6dadf1999fc1fa06c931\") " pod="kube-system/kube-apiserver-localhost"
Feb 12 20:26:27.828114 kubelet[2127]: I0212 20:26:27.828063 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 20:26:27.828114 kubelet[2127]: I0212 20:26:27.828101 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 20:26:27.828227 kubelet[2127]: I0212 20:26:27.828133 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 20:26:27.828227 kubelet[2127]: I0212 20:26:27.828161 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 12 20:26:27.828227 kubelet[2127]: I0212 20:26:27.828210 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 12 20:26:27.914877 kubelet[2127]: E0212 20:26:27.914836 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 20:26:27.978821 kubelet[2127]: E0212 20:26:27.978780 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:28.014534 kubelet[2127]: E0212 20:26:28.014496 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:28.115301 sudo[2180]: pam_unix(sudo:session): session closed for user root
Feb 12 20:26:28.216507 kubelet[2127]: E0212 20:26:28.216362 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:28.516549 kubelet[2127]: I0212 20:26:28.516425 2127 apiserver.go:52] "Watching apiserver"
Feb 12 20:26:28.527188 kubelet[2127]: I0212 20:26:28.527151 2127 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 20:26:28.533304 kubelet[2127]: I0212 20:26:28.533278 2127 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:26:28.938806 kubelet[2127]: E0212 20:26:28.938776 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 12 20:26:28.939292 kubelet[2127]: E0212 20:26:28.939275 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:29.113824 kubelet[2127]: E0212 20:26:29.113789 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 12 20:26:29.114097 kubelet[2127]: E0212 20:26:29.114075 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:29.209445 sudo[1308]: pam_unix(sudo:session): session closed for user root
Feb 12 20:26:29.210711 sshd[1302]: pam_unix(sshd:session): session closed for user core
Feb 12 20:26:29.212798 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:50248.service: Deactivated successfully.
Feb 12 20:26:29.213892 systemd-logind[1172]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:26:29.213909 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:26:29.214792 systemd-logind[1172]: Removed session 5.
Feb 12 20:26:29.368420 kubelet[2127]: E0212 20:26:29.368374 2127 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 12 20:26:29.368846 kubelet[2127]: E0212 20:26:29.368821 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:29.599075 kubelet[2127]: E0212 20:26:29.599046 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:29.599563 kubelet[2127]: E0212 20:26:29.599128 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:29.599563 kubelet[2127]: E0212 20:26:29.599214 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:29.915614 kubelet[2127]: I0212 20:26:29.915395 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.915308293 pod.CreationTimestamp="2024-02-12 20:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:29.519075116 +0000 UTC m=+2.085530988" watchObservedRunningTime="2024-02-12 20:26:29.915308293 +0000 UTC m=+2.481764186"
Feb 12 20:26:29.915614 kubelet[2127]: I0212 20:26:29.915542 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.915518271 pod.CreationTimestamp="2024-02-12 20:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:29.91507927 +0000 UTC m=+2.481535152" watchObservedRunningTime="2024-02-12 20:26:29.915518271 +0000 UTC m=+2.481974153"
Feb 12 20:26:30.600474 kubelet[2127]: E0212 20:26:30.600444 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:33.516122 kubelet[2127]: E0212 20:26:33.516085 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:33.542509 kubelet[2127]: I0212 20:26:33.542471 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.542431611 pod.CreationTimestamp="2024-02-12 20:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:30.314366592 +0000 UTC m=+2.880822504" watchObservedRunningTime="2024-02-12 20:26:33.542431611 +0000 UTC m=+6.108887513"
Feb 12 20:26:33.604947 kubelet[2127]: E0212 20:26:33.604910 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:34.605957 kubelet[2127]: E0212 20:26:34.605933 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:36.601030 kubelet[2127]: E0212 20:26:36.601003 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:36.608840 kubelet[2127]: E0212 20:26:36.608810 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:38.442643 kubelet[2127]: E0212 20:26:38.442611 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:38.611109 kubelet[2127]: E0212 20:26:38.611074 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:40.721922 kubelet[2127]: I0212 20:26:40.721886 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:40.724000 kubelet[2127]: I0212 20:26:40.723989 2127 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 20:26:40.724415 env[1189]: time="2024-02-12T20:26:40.724330045Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 20:26:40.724779 kubelet[2127]: I0212 20:26:40.724764 2127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 20:26:40.731944 kubelet[2127]: W0212 20:26:40.731907 2127 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Feb 12 20:26:40.731944 kubelet[2127]: E0212 20:26:40.731937 2127 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Feb 12 20:26:40.732149 kubelet[2127]: W0212 20:26:40.731965 2127 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Feb 12 20:26:40.732149 kubelet[2127]: E0212 20:26:40.731973 2127 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Feb 12 20:26:40.923370 kubelet[2127]: I0212 20:26:40.923324 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48ab623e-d593-463a-ae71-b59e46f49269-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-c6htr\" (UID: \"48ab623e-d593-463a-ae71-b59e46f49269\") " pod="kube-system/cilium-operator-f59cbd8c6-c6htr"
Feb 12 20:26:40.923567 kubelet[2127]: I0212 20:26:40.923385 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkm7p\" (UniqueName: \"kubernetes.io/projected/48ab623e-d593-463a-ae71-b59e46f49269-kube-api-access-pkm7p\") pod \"cilium-operator-f59cbd8c6-c6htr\" (UID: \"48ab623e-d593-463a-ae71-b59e46f49269\") " pod="kube-system/cilium-operator-f59cbd8c6-c6htr"
Feb 12 20:26:41.041047 kubelet[2127]: I0212 20:26:41.041016 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:41.045526 kubelet[2127]: I0212 20:26:41.045491 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:26:41.124667 kubelet[2127]: I0212 20:26:41.124626 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e0a1355-2448-4f82-bfe4-0a49ef11bae5-xtables-lock\") pod \"kube-proxy-n298n\" (UID: \"0e0a1355-2448-4f82-bfe4-0a49ef11bae5\") " pod="kube-system/kube-proxy-n298n"
Feb 12 20:26:41.124667 kubelet[2127]: I0212 20:26:41.124677 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lksqj\" (UniqueName: \"kubernetes.io/projected/0e0a1355-2448-4f82-bfe4-0a49ef11bae5-kube-api-access-lksqj\") pod \"kube-proxy-n298n\" (UID: \"0e0a1355-2448-4f82-bfe4-0a49ef11bae5\") " pod="kube-system/kube-proxy-n298n"
Feb 12 20:26:41.124889 kubelet[2127]: I0212 20:26:41.124703 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-run\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.124889 kubelet[2127]: I0212 20:26:41.124728 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e0a1355-2448-4f82-bfe4-0a49ef11bae5-lib-modules\") pod \"kube-proxy-n298n\" (UID: \"0e0a1355-2448-4f82-bfe4-0a49ef11bae5\") " pod="kube-system/kube-proxy-n298n"
Feb 12 20:26:41.124889 kubelet[2127]: I0212 20:26:41.124773 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e0a1355-2448-4f82-bfe4-0a49ef11bae5-kube-proxy\") pod \"kube-proxy-n298n\" (UID: \"0e0a1355-2448-4f82-bfe4-0a49ef11bae5\") " pod="kube-system/kube-proxy-n298n"
Feb 12 20:26:41.124889 kubelet[2127]: I0212 20:26:41.124814 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-bpf-maps\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225754 kubelet[2127]: I0212 20:26:41.225724 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bc5992b-016d-441e-9441-699452c72f58-clustermesh-secrets\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225994 kubelet[2127]: I0212 20:26:41.225772 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-xtables-lock\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225994 kubelet[2127]: I0212 20:26:41.225790 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-net\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225994 kubelet[2127]: I0212 20:26:41.225820 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-lib-modules\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225994 kubelet[2127]: I0212 20:26:41.225840 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-hubble-tls\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225994 kubelet[2127]: I0212 20:26:41.225859 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-etc-cni-netd\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.225994 kubelet[2127]: I0212 20:26:41.225878 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-kernel\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.226194 kubelet[2127]: I0212 20:26:41.225908 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-cgroup\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.226194 kubelet[2127]: I0212 20:26:41.225927 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bc5992b-016d-441e-9441-699452c72f58-cilium-config-path\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.226194 kubelet[2127]: I0212 20:26:41.225978 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-hostproc\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.226194 kubelet[2127]: I0212 20:26:41.226018 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cni-path\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.226194 kubelet[2127]: I0212 20:26:41.226045 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbpc\" (UniqueName: \"kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-kube-api-access-xzbpc\") pod \"cilium-9w8hd\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " pod="kube-system/cilium-9w8hd"
Feb 12 20:26:41.943986 kubelet[2127]: E0212 20:26:41.943947 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:26:41.944723 env[1189]: time="2024-02-12T20:26:41.944679155Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-proxy-n298n,Uid:0e0a1355-2448-4f82-bfe4-0a49ef11bae5,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:41.947862 kubelet[2127]: E0212 20:26:41.947840 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:41.948334 env[1189]: time="2024-02-12T20:26:41.948184306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9w8hd,Uid:7bc5992b-016d-441e-9441-699452c72f58,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:41.968839 env[1189]: time="2024-02-12T20:26:41.968780568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:41.968839 env[1189]: time="2024-02-12T20:26:41.968818338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:41.968839 env[1189]: time="2024-02-12T20:26:41.968828387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:41.969046 env[1189]: time="2024-02-12T20:26:41.968958843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97 pid=2244 runtime=io.containerd.runc.v2 Feb 12 20:26:41.973480 env[1189]: time="2024-02-12T20:26:41.973423452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:41.973606 env[1189]: time="2024-02-12T20:26:41.973566251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:41.973684 env[1189]: time="2024-02-12T20:26:41.973588563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:41.974368 env[1189]: time="2024-02-12T20:26:41.974333857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9456a8fee257cf802a06c8250be8438a0b43b035f2bb13bb488d3df59f8fb4b8 pid=2266 runtime=io.containerd.runc.v2 Feb 12 20:26:42.003272 env[1189]: time="2024-02-12T20:26:42.003224187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9w8hd,Uid:7bc5992b-016d-441e-9441-699452c72f58,Namespace:kube-system,Attempt:0,} returns sandbox id \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\"" Feb 12 20:26:42.003848 kubelet[2127]: E0212 20:26:42.003825 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:42.004967 env[1189]: time="2024-02-12T20:26:42.004942483Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:26:42.006142 env[1189]: time="2024-02-12T20:26:42.006103349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n298n,Uid:0e0a1355-2448-4f82-bfe4-0a49ef11bae5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9456a8fee257cf802a06c8250be8438a0b43b035f2bb13bb488d3df59f8fb4b8\"" Feb 12 20:26:42.006532 kubelet[2127]: E0212 20:26:42.006515 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:42.009587 env[1189]: time="2024-02-12T20:26:42.008400093Z" level=info msg="CreateContainer within sandbox 
\"9456a8fee257cf802a06c8250be8438a0b43b035f2bb13bb488d3df59f8fb4b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:26:42.230141 kubelet[2127]: E0212 20:26:42.229656 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:42.230393 env[1189]: time="2024-02-12T20:26:42.230352652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-c6htr,Uid:48ab623e-d593-463a-ae71-b59e46f49269,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:42.698920 env[1189]: time="2024-02-12T20:26:42.698849866Z" level=info msg="CreateContainer within sandbox \"9456a8fee257cf802a06c8250be8438a0b43b035f2bb13bb488d3df59f8fb4b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d61c31ab36ca14ca4f651c1be782c6b42ac48d2aefeddba692cabd11b22ad5b8\"" Feb 12 20:26:42.701125 env[1189]: time="2024-02-12T20:26:42.700504682Z" level=info msg="StartContainer for \"d61c31ab36ca14ca4f651c1be782c6b42ac48d2aefeddba692cabd11b22ad5b8\"" Feb 12 20:26:42.711498 env[1189]: time="2024-02-12T20:26:42.711360503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:42.711498 env[1189]: time="2024-02-12T20:26:42.711392633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:42.711498 env[1189]: time="2024-02-12T20:26:42.711401751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:42.711738 env[1189]: time="2024-02-12T20:26:42.711595155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0 pid=2342 runtime=io.containerd.runc.v2 Feb 12 20:26:42.751463 env[1189]: time="2024-02-12T20:26:42.751424991Z" level=info msg="StartContainer for \"d61c31ab36ca14ca4f651c1be782c6b42ac48d2aefeddba692cabd11b22ad5b8\" returns successfully" Feb 12 20:26:42.762699 env[1189]: time="2024-02-12T20:26:42.762664464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-c6htr,Uid:48ab623e-d593-463a-ae71-b59e46f49269,Namespace:kube-system,Attempt:0,} returns sandbox id \"b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0\"" Feb 12 20:26:42.765758 kubelet[2127]: E0212 20:26:42.765632 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:43.632069 kubelet[2127]: E0212 20:26:43.632033 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:43.641285 kubelet[2127]: I0212 20:26:43.640847 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n298n" podStartSLOduration=2.640819873 pod.CreationTimestamp="2024-02-12 20:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:43.640367912 +0000 UTC m=+16.206823795" watchObservedRunningTime="2024-02-12 20:26:43.640819873 +0000 UTC m=+16.207275745" Feb 12 20:26:44.636537 kubelet[2127]: E0212 20:26:44.636505 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:47.044697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418913705.mount: Deactivated successfully. Feb 12 20:26:51.323541 env[1189]: time="2024-02-12T20:26:51.323473940Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:51.325066 env[1189]: time="2024-02-12T20:26:51.325006611Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:51.326891 env[1189]: time="2024-02-12T20:26:51.326831090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:51.327507 env[1189]: time="2024-02-12T20:26:51.327479399Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:26:51.328252 env[1189]: time="2024-02-12T20:26:51.328221163Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:26:51.330450 env[1189]: time="2024-02-12T20:26:51.330380032Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:26:51.342886 env[1189]: time="2024-02-12T20:26:51.342841947Z" level=info msg="CreateContainer within 
sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\"" Feb 12 20:26:51.343319 env[1189]: time="2024-02-12T20:26:51.343290781Z" level=info msg="StartContainer for \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\"" Feb 12 20:26:51.379649 env[1189]: time="2024-02-12T20:26:51.379597558Z" level=info msg="StartContainer for \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\" returns successfully" Feb 12 20:26:51.649446 kubelet[2127]: E0212 20:26:51.648438 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:51.905217 env[1189]: time="2024-02-12T20:26:51.905092506Z" level=info msg="shim disconnected" id=b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518 Feb 12 20:26:51.905217 env[1189]: time="2024-02-12T20:26:51.905138202Z" level=warning msg="cleaning up after shim disconnected" id=b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518 namespace=k8s.io Feb 12 20:26:51.905217 env[1189]: time="2024-02-12T20:26:51.905147990Z" level=info msg="cleaning up dead shim" Feb 12 20:26:51.910988 env[1189]: time="2024-02-12T20:26:51.910944475Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2557 runtime=io.containerd.runc.v2\n" Feb 12 20:26:52.340787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518-rootfs.mount: Deactivated successfully. 
Feb 12 20:26:52.651372 kubelet[2127]: E0212 20:26:52.651212 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:52.653930 env[1189]: time="2024-02-12T20:26:52.653891433Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:26:52.721405 env[1189]: time="2024-02-12T20:26:52.721349319Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\"" Feb 12 20:26:52.721956 env[1189]: time="2024-02-12T20:26:52.721921184Z" level=info msg="StartContainer for \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\"" Feb 12 20:26:52.760018 env[1189]: time="2024-02-12T20:26:52.759960729Z" level=info msg="StartContainer for \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\" returns successfully" Feb 12 20:26:52.770998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:26:52.771602 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:26:52.771806 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:26:52.773773 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:26:52.782777 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 20:26:52.796684 env[1189]: time="2024-02-12T20:26:52.796625631Z" level=info msg="shim disconnected" id=075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962 Feb 12 20:26:52.796841 env[1189]: time="2024-02-12T20:26:52.796697096Z" level=warning msg="cleaning up after shim disconnected" id=075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962 namespace=k8s.io Feb 12 20:26:52.796841 env[1189]: time="2024-02-12T20:26:52.796711413Z" level=info msg="cleaning up dead shim" Feb 12 20:26:52.803727 env[1189]: time="2024-02-12T20:26:52.803678657Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2620 runtime=io.containerd.runc.v2\n" Feb 12 20:26:53.340039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962-rootfs.mount: Deactivated successfully. Feb 12 20:26:53.654115 kubelet[2127]: E0212 20:26:53.653976 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:53.656347 env[1189]: time="2024-02-12T20:26:53.656308525Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:26:53.850668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641007093.mount: Deactivated successfully. 
Feb 12 20:26:53.861938 env[1189]: time="2024-02-12T20:26:53.861890918Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\"" Feb 12 20:26:53.863943 env[1189]: time="2024-02-12T20:26:53.862741757Z" level=info msg="StartContainer for \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\"" Feb 12 20:26:53.904778 env[1189]: time="2024-02-12T20:26:53.904680741Z" level=info msg="StartContainer for \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\" returns successfully" Feb 12 20:26:53.934212 env[1189]: time="2024-02-12T20:26:53.934138840Z" level=info msg="shim disconnected" id=020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0 Feb 12 20:26:53.934212 env[1189]: time="2024-02-12T20:26:53.934207539Z" level=warning msg="cleaning up after shim disconnected" id=020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0 namespace=k8s.io Feb 12 20:26:53.934212 env[1189]: time="2024-02-12T20:26:53.934220713Z" level=info msg="cleaning up dead shim" Feb 12 20:26:53.940536 env[1189]: time="2024-02-12T20:26:53.940509242Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2676 runtime=io.containerd.runc.v2\n" Feb 12 20:26:54.582035 env[1189]: time="2024-02-12T20:26:54.581981574Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:54.583810 env[1189]: time="2024-02-12T20:26:54.583762892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:26:54.585121 env[1189]: time="2024-02-12T20:26:54.585092159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:54.585712 env[1189]: time="2024-02-12T20:26:54.585676297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:26:54.587167 env[1189]: time="2024-02-12T20:26:54.587132643Z" level=info msg="CreateContainer within sandbox \"b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:26:54.596714 env[1189]: time="2024-02-12T20:26:54.596681476Z" level=info msg="CreateContainer within sandbox \"b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\"" Feb 12 20:26:54.597185 env[1189]: time="2024-02-12T20:26:54.597136300Z" level=info msg="StartContainer for \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\"" Feb 12 20:26:54.631096 env[1189]: time="2024-02-12T20:26:54.631050496Z" level=info msg="StartContainer for \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" returns successfully" Feb 12 20:26:54.659514 kubelet[2127]: E0212 20:26:54.657396 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:54.659514 kubelet[2127]: E0212 20:26:54.658463 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:54.660454 env[1189]: time="2024-02-12T20:26:54.660419873Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:26:54.676206 kubelet[2127]: I0212 20:26:54.676154 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-c6htr" podStartSLOduration=-9.223372022178665e+09 pod.CreationTimestamp="2024-02-12 20:26:40 +0000 UTC" firstStartedPulling="2024-02-12 20:26:42.766219328 +0000 UTC m=+15.332675210" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:54.675868822 +0000 UTC m=+27.242324704" watchObservedRunningTime="2024-02-12 20:26:54.676110426 +0000 UTC m=+27.242566308" Feb 12 20:26:54.684501 env[1189]: time="2024-02-12T20:26:54.684456689Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\"" Feb 12 20:26:54.685435 env[1189]: time="2024-02-12T20:26:54.685398659Z" level=info msg="StartContainer for \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\"" Feb 12 20:26:54.743356 env[1189]: time="2024-02-12T20:26:54.743307131Z" level=info msg="StartContainer for \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\" returns successfully" Feb 12 20:26:54.978117 env[1189]: time="2024-02-12T20:26:54.977981633Z" level=info msg="shim disconnected" id=1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08 Feb 12 20:26:54.978117 env[1189]: time="2024-02-12T20:26:54.978026697Z" level=warning msg="cleaning up after shim disconnected" 
id=1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08 namespace=k8s.io Feb 12 20:26:54.978117 env[1189]: time="2024-02-12T20:26:54.978035574Z" level=info msg="cleaning up dead shim" Feb 12 20:26:54.988704 env[1189]: time="2024-02-12T20:26:54.988649177Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2762 runtime=io.containerd.runc.v2\n" Feb 12 20:26:55.661412 kubelet[2127]: E0212 20:26:55.661389 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:55.661776 kubelet[2127]: E0212 20:26:55.661563 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:55.663311 env[1189]: time="2024-02-12T20:26:55.663278854Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:26:56.164937 env[1189]: time="2024-02-12T20:26:56.164849398Z" level=info msg="CreateContainer within sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\"" Feb 12 20:26:56.165336 env[1189]: time="2024-02-12T20:26:56.165305796Z" level=info msg="StartContainer for \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\"" Feb 12 20:26:56.201369 env[1189]: time="2024-02-12T20:26:56.201323489Z" level=info msg="StartContainer for \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" returns successfully" Feb 12 20:26:56.332340 kubelet[2127]: I0212 20:26:56.332306 2127 kubelet_node_status.go:493] "Fast updating node status as it just 
became ready" Feb 12 20:26:56.524733 kubelet[2127]: I0212 20:26:56.524603 2127 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:56.526012 kubelet[2127]: I0212 20:26:56.525975 2127 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:56.629842 kubelet[2127]: I0212 20:26:56.629806 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70ac9896-e85c-4299-8901-d0169eb002e6-config-volume\") pod \"coredns-787d4945fb-h5v5v\" (UID: \"70ac9896-e85c-4299-8901-d0169eb002e6\") " pod="kube-system/coredns-787d4945fb-h5v5v" Feb 12 20:26:56.630088 kubelet[2127]: I0212 20:26:56.630074 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/874aab7f-f00d-42a0-a547-191f2081e494-config-volume\") pod \"coredns-787d4945fb-c2zt5\" (UID: \"874aab7f-f00d-42a0-a547-191f2081e494\") " pod="kube-system/coredns-787d4945fb-c2zt5" Feb 12 20:26:56.630285 kubelet[2127]: I0212 20:26:56.630270 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcn8f\" (UniqueName: \"kubernetes.io/projected/70ac9896-e85c-4299-8901-d0169eb002e6-kube-api-access-vcn8f\") pod \"coredns-787d4945fb-h5v5v\" (UID: \"70ac9896-e85c-4299-8901-d0169eb002e6\") " pod="kube-system/coredns-787d4945fb-h5v5v" Feb 12 20:26:56.630420 kubelet[2127]: I0212 20:26:56.630396 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5x9\" (UniqueName: \"kubernetes.io/projected/874aab7f-f00d-42a0-a547-191f2081e494-kube-api-access-5w5x9\") pod \"coredns-787d4945fb-c2zt5\" (UID: \"874aab7f-f00d-42a0-a547-191f2081e494\") " pod="kube-system/coredns-787d4945fb-c2zt5" Feb 12 20:26:56.665027 kubelet[2127]: E0212 20:26:56.665003 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:56.828412 kubelet[2127]: E0212 20:26:56.828381 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:56.829058 env[1189]: time="2024-02-12T20:26:56.829018247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-c2zt5,Uid:874aab7f-f00d-42a0-a547-191f2081e494,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:56.829403 kubelet[2127]: E0212 20:26:56.829246 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:56.829692 env[1189]: time="2024-02-12T20:26:56.829655524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-h5v5v,Uid:70ac9896-e85c-4299-8901-d0169eb002e6,Namespace:kube-system,Attempt:0,}" Feb 12 20:26:56.838430 kubelet[2127]: I0212 20:26:56.838391 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9w8hd" podStartSLOduration=-9.223372021016428e+09 pod.CreationTimestamp="2024-02-12 20:26:41 +0000 UTC" firstStartedPulling="2024-02-12 20:26:42.004587585 +0000 UTC m=+14.571043467" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:56.837556377 +0000 UTC m=+29.404012269" watchObservedRunningTime="2024-02-12 20:26:56.838347354 +0000 UTC m=+29.404803236" Feb 12 20:26:57.665944 kubelet[2127]: E0212 20:26:57.665907 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:58.135612 systemd-networkd[1084]: cilium_host: Link UP Feb 12 20:26:58.135725 systemd-networkd[1084]: cilium_net: Link UP Feb 12 20:26:58.136414 
systemd-networkd[1084]: cilium_net: Gained carrier Feb 12 20:26:58.136920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 20:26:58.136965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:26:58.137044 systemd-networkd[1084]: cilium_host: Gained carrier Feb 12 20:26:58.206498 systemd-networkd[1084]: cilium_vxlan: Link UP Feb 12 20:26:58.206509 systemd-networkd[1084]: cilium_vxlan: Gained carrier Feb 12 20:26:58.217314 systemd-networkd[1084]: cilium_net: Gained IPv6LL Feb 12 20:26:58.403207 kernel: NET: Registered PF_ALG protocol family Feb 12 20:26:58.529323 systemd-networkd[1084]: cilium_host: Gained IPv6LL Feb 12 20:26:58.667823 kubelet[2127]: E0212 20:26:58.667800 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:26:58.894570 systemd-networkd[1084]: lxc_health: Link UP Feb 12 20:26:58.906986 systemd-networkd[1084]: lxc_health: Gained carrier Feb 12 20:26:58.907223 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:26:59.109724 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:40830.service. Feb 12 20:26:59.140830 sshd[3283]: Accepted publickey for core from 10.0.0.1 port 40830 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:26:59.141886 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:59.145484 systemd-logind[1172]: New session 6 of user core. Feb 12 20:26:59.146485 systemd[1]: Started session-6.scope. Feb 12 20:26:59.272956 sshd[3283]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:59.274983 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:40830.service: Deactivated successfully. Feb 12 20:26:59.275936 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:26:59.276352 systemd-logind[1172]: Session 6 logged out. Waiting for processes to exit. 
Feb 12 20:26:59.277487 systemd-logind[1172]: Removed session 6. Feb 12 20:26:59.439319 systemd-networkd[1084]: lxcf1ab65a893fe: Link UP Feb 12 20:26:59.442454 systemd-networkd[1084]: lxcd5abedceb284: Link UP Feb 12 20:26:59.459203 kernel: eth0: renamed from tmpf01e5 Feb 12 20:26:59.466200 kernel: eth0: renamed from tmpae2c8 Feb 12 20:26:59.471976 systemd-networkd[1084]: lxcd5abedceb284: Gained carrier Feb 12 20:26:59.472254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd5abedceb284: link becomes ready Feb 12 20:26:59.474305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:59.474398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf1ab65a893fe: link becomes ready Feb 12 20:26:59.474505 systemd-networkd[1084]: lxcf1ab65a893fe: Gained carrier Feb 12 20:26:59.669496 kubelet[2127]: E0212 20:26:59.669463 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:00.056310 systemd-networkd[1084]: cilium_vxlan: Gained IPv6LL Feb 12 20:27:00.625301 systemd-networkd[1084]: lxcf1ab65a893fe: Gained IPv6LL Feb 12 20:27:00.670498 kubelet[2127]: E0212 20:27:00.670474 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:00.881296 systemd-networkd[1084]: lxc_health: Gained IPv6LL Feb 12 20:27:01.329300 systemd-networkd[1084]: lxcd5abedceb284: Gained IPv6LL Feb 12 20:27:01.671907 kubelet[2127]: E0212 20:27:01.671799 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:02.730188 env[1189]: time="2024-02-12T20:27:02.730095638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:02.730188 env[1189]: time="2024-02-12T20:27:02.730137487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:02.730188 env[1189]: time="2024-02-12T20:27:02.730147575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:02.730572 env[1189]: time="2024-02-12T20:27:02.730301975Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae2c8c3faf0fb5ba9ab6d33584a4d8cc71a62f99af52384ef99d1dd57eb3c6c0 pid=3340 runtime=io.containerd.runc.v2 Feb 12 20:27:02.742258 systemd[1]: run-containerd-runc-k8s.io-ae2c8c3faf0fb5ba9ab6d33584a4d8cc71a62f99af52384ef99d1dd57eb3c6c0-runc.SmkHjg.mount: Deactivated successfully. Feb 12 20:27:02.751099 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:27:02.773848 env[1189]: time="2024-02-12T20:27:02.773782098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:02.773998 env[1189]: time="2024-02-12T20:27:02.773857019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:02.773998 env[1189]: time="2024-02-12T20:27:02.773878649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:02.774237 env[1189]: time="2024-02-12T20:27:02.774204782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-h5v5v,Uid:70ac9896-e85c-4299-8901-d0169eb002e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae2c8c3faf0fb5ba9ab6d33584a4d8cc71a62f99af52384ef99d1dd57eb3c6c0\"" Feb 12 20:27:02.774323 env[1189]: time="2024-02-12T20:27:02.774228146Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f01e520b0c8cc1a3a43e867dc0c26cc0fcfb6ff3a9b1ba09267633e3ca9c1a87 pid=3382 runtime=io.containerd.runc.v2 Feb 12 20:27:02.774904 kubelet[2127]: E0212 20:27:02.774888 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:02.779971 env[1189]: time="2024-02-12T20:27:02.779933919Z" level=info msg="CreateContainer within sandbox \"ae2c8c3faf0fb5ba9ab6d33584a4d8cc71a62f99af52384ef99d1dd57eb3c6c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:27:02.797877 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:27:02.820457 env[1189]: time="2024-02-12T20:27:02.820418837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-c2zt5,Uid:874aab7f-f00d-42a0-a547-191f2081e494,Namespace:kube-system,Attempt:0,} returns sandbox id \"f01e520b0c8cc1a3a43e867dc0c26cc0fcfb6ff3a9b1ba09267633e3ca9c1a87\"" Feb 12 20:27:02.821033 kubelet[2127]: E0212 20:27:02.821011 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:02.822584 env[1189]: time="2024-02-12T20:27:02.822550891Z" level=info msg="CreateContainer within sandbox 
\"f01e520b0c8cc1a3a43e867dc0c26cc0fcfb6ff3a9b1ba09267633e3ca9c1a87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:27:02.881281 env[1189]: time="2024-02-12T20:27:02.881207460Z" level=info msg="CreateContainer within sandbox \"f01e520b0c8cc1a3a43e867dc0c26cc0fcfb6ff3a9b1ba09267633e3ca9c1a87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a89ff4591c31e33d054b421ca63173680caa403a42ceec1f83a472cb33c396cd\"" Feb 12 20:27:02.881736 env[1189]: time="2024-02-12T20:27:02.881708711Z" level=info msg="StartContainer for \"a89ff4591c31e33d054b421ca63173680caa403a42ceec1f83a472cb33c396cd\"" Feb 12 20:27:02.883119 env[1189]: time="2024-02-12T20:27:02.883080868Z" level=info msg="CreateContainer within sandbox \"ae2c8c3faf0fb5ba9ab6d33584a4d8cc71a62f99af52384ef99d1dd57eb3c6c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad9271d782a1d243053c71d978609facc953405884cfb664af3f7978901fc03e\"" Feb 12 20:27:02.883574 env[1189]: time="2024-02-12T20:27:02.883548095Z" level=info msg="StartContainer for \"ad9271d782a1d243053c71d978609facc953405884cfb664af3f7978901fc03e\"" Feb 12 20:27:02.934184 env[1189]: time="2024-02-12T20:27:02.934122478Z" level=info msg="StartContainer for \"ad9271d782a1d243053c71d978609facc953405884cfb664af3f7978901fc03e\" returns successfully" Feb 12 20:27:02.943134 env[1189]: time="2024-02-12T20:27:02.943035444Z" level=info msg="StartContainer for \"a89ff4591c31e33d054b421ca63173680caa403a42ceec1f83a472cb33c396cd\" returns successfully" Feb 12 20:27:03.675701 kubelet[2127]: E0212 20:27:03.675678 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:03.677331 kubelet[2127]: E0212 20:27:03.677289 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 12 20:27:03.688543 kubelet[2127]: I0212 20:27:03.688474 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-h5v5v" podStartSLOduration=23.688429102 pod.CreationTimestamp="2024-02-12 20:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:03.688189121 +0000 UTC m=+36.254645003" watchObservedRunningTime="2024-02-12 20:27:03.688429102 +0000 UTC m=+36.254884984" Feb 12 20:27:03.705092 kubelet[2127]: I0212 20:27:03.705066 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-c2zt5" podStartSLOduration=23.705021796 pod.CreationTimestamp="2024-02-12 20:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:03.704336629 +0000 UTC m=+36.270792501" watchObservedRunningTime="2024-02-12 20:27:03.705021796 +0000 UTC m=+36.271477678" Feb 12 20:27:04.275857 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:40842.service. Feb 12 20:27:04.305901 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 40842 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:04.306858 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:04.310061 systemd-logind[1172]: New session 7 of user core. Feb 12 20:27:04.311101 systemd[1]: Started session-7.scope. Feb 12 20:27:04.413950 sshd[3544]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:04.415874 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:40842.service: Deactivated successfully. Feb 12 20:27:04.416831 systemd-logind[1172]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:27:04.416849 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:27:04.417652 systemd-logind[1172]: Removed session 7. 
Feb 12 20:27:04.679018 kubelet[2127]: E0212 20:27:04.678993 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:04.679018 kubelet[2127]: E0212 20:27:04.679016 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:05.680598 kubelet[2127]: E0212 20:27:05.680568 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:05.681031 kubelet[2127]: E0212 20:27:05.680744 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:09.418034 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:40580.service. Feb 12 20:27:09.447518 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 40580 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:09.448513 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:09.451651 systemd-logind[1172]: New session 8 of user core. Feb 12 20:27:09.452407 systemd[1]: Started session-8.scope. Feb 12 20:27:09.560328 sshd[3559]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:09.562303 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:40580.service: Deactivated successfully. Feb 12 20:27:09.563430 systemd-logind[1172]: Session 8 logged out. Waiting for processes to exit. Feb 12 20:27:09.563476 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:27:09.564317 systemd-logind[1172]: Removed session 8. Feb 12 20:27:14.563228 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:59898.service. 
Feb 12 20:27:14.592259 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 59898 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:14.593317 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:14.596598 systemd-logind[1172]: New session 9 of user core. Feb 12 20:27:14.597510 systemd[1]: Started session-9.scope. Feb 12 20:27:14.700346 sshd[3576]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:14.702463 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:59898.service: Deactivated successfully. Feb 12 20:27:14.703823 systemd-logind[1172]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:27:14.703845 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 20:27:14.704470 systemd-logind[1172]: Removed session 9. Feb 12 20:27:19.703595 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:59906.service. Feb 12 20:27:19.733571 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 59906 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:19.734484 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:19.737638 systemd-logind[1172]: New session 10 of user core. Feb 12 20:27:19.738551 systemd[1]: Started session-10.scope. Feb 12 20:27:19.853552 sshd[3591]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:19.856128 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:59906.service: Deactivated successfully. Feb 12 20:27:19.857158 systemd-logind[1172]: Session 10 logged out. Waiting for processes to exit. Feb 12 20:27:19.857204 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 20:27:19.857942 systemd-logind[1172]: Removed session 10. Feb 12 20:27:24.857376 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:38568.service. 
Feb 12 20:27:24.887465 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 38568 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:24.888511 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:24.891959 systemd-logind[1172]: New session 11 of user core. Feb 12 20:27:24.892770 systemd[1]: Started session-11.scope. Feb 12 20:27:25.003438 sshd[3606]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:25.005776 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:38570.service. Feb 12 20:27:25.006424 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:38568.service: Deactivated successfully. Feb 12 20:27:25.007321 systemd-logind[1172]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:27:25.007473 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:27:25.008776 systemd-logind[1172]: Removed session 11. Feb 12 20:27:25.034720 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:25.035796 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:25.039047 systemd-logind[1172]: New session 12 of user core. Feb 12 20:27:25.039788 systemd[1]: Started session-12.scope. Feb 12 20:27:25.743555 sshd[3619]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:25.745733 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:38572.service. Feb 12 20:27:25.756455 systemd-logind[1172]: Session 12 logged out. Waiting for processes to exit. Feb 12 20:27:25.757577 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:38570.service: Deactivated successfully. Feb 12 20:27:25.758309 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 20:27:25.759334 systemd-logind[1172]: Removed session 12. 
Feb 12 20:27:25.784514 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:25.785511 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:25.788944 systemd-logind[1172]: New session 13 of user core. Feb 12 20:27:25.789851 systemd[1]: Started session-13.scope. Feb 12 20:27:25.898422 sshd[3631]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:25.900670 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:38572.service: Deactivated successfully. Feb 12 20:27:25.901605 systemd-logind[1172]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:27:25.901617 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:27:25.902637 systemd-logind[1172]: Removed session 13. Feb 12 20:27:30.901804 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:38578.service. Feb 12 20:27:30.930005 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 38578 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:30.930968 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:30.934340 systemd-logind[1172]: New session 14 of user core. Feb 12 20:27:30.935309 systemd[1]: Started session-14.scope. Feb 12 20:27:31.040160 sshd[3651]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:31.042549 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:38578.service: Deactivated successfully. Feb 12 20:27:31.043687 systemd-logind[1172]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:27:31.043751 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:27:31.044525 systemd-logind[1172]: Removed session 14. Feb 12 20:27:36.043276 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:57146.service. 
Feb 12 20:27:36.072997 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 57146 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:36.074056 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:36.077298 systemd-logind[1172]: New session 15 of user core. Feb 12 20:27:36.078081 systemd[1]: Started session-15.scope. Feb 12 20:27:36.176135 sshd[3665]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:36.178963 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:57156.service. Feb 12 20:27:36.179865 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:57146.service: Deactivated successfully. Feb 12 20:27:36.180496 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 20:27:36.181398 systemd-logind[1172]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:27:36.182074 systemd-logind[1172]: Removed session 15. Feb 12 20:27:36.208663 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 57156 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:36.209625 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:36.213043 systemd-logind[1172]: New session 16 of user core. Feb 12 20:27:36.213879 systemd[1]: Started session-16.scope. Feb 12 20:27:36.385741 sshd[3678]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:36.388140 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:57170.service. Feb 12 20:27:36.388582 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:57156.service: Deactivated successfully. Feb 12 20:27:36.389252 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:27:36.390425 systemd-logind[1172]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:27:36.391189 systemd-logind[1172]: Removed session 16. 
Feb 12 20:27:36.417659 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 57170 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:36.418599 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:36.421800 systemd-logind[1172]: New session 17 of user core. Feb 12 20:27:36.422510 systemd[1]: Started session-17.scope. Feb 12 20:27:37.247447 sshd[3691]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:37.249868 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:57182.service. Feb 12 20:27:37.250643 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:57170.service: Deactivated successfully. Feb 12 20:27:37.251750 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:27:37.252401 systemd-logind[1172]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:27:37.253522 systemd-logind[1172]: Removed session 17. Feb 12 20:27:37.287880 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 57182 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:37.289223 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:37.293342 systemd[1]: Started session-18.scope. Feb 12 20:27:37.293951 systemd-logind[1172]: New session 18 of user core. Feb 12 20:27:37.521860 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:57188.service. Feb 12 20:27:37.522727 sshd[3721]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:37.526471 systemd-logind[1172]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:27:37.526667 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:57182.service: Deactivated successfully. Feb 12 20:27:37.527721 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:27:37.528307 systemd-logind[1172]: Removed session 18. 
Feb 12 20:27:37.553658 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 57188 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:37.554852 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:37.558236 systemd-logind[1172]: New session 19 of user core. Feb 12 20:27:37.559033 systemd[1]: Started session-19.scope. Feb 12 20:27:37.733408 sshd[3772]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:37.735965 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:57188.service: Deactivated successfully. Feb 12 20:27:37.737160 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:27:37.737195 systemd-logind[1172]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:27:37.738198 systemd-logind[1172]: Removed session 19. Feb 12 20:27:40.573151 kubelet[2127]: E0212 20:27:40.573088 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:42.736375 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:57194.service. Feb 12 20:27:42.765548 sshd[3790]: Accepted publickey for core from 10.0.0.1 port 57194 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:42.766525 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:42.769943 systemd-logind[1172]: New session 20 of user core. Feb 12 20:27:42.770744 systemd[1]: Started session-20.scope. Feb 12 20:27:42.872945 sshd[3790]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:42.874950 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:57194.service: Deactivated successfully. Feb 12 20:27:42.875909 systemd-logind[1172]: Session 20 logged out. Waiting for processes to exit. Feb 12 20:27:42.875924 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:27:42.876629 systemd-logind[1172]: Removed session 20. 
Feb 12 20:27:45.572577 kubelet[2127]: E0212 20:27:45.572517 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:47.877419 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:38072.service. Feb 12 20:27:47.908775 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 38072 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:47.909741 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:47.912907 systemd-logind[1172]: New session 21 of user core. Feb 12 20:27:47.913874 systemd[1]: Started session-21.scope. Feb 12 20:27:48.012199 sshd[3833]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:48.014069 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:38072.service: Deactivated successfully. Feb 12 20:27:48.014960 systemd-logind[1172]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:27:48.014973 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:27:48.015725 systemd-logind[1172]: Removed session 21. Feb 12 20:27:49.572859 kubelet[2127]: E0212 20:27:49.572823 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:27:53.015294 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:38084.service. Feb 12 20:27:53.046238 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 38084 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:53.047441 sshd[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:53.051449 systemd-logind[1172]: New session 22 of user core. Feb 12 20:27:53.052333 systemd[1]: Started session-22.scope. 
Feb 12 20:27:53.158509 sshd[3847]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:53.161556 systemd[1]: sshd@21-10.0.0.91:22-10.0.0.1:38084.service: Deactivated successfully. Feb 12 20:27:53.162950 systemd-logind[1172]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:27:53.163028 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:27:53.164100 systemd-logind[1172]: Removed session 22. Feb 12 20:27:58.162480 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:43454.service. Feb 12 20:27:58.193822 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:58.195317 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:58.199381 systemd-logind[1172]: New session 23 of user core. Feb 12 20:27:58.200527 systemd[1]: Started session-23.scope. Feb 12 20:27:58.309358 sshd[3861]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:58.311837 systemd[1]: Started sshd@23-10.0.0.91:22-10.0.0.1:43462.service. Feb 12 20:27:58.312587 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:43454.service: Deactivated successfully. Feb 12 20:27:58.313428 systemd-logind[1172]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:27:58.313497 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:27:58.314414 systemd-logind[1172]: Removed session 23. Feb 12 20:27:58.343683 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 43462 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:27:58.344650 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:58.348082 systemd-logind[1172]: New session 24 of user core. Feb 12 20:27:58.349137 systemd[1]: Started session-24.scope. 
Feb 12 20:27:59.802471 env[1189]: time="2024-02-12T20:27:59.802424431Z" level=info msg="StopContainer for \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" with timeout 30 (s)" Feb 12 20:27:59.802857 env[1189]: time="2024-02-12T20:27:59.802823407Z" level=info msg="Stop container \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" with signal terminated" Feb 12 20:27:59.810554 systemd[1]: run-containerd-runc-k8s.io-8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de-runc.YBS8Ks.mount: Deactivated successfully. Feb 12 20:27:59.826668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322-rootfs.mount: Deactivated successfully. Feb 12 20:27:59.828034 env[1189]: time="2024-02-12T20:27:59.827975545Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:27:59.833417 env[1189]: time="2024-02-12T20:27:59.833376305Z" level=info msg="StopContainer for \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" with timeout 1 (s)" Feb 12 20:27:59.833707 env[1189]: time="2024-02-12T20:27:59.833680902Z" level=info msg="Stop container \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" with signal terminated" Feb 12 20:27:59.839473 systemd-networkd[1084]: lxc_health: Link DOWN Feb 12 20:27:59.839481 systemd-networkd[1084]: lxc_health: Lost carrier Feb 12 20:27:59.841382 env[1189]: time="2024-02-12T20:27:59.841151926Z" level=info msg="shim disconnected" id=bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322 Feb 12 20:27:59.841382 env[1189]: time="2024-02-12T20:27:59.841289277Z" level=warning msg="cleaning up after shim disconnected" id=bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322 
namespace=k8s.io Feb 12 20:27:59.841382 env[1189]: time="2024-02-12T20:27:59.841307632Z" level=info msg="cleaning up dead shim" Feb 12 20:27:59.848814 env[1189]: time="2024-02-12T20:27:59.848760352Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3933 runtime=io.containerd.runc.v2\n" Feb 12 20:27:59.852129 env[1189]: time="2024-02-12T20:27:59.852093451Z" level=info msg="StopContainer for \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" returns successfully" Feb 12 20:27:59.853862 env[1189]: time="2024-02-12T20:27:59.853835955Z" level=info msg="StopPodSandbox for \"b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0\"" Feb 12 20:27:59.853923 env[1189]: time="2024-02-12T20:27:59.853907751Z" level=info msg="Container to stop \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:59.855734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0-shm.mount: Deactivated successfully. 
Feb 12 20:27:59.885212 env[1189]: time="2024-02-12T20:27:59.885160515Z" level=info msg="shim disconnected" id=b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0 Feb 12 20:27:59.886211 env[1189]: time="2024-02-12T20:27:59.886129953Z" level=warning msg="cleaning up after shim disconnected" id=b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0 namespace=k8s.io Feb 12 20:27:59.886211 env[1189]: time="2024-02-12T20:27:59.886158397Z" level=info msg="cleaning up dead shim" Feb 12 20:27:59.893110 env[1189]: time="2024-02-12T20:27:59.893056385Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3982 runtime=io.containerd.runc.v2\n" Feb 12 20:27:59.893809 env[1189]: time="2024-02-12T20:27:59.893780859Z" level=info msg="TearDown network for sandbox \"b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0\" successfully" Feb 12 20:27:59.893872 env[1189]: time="2024-02-12T20:27:59.893811528Z" level=info msg="StopPodSandbox for \"b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0\" returns successfully" Feb 12 20:27:59.914111 env[1189]: time="2024-02-12T20:27:59.914067130Z" level=info msg="shim disconnected" id=8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de Feb 12 20:27:59.914111 env[1189]: time="2024-02-12T20:27:59.914107998Z" level=warning msg="cleaning up after shim disconnected" id=8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de namespace=k8s.io Feb 12 20:27:59.914235 env[1189]: time="2024-02-12T20:27:59.914118588Z" level=info msg="cleaning up dead shim" Feb 12 20:27:59.919646 env[1189]: time="2024-02-12T20:27:59.919610140Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3994 runtime=io.containerd.runc.v2\n" Feb 12 20:28:00.012301 env[1189]: time="2024-02-12T20:28:00.012259655Z" level=info msg="StopContainer for 
\"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" returns successfully" Feb 12 20:28:00.012678 env[1189]: time="2024-02-12T20:28:00.012659954Z" level=info msg="StopPodSandbox for \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\"" Feb 12 20:28:00.012760 env[1189]: time="2024-02-12T20:28:00.012714007Z" level=info msg="Container to stop \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:00.012760 env[1189]: time="2024-02-12T20:28:00.012725358Z" level=info msg="Container to stop \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:00.012760 env[1189]: time="2024-02-12T20:28:00.012734435Z" level=info msg="Container to stop \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:00.012760 env[1189]: time="2024-02-12T20:28:00.012744645Z" level=info msg="Container to stop \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:00.012760 env[1189]: time="2024-02-12T20:28:00.012753371Z" level=info msg="Container to stop \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:00.087073 kubelet[2127]: I0212 20:28:00.086969 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48ab623e-d593-463a-ae71-b59e46f49269-cilium-config-path\") pod \"48ab623e-d593-463a-ae71-b59e46f49269\" (UID: \"48ab623e-d593-463a-ae71-b59e46f49269\") " Feb 12 20:28:00.087073 kubelet[2127]: I0212 20:28:00.087022 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-pkm7p\" (UniqueName: \"kubernetes.io/projected/48ab623e-d593-463a-ae71-b59e46f49269-kube-api-access-pkm7p\") pod \"48ab623e-d593-463a-ae71-b59e46f49269\" (UID: \"48ab623e-d593-463a-ae71-b59e46f49269\") " Feb 12 20:28:00.087592 kubelet[2127]: W0212 20:28:00.087266 2127 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/48ab623e-d593-463a-ae71-b59e46f49269/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:28:00.088800 env[1189]: time="2024-02-12T20:28:00.088746810Z" level=info msg="shim disconnected" id=65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97 Feb 12 20:28:00.088904 env[1189]: time="2024-02-12T20:28:00.088797176Z" level=warning msg="cleaning up after shim disconnected" id=65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97 namespace=k8s.io Feb 12 20:28:00.088904 env[1189]: time="2024-02-12T20:28:00.088809639Z" level=info msg="cleaning up dead shim" Feb 12 20:28:00.089524 kubelet[2127]: I0212 20:28:00.089495 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ab623e-d593-463a-ae71-b59e46f49269-kube-api-access-pkm7p" (OuterVolumeSpecName: "kube-api-access-pkm7p") pod "48ab623e-d593-463a-ae71-b59e46f49269" (UID: "48ab623e-d593-463a-ae71-b59e46f49269"). InnerVolumeSpecName "kube-api-access-pkm7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:00.089524 kubelet[2127]: I0212 20:28:00.089507 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ab623e-d593-463a-ae71-b59e46f49269-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48ab623e-d593-463a-ae71-b59e46f49269" (UID: "48ab623e-d593-463a-ae71-b59e46f49269"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:28:00.094291 env[1189]: time="2024-02-12T20:28:00.094246625Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4027 runtime=io.containerd.runc.v2\n" Feb 12 20:28:00.094562 env[1189]: time="2024-02-12T20:28:00.094531063Z" level=info msg="TearDown network for sandbox \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" successfully" Feb 12 20:28:00.094609 env[1189]: time="2024-02-12T20:28:00.094561662Z" level=info msg="StopPodSandbox for \"65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97\" returns successfully" Feb 12 20:28:00.187602 kubelet[2127]: I0212 20:28:00.187568 2127 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-pkm7p\" (UniqueName: \"kubernetes.io/projected/48ab623e-d593-463a-ae71-b59e46f49269-kube-api-access-pkm7p\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.187602 kubelet[2127]: I0212 20:28:00.187598 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48ab623e-d593-463a-ae71-b59e46f49269-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.288313 kubelet[2127]: I0212 20:28:00.288279 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-bpf-maps\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288313 kubelet[2127]: I0212 20:28:00.288315 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cni-path\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288313 kubelet[2127]: I0212 20:28:00.288334 2127 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-run\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288548 kubelet[2127]: I0212 20:28:00.288354 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-net\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288548 kubelet[2127]: I0212 20:28:00.288372 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-cgroup\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288548 kubelet[2127]: I0212 20:28:00.288396 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bc5992b-016d-441e-9441-699452c72f58-clustermesh-secrets\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288548 kubelet[2127]: I0212 20:28:00.288395 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.288548 kubelet[2127]: I0212 20:28:00.288414 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzbpc\" (UniqueName: \"kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-kube-api-access-xzbpc\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288666 kubelet[2127]: I0212 20:28:00.288439 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.288666 kubelet[2127]: I0212 20:28:00.288471 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bc5992b-016d-441e-9441-699452c72f58-cilium-config-path\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288666 kubelet[2127]: I0212 20:28:00.288492 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-xtables-lock\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288666 kubelet[2127]: I0212 20:28:00.288508 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-lib-modules\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288666 kubelet[2127]: I0212 20:28:00.288503 2127 operation_generator.go:900] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cni-path" (OuterVolumeSpecName: "cni-path") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.288666 kubelet[2127]: I0212 20:28:00.288524 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-etc-cni-netd\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288531 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288545 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-kernel\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288563 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-hostproc\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288580 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-hubble-tls\") pod \"7bc5992b-016d-441e-9441-699452c72f58\" (UID: \"7bc5992b-016d-441e-9441-699452c72f58\") " Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288610 2127 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288619 2127 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.288804 kubelet[2127]: I0212 20:28:00.288628 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.288970 kubelet[2127]: I0212 20:28:00.288637 2127 reconciler_common.go:295] "Volume detached 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.288970 kubelet[2127]: W0212 20:28:00.288739 2127 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7bc5992b-016d-441e-9441-699452c72f58/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:28:00.288970 kubelet[2127]: I0212 20:28:00.288831 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.288970 kubelet[2127]: I0212 20:28:00.288865 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.288970 kubelet[2127]: I0212 20:28:00.288886 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.289082 kubelet[2127]: I0212 20:28:00.288913 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.289082 kubelet[2127]: I0212 20:28:00.288934 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.289128 kubelet[2127]: I0212 20:28:00.289115 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-hostproc" (OuterVolumeSpecName: "hostproc") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:00.290865 kubelet[2127]: I0212 20:28:00.290835 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:00.291024 kubelet[2127]: I0212 20:28:00.290939 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bc5992b-016d-441e-9441-699452c72f58-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:28:00.291081 kubelet[2127]: I0212 20:28:00.291057 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-kube-api-access-xzbpc" (OuterVolumeSpecName: "kube-api-access-xzbpc") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "kube-api-access-xzbpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:00.291603 kubelet[2127]: I0212 20:28:00.291574 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc5992b-016d-441e-9441-699452c72f58-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7bc5992b-016d-441e-9441-699452c72f58" (UID: "7bc5992b-016d-441e-9441-699452c72f58"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389017 2127 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bc5992b-016d-441e-9441-699452c72f58-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389058 2127 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-xzbpc\" (UniqueName: \"kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-kube-api-access-xzbpc\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389070 2127 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389079 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bc5992b-016d-441e-9441-699452c72f58-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389088 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389098 2127 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 kubelet[2127]: I0212 20:28:00.389106 2127 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bc5992b-016d-441e-9441-699452c72f58-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389206 
kubelet[2127]: I0212 20:28:00.389114 2127 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389618 kubelet[2127]: I0212 20:28:00.389122 2127 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.389618 kubelet[2127]: I0212 20:28:00.389130 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bc5992b-016d-441e-9441-699452c72f58-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:00.773650 kubelet[2127]: I0212 20:28:00.773546 2127 scope.go:115] "RemoveContainer" containerID="8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de" Feb 12 20:28:00.774939 env[1189]: time="2024-02-12T20:28:00.774887756Z" level=info msg="RemoveContainer for \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\"" Feb 12 20:28:00.778856 env[1189]: time="2024-02-12T20:28:00.778824408Z" level=info msg="RemoveContainer for \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" returns successfully" Feb 12 20:28:00.779117 kubelet[2127]: I0212 20:28:00.779080 2127 scope.go:115] "RemoveContainer" containerID="1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08" Feb 12 20:28:00.781800 env[1189]: time="2024-02-12T20:28:00.781753850Z" level=info msg="RemoveContainer for \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\"" Feb 12 20:28:00.785844 env[1189]: time="2024-02-12T20:28:00.785795832Z" level=info msg="RemoveContainer for \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\" returns successfully" Feb 12 20:28:00.786030 kubelet[2127]: I0212 20:28:00.786001 2127 scope.go:115] "RemoveContainer" 
containerID="020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0" Feb 12 20:28:00.787228 env[1189]: time="2024-02-12T20:28:00.787202649Z" level=info msg="RemoveContainer for \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\"" Feb 12 20:28:00.790708 env[1189]: time="2024-02-12T20:28:00.790632030Z" level=info msg="RemoveContainer for \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\" returns successfully" Feb 12 20:28:00.790993 kubelet[2127]: I0212 20:28:00.790963 2127 scope.go:115] "RemoveContainer" containerID="075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962" Feb 12 20:28:00.792551 env[1189]: time="2024-02-12T20:28:00.792500141Z" level=info msg="RemoveContainer for \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\"" Feb 12 20:28:00.796856 env[1189]: time="2024-02-12T20:28:00.796814859Z" level=info msg="RemoveContainer for \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\" returns successfully" Feb 12 20:28:00.797562 kubelet[2127]: I0212 20:28:00.797487 2127 scope.go:115] "RemoveContainer" containerID="b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518" Feb 12 20:28:00.798743 env[1189]: time="2024-02-12T20:28:00.798677951Z" level=info msg="RemoveContainer for \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\"" Feb 12 20:28:00.801652 env[1189]: time="2024-02-12T20:28:00.801622472Z" level=info msg="RemoveContainer for \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\" returns successfully" Feb 12 20:28:00.802208 kubelet[2127]: I0212 20:28:00.802167 2127 scope.go:115] "RemoveContainer" containerID="8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de" Feb 12 20:28:00.803338 env[1189]: time="2024-02-12T20:28:00.803256319Z" level=error msg="ContainerStatus for \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\": not found" Feb 12 20:28:00.803817 kubelet[2127]: E0212 20:28:00.803790 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\": not found" containerID="8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de" Feb 12 20:28:00.803946 kubelet[2127]: I0212 20:28:00.803841 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de} err="failed to get container status \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\": rpc error: code = NotFound desc = an error occurred when try to find container \"8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de\": not found" Feb 12 20:28:00.803946 kubelet[2127]: I0212 20:28:00.803861 2127 scope.go:115] "RemoveContainer" containerID="1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08" Feb 12 20:28:00.807411 env[1189]: time="2024-02-12T20:28:00.804019826Z" level=error msg="ContainerStatus for \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\": not found" Feb 12 20:28:00.806251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8042477bc3cfee6fa417855ab763f0641e874efa2f825d1ae1ce58e046c583de-rootfs.mount: Deactivated successfully. Feb 12 20:28:00.806424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b675eb5ba73959fe1f82b70ce36e7ecaf8aee86309721f5b185ef6430c21b7f0-rootfs.mount: Deactivated successfully. 
Feb 12 20:28:00.806629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97-rootfs.mount: Deactivated successfully. Feb 12 20:28:00.806757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65ae821067a90ee797bb98bd3a38b049e7a468f20cf45f1121c0e6fe25450a97-shm.mount: Deactivated successfully. Feb 12 20:28:00.806881 systemd[1]: var-lib-kubelet-pods-7bc5992b\x2d016d\x2d441e\x2d9441\x2d699452c72f58-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzbpc.mount: Deactivated successfully. Feb 12 20:28:00.807019 systemd[1]: var-lib-kubelet-pods-48ab623e\x2dd593\x2d463a\x2dae71\x2db59e46f49269-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpkm7p.mount: Deactivated successfully. Feb 12 20:28:00.807307 systemd[1]: var-lib-kubelet-pods-7bc5992b\x2d016d\x2d441e\x2d9441\x2d699452c72f58-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:28:00.807561 systemd[1]: var-lib-kubelet-pods-7bc5992b\x2d016d\x2d441e\x2d9441\x2d699452c72f58-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 20:28:00.809395 kubelet[2127]: E0212 20:28:00.809375 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\": not found" containerID="1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08" Feb 12 20:28:00.809574 kubelet[2127]: I0212 20:28:00.809553 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08} err="failed to get container status \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c3f6e3355ebf7cca98655504c627e59ce8cd153c32e70350e874deb6a82ce08\": not found" Feb 12 20:28:00.809650 kubelet[2127]: I0212 20:28:00.809579 2127 scope.go:115] "RemoveContainer" containerID="020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0" Feb 12 20:28:00.809959 env[1189]: time="2024-02-12T20:28:00.809867161Z" level=error msg="ContainerStatus for \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\": not found" Feb 12 20:28:00.810168 kubelet[2127]: E0212 20:28:00.810124 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\": not found" containerID="020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0" Feb 12 20:28:00.810319 kubelet[2127]: I0212 20:28:00.810307 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0} err="failed to get container status \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"020b8c998e88029e3bc795a4632512ee5452c86b1ae3bc648aad537ed63292e0\": not found" Feb 12 20:28:00.810438 kubelet[2127]: I0212 20:28:00.810417 2127 scope.go:115] "RemoveContainer" containerID="075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962" Feb 12 20:28:00.810658 env[1189]: time="2024-02-12T20:28:00.810609227Z" level=error msg="ContainerStatus for \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\": not found" Feb 12 20:28:00.810774 kubelet[2127]: E0212 20:28:00.810758 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\": not found" containerID="075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962" Feb 12 20:28:00.810857 kubelet[2127]: I0212 20:28:00.810790 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962} err="failed to get container status \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\": rpc error: code = NotFound desc = an error occurred when try to find container \"075b975f7af057059c380a8114a1daa700aa2d59e81b6bad81c5a4baa92b3962\": not found" Feb 12 20:28:00.810857 kubelet[2127]: I0212 20:28:00.810806 2127 scope.go:115] "RemoveContainer" containerID="b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518" Feb 12 20:28:00.811085 kubelet[2127]: E0212 20:28:00.811061 2127 remote_runtime.go:415] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\": not found" containerID="b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518" Feb 12 20:28:00.811085 kubelet[2127]: I0212 20:28:00.811084 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518} err="failed to get container status \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\": rpc error: code = NotFound desc = an error occurred when try to find container \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\": not found" Feb 12 20:28:00.811288 env[1189]: time="2024-02-12T20:28:00.810936227Z" level=error msg="ContainerStatus for \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b997c398dcf4c9ecfca19a1d6c8f03dba9eeae8ff54ea817319938a682f20518\": not found" Feb 12 20:28:00.811342 kubelet[2127]: I0212 20:28:00.811094 2127 scope.go:115] "RemoveContainer" containerID="bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322" Feb 12 20:28:00.812397 env[1189]: time="2024-02-12T20:28:00.812371888Z" level=info msg="RemoveContainer for \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\"" Feb 12 20:28:00.815807 env[1189]: time="2024-02-12T20:28:00.815777824Z" level=info msg="RemoveContainer for \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" returns successfully" Feb 12 20:28:00.815942 kubelet[2127]: I0212 20:28:00.815924 2127 scope.go:115] "RemoveContainer" containerID="bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322" Feb 12 20:28:00.816187 env[1189]: time="2024-02-12T20:28:00.816108781Z" level=error msg="ContainerStatus for 
\"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\": not found" Feb 12 20:28:00.816299 kubelet[2127]: E0212 20:28:00.816283 2127 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\": not found" containerID="bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322" Feb 12 20:28:00.816353 kubelet[2127]: I0212 20:28:00.816320 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322} err="failed to get container status \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdede83d6f51abfbcf496048afaebe4ff44b4900f1195baa853557def7ef8322\": not found" Feb 12 20:28:01.575049 kubelet[2127]: I0212 20:28:01.575012 2127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=48ab623e-d593-463a-ae71-b59e46f49269 path="/var/lib/kubelet/pods/48ab623e-d593-463a-ae71-b59e46f49269/volumes" Feb 12 20:28:01.575466 kubelet[2127]: I0212 20:28:01.575379 2127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7bc5992b-016d-441e-9441-699452c72f58 path="/var/lib/kubelet/pods/7bc5992b-016d-441e-9441-699452c72f58/volumes" Feb 12 20:28:01.646359 sshd[3873]: pam_unix(sshd:session): session closed for user core Feb 12 20:28:01.648669 systemd[1]: Started sshd@24-10.0.0.91:22-10.0.0.1:43476.service. Feb 12 20:28:01.649081 systemd[1]: sshd@23-10.0.0.91:22-10.0.0.1:43462.service: Deactivated successfully. Feb 12 20:28:01.649993 systemd[1]: session-24.scope: Deactivated successfully. 
Feb 12 20:28:01.650527 systemd-logind[1172]: Session 24 logged out. Waiting for processes to exit. Feb 12 20:28:01.651352 systemd-logind[1172]: Removed session 24. Feb 12 20:28:01.682145 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 43476 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:28:01.683270 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:28:01.686912 systemd-logind[1172]: New session 25 of user core. Feb 12 20:28:01.688000 systemd[1]: Started session-25.scope. Feb 12 20:28:02.111981 systemd[1]: Started sshd@25-10.0.0.91:22-10.0.0.1:43484.service. Feb 12 20:28:02.123437 sshd[4043]: pam_unix(sshd:session): session closed for user core Feb 12 20:28:02.123889 kubelet[2127]: I0212 20:28:02.123861 2127 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:28:02.124034 kubelet[2127]: E0212 20:28:02.124016 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bc5992b-016d-441e-9441-699452c72f58" containerName="mount-cgroup" Feb 12 20:28:02.124149 kubelet[2127]: E0212 20:28:02.124122 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bc5992b-016d-441e-9441-699452c72f58" containerName="apply-sysctl-overwrites" Feb 12 20:28:02.124257 kubelet[2127]: E0212 20:28:02.124241 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48ab623e-d593-463a-ae71-b59e46f49269" containerName="cilium-operator" Feb 12 20:28:02.124347 kubelet[2127]: E0212 20:28:02.124331 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bc5992b-016d-441e-9441-699452c72f58" containerName="clean-cilium-state" Feb 12 20:28:02.124433 kubelet[2127]: E0212 20:28:02.124417 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bc5992b-016d-441e-9441-699452c72f58" containerName="mount-bpf-fs" Feb 12 20:28:02.124525 kubelet[2127]: E0212 20:28:02.124508 2127 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="7bc5992b-016d-441e-9441-699452c72f58" containerName="cilium-agent" Feb 12 20:28:02.124637 kubelet[2127]: I0212 20:28:02.124621 2127 memory_manager.go:346] "RemoveStaleState removing state" podUID="48ab623e-d593-463a-ae71-b59e46f49269" containerName="cilium-operator" Feb 12 20:28:02.124726 kubelet[2127]: I0212 20:28:02.124710 2127 memory_manager.go:346] "RemoveStaleState removing state" podUID="7bc5992b-016d-441e-9441-699452c72f58" containerName="cilium-agent" Feb 12 20:28:02.129542 systemd-logind[1172]: Session 25 logged out. Waiting for processes to exit. Feb 12 20:28:02.136668 systemd[1]: sshd@24-10.0.0.91:22-10.0.0.1:43476.service: Deactivated successfully. Feb 12 20:28:02.137648 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 20:28:02.139133 systemd-logind[1172]: Removed session 25. Feb 12 20:28:02.163195 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 43484 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:28:02.164327 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:28:02.168188 systemd-logind[1172]: New session 26 of user core. Feb 12 20:28:02.168554 systemd[1]: Started session-26.scope. Feb 12 20:28:02.276197 sshd[4057]: pam_unix(sshd:session): session closed for user core Feb 12 20:28:02.278366 systemd[1]: Started sshd@26-10.0.0.91:22-10.0.0.1:43492.service. Feb 12 20:28:02.278769 systemd[1]: sshd@25-10.0.0.91:22-10.0.0.1:43484.service: Deactivated successfully. Feb 12 20:28:02.279640 systemd-logind[1172]: Session 26 logged out. Waiting for processes to exit. Feb 12 20:28:02.279773 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 20:28:02.281351 systemd-logind[1172]: Removed session 26. 
Feb 12 20:28:02.298301 kubelet[2127]: I0212 20:28:02.298279 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-bpf-maps\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298469 kubelet[2127]: I0212 20:28:02.298316 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-xtables-lock\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298469 kubelet[2127]: I0212 20:28:02.298340 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-etc-cni-netd\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298469 kubelet[2127]: I0212 20:28:02.298366 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hostproc\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298469 kubelet[2127]: I0212 20:28:02.298432 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-config-path\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298615 kubelet[2127]: I0212 20:28:02.298479 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-lib-modules\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298615 kubelet[2127]: I0212 20:28:02.298538 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-clustermesh-secrets\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298615 kubelet[2127]: I0212 20:28:02.298582 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hubble-tls\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298705 kubelet[2127]: I0212 20:28:02.298632 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hw4q\" (UniqueName: \"kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-kube-api-access-8hw4q\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298705 kubelet[2127]: I0212 20:28:02.298685 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-net\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298768 kubelet[2127]: I0212 20:28:02.298713 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-run\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298768 kubelet[2127]: I0212 20:28:02.298731 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-cgroup\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298768 kubelet[2127]: I0212 20:28:02.298747 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-ipsec-secrets\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298868 kubelet[2127]: I0212 20:28:02.298793 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cni-path\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.298868 kubelet[2127]: I0212 20:28:02.298826 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-kernel\") pod \"cilium-zb4mj\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " pod="kube-system/cilium-zb4mj" Feb 12 20:28:02.308248 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 43492 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:28:02.309227 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:28:02.312161 systemd-logind[1172]: New session 
27 of user core. Feb 12 20:28:02.312883 systemd[1]: Started session-27.scope. Feb 12 20:28:02.572596 kubelet[2127]: E0212 20:28:02.572564 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:28:02.662541 kubelet[2127]: E0212 20:28:02.662513 2127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:28:02.741313 kubelet[2127]: E0212 20:28:02.741282 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:28:02.741749 env[1189]: time="2024-02-12T20:28:02.741713559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zb4mj,Uid:b0ee0238-4754-4ebf-8a31-574c6afc6d28,Namespace:kube-system,Attempt:0,}" Feb 12 20:28:03.019393 env[1189]: time="2024-02-12T20:28:03.019263484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:28:03.019393 env[1189]: time="2024-02-12T20:28:03.019303179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:28:03.019560 env[1189]: time="2024-02-12T20:28:03.019316184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:28:03.019560 env[1189]: time="2024-02-12T20:28:03.019440108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462 pid=4096 runtime=io.containerd.runc.v2 Feb 12 20:28:03.048772 env[1189]: time="2024-02-12T20:28:03.047951960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zb4mj,Uid:b0ee0238-4754-4ebf-8a31-574c6afc6d28,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\"" Feb 12 20:28:03.048925 kubelet[2127]: E0212 20:28:03.048650 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:28:03.051561 env[1189]: time="2024-02-12T20:28:03.051509038Z" level=info msg="CreateContainer within sandbox \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:28:03.064062 env[1189]: time="2024-02-12T20:28:03.063967628Z" level=info msg="CreateContainer within sandbox \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726\"" Feb 12 20:28:03.064560 env[1189]: time="2024-02-12T20:28:03.064525406Z" level=info msg="StartContainer for \"24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726\"" Feb 12 20:28:03.106826 env[1189]: time="2024-02-12T20:28:03.106764421Z" level=info msg="StartContainer for \"24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726\" returns successfully" Feb 12 20:28:03.170734 env[1189]: time="2024-02-12T20:28:03.170671843Z" level=info msg="shim disconnected" 
id=24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726 Feb 12 20:28:03.170734 env[1189]: time="2024-02-12T20:28:03.170730504Z" level=warning msg="cleaning up after shim disconnected" id=24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726 namespace=k8s.io Feb 12 20:28:03.170734 env[1189]: time="2024-02-12T20:28:03.170742096Z" level=info msg="cleaning up dead shim" Feb 12 20:28:03.177547 env[1189]: time="2024-02-12T20:28:03.177507963Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\n" Feb 12 20:28:03.783294 env[1189]: time="2024-02-12T20:28:03.783246970Z" level=info msg="StopPodSandbox for \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\"" Feb 12 20:28:03.783794 env[1189]: time="2024-02-12T20:28:03.783304409Z" level=info msg="Container to stop \"24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:28:03.785209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462-shm.mount: Deactivated successfully. Feb 12 20:28:03.804944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462-rootfs.mount: Deactivated successfully. 
Feb 12 20:28:03.831327 env[1189]: time="2024-02-12T20:28:03.831273548Z" level=info msg="shim disconnected" id=6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462 Feb 12 20:28:03.831327 env[1189]: time="2024-02-12T20:28:03.831324134Z" level=warning msg="cleaning up after shim disconnected" id=6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462 namespace=k8s.io Feb 12 20:28:03.831501 env[1189]: time="2024-02-12T20:28:03.831335055Z" level=info msg="cleaning up dead shim" Feb 12 20:28:03.837214 env[1189]: time="2024-02-12T20:28:03.837155020Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4210 runtime=io.containerd.runc.v2\n" Feb 12 20:28:03.837511 env[1189]: time="2024-02-12T20:28:03.837481669Z" level=info msg="TearDown network for sandbox \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\" successfully" Feb 12 20:28:03.837560 env[1189]: time="2024-02-12T20:28:03.837512627Z" level=info msg="StopPodSandbox for \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\" returns successfully" Feb 12 20:28:04.010567 kubelet[2127]: I0212 20:28:04.010502 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-ipsec-secrets\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.010567 kubelet[2127]: I0212 20:28:04.010569 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hubble-tls\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011089 kubelet[2127]: I0212 20:28:04.010599 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-xtables-lock\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011089 kubelet[2127]: I0212 20:28:04.010620 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-kernel\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011089 kubelet[2127]: I0212 20:28:04.010640 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cni-path\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011089 kubelet[2127]: I0212 20:28:04.010660 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-cgroup\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011089 kubelet[2127]: I0212 20:28:04.010682 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-net\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011089 kubelet[2127]: I0212 20:28:04.010702 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-run\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011400 kubelet[2127]: I0212 20:28:04.010727 
2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-lib-modules\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011400 kubelet[2127]: I0212 20:28:04.010754 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hw4q\" (UniqueName: \"kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-kube-api-access-8hw4q\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011400 kubelet[2127]: I0212 20:28:04.010777 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-bpf-maps\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011400 kubelet[2127]: I0212 20:28:04.010800 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-etc-cni-netd\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011400 kubelet[2127]: I0212 20:28:04.010835 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hostproc\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011400 kubelet[2127]: I0212 20:28:04.010862 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-clustermesh-secrets\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: 
\"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011600 kubelet[2127]: I0212 20:28:04.010888 2127 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-config-path\") pod \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\" (UID: \"b0ee0238-4754-4ebf-8a31-574c6afc6d28\") " Feb 12 20:28:04.011600 kubelet[2127]: I0212 20:28:04.010888 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011600 kubelet[2127]: I0212 20:28:04.010913 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011600 kubelet[2127]: I0212 20:28:04.010937 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011600 kubelet[2127]: I0212 20:28:04.010950 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:04.011600 kubelet[2127]: I0212 20:28:04.010965 2127 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 20:28:04.011807 kubelet[2127]: I0212 20:28:04.010981 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011807 kubelet[2127]: I0212 20:28:04.010999 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011807 kubelet[2127]: I0212 20:28:04.011016 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011807 kubelet[2127]: I0212 20:28:04.011031 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011807 kubelet[2127]: I0212 20:28:04.011052 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011962 kubelet[2127]: I0212 20:28:04.011070 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011962 kubelet[2127]: I0212 20:28:04.011087 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:28:04.011962 kubelet[2127]: W0212 20:28:04.011415 2127 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b0ee0238-4754-4ebf-8a31-574c6afc6d28/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:28:04.013721 kubelet[2127]: I0212 20:28:04.013695 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-kube-api-access-8hw4q" (OuterVolumeSpecName: "kube-api-access-8hw4q") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "kube-api-access-8hw4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:04.014222 kubelet[2127]: I0212 20:28:04.014201 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:28:04.015218 systemd[1]: var-lib-kubelet-pods-b0ee0238\x2d4754\x2d4ebf\x2d8a31\x2d574c6afc6d28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hw4q.mount: Deactivated successfully. Feb 12 20:28:04.015405 kubelet[2127]: I0212 20:28:04.015341 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:28:04.015563 kubelet[2127]: I0212 20:28:04.015530 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:28:04.015702 kubelet[2127]: I0212 20:28:04.015681 2127 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0ee0238-4754-4ebf-8a31-574c6afc6d28" (UID: "b0ee0238-4754-4ebf-8a31-574c6afc6d28"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:28:04.017490 systemd[1]: var-lib-kubelet-pods-b0ee0238\x2d4754\x2d4ebf\x2d8a31\x2d574c6afc6d28-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:28:04.017591 systemd[1]: var-lib-kubelet-pods-b0ee0238\x2d4754\x2d4ebf\x2d8a31\x2d574c6afc6d28-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:28:04.017677 systemd[1]: var-lib-kubelet-pods-b0ee0238\x2d4754\x2d4ebf\x2d8a31\x2d574c6afc6d28-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111571 2127 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111602 2127 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111613 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111623 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111632 2127 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111640 2127 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111650 2127 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.111687 kubelet[2127]: I0212 20:28:04.111659 2127 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8hw4q\" (UniqueName: \"kubernetes.io/projected/b0ee0238-4754-4ebf-8a31-574c6afc6d28-kube-api-access-8hw4q\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.112130 kubelet[2127]: I0212 20:28:04.111668 2127 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.112130 kubelet[2127]: I0212 20:28:04.111676 2127 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.112130 kubelet[2127]: I0212 20:28:04.111684 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.112130 kubelet[2127]: I0212 20:28:04.111693 2127 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ee0238-4754-4ebf-8a31-574c6afc6d28-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.112130 kubelet[2127]: I0212 20:28:04.111701 2127 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0ee0238-4754-4ebf-8a31-574c6afc6d28-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 20:28:04.789331 kubelet[2127]: I0212 20:28:04.789297 2127 scope.go:115] "RemoveContainer" containerID="24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726"
Feb 12 20:28:04.790418 env[1189]: time="2024-02-12T20:28:04.790364658Z" level=info msg="RemoveContainer for \"24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726\""
Feb 12 20:28:04.832592 env[1189]: time="2024-02-12T20:28:04.832545398Z" level=info msg="RemoveContainer for \"24a2ac1d87a7302b7c1bc15339069d79b34602415b7fd2061e490a722b18f726\" returns successfully"
Feb 12 20:28:04.855335 kubelet[2127]: I0212 20:28:04.855268 2127 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:28:04.855335 kubelet[2127]: E0212 20:28:04.855341 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0ee0238-4754-4ebf-8a31-574c6afc6d28" containerName="mount-cgroup"
Feb 12 20:28:04.855581 kubelet[2127]: I0212 20:28:04.855373 2127 memory_manager.go:346] "RemoveStaleState removing state" podUID="b0ee0238-4754-4ebf-8a31-574c6afc6d28" containerName="mount-cgroup"
Feb 12 20:28:05.016692 kubelet[2127]: I0212 20:28:05.016639 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-bpf-maps\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.016692 kubelet[2127]: I0212 20:28:05.016696 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-etc-cni-netd\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017115 kubelet[2127]: I0212 20:28:05.016725 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d830da5e-0256-4f72-ae55-892be614ee8c-clustermesh-secrets\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017115 kubelet[2127]: I0212 20:28:05.016752 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-host-proc-sys-net\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017115 kubelet[2127]: I0212 20:28:05.016863 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-cilium-run\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017115 kubelet[2127]: I0212 20:28:05.016957 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d830da5e-0256-4f72-ae55-892be614ee8c-cilium-config-path\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017115 kubelet[2127]: I0212 20:28:05.017006 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d830da5e-0256-4f72-ae55-892be614ee8c-cilium-ipsec-secrets\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017115 kubelet[2127]: I0212 20:28:05.017048 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-hostproc\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017288 kubelet[2127]: I0212 20:28:05.017085 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-cni-path\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017288 kubelet[2127]: I0212 20:28:05.017145 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-xtables-lock\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017288 kubelet[2127]: I0212 20:28:05.017195 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d830da5e-0256-4f72-ae55-892be614ee8c-hubble-tls\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017288 kubelet[2127]: I0212 20:28:05.017232 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k868g\" (UniqueName: \"kubernetes.io/projected/d830da5e-0256-4f72-ae55-892be614ee8c-kube-api-access-k868g\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017379 kubelet[2127]: I0212 20:28:05.017299 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-lib-modules\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017379 kubelet[2127]: I0212 20:28:05.017349 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-cilium-cgroup\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.017379 kubelet[2127]: I0212 20:28:05.017376 2127 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d830da5e-0256-4f72-ae55-892be614ee8c-host-proc-sys-kernel\") pod \"cilium-jcmj4\" (UID: \"d830da5e-0256-4f72-ae55-892be614ee8c\") " pod="kube-system/cilium-jcmj4"
Feb 12 20:28:05.159124 kubelet[2127]: E0212 20:28:05.158954 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:05.159555 env[1189]: time="2024-02-12T20:28:05.159517347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jcmj4,Uid:d830da5e-0256-4f72-ae55-892be614ee8c,Namespace:kube-system,Attempt:0,}"
Feb 12 20:28:05.385858 env[1189]: time="2024-02-12T20:28:05.385641016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:28:05.385858 env[1189]: time="2024-02-12T20:28:05.385736787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:28:05.385858 env[1189]: time="2024-02-12T20:28:05.385763548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:28:05.386224 env[1189]: time="2024-02-12T20:28:05.386128338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca pid=4237 runtime=io.containerd.runc.v2
Feb 12 20:28:05.417425 env[1189]: time="2024-02-12T20:28:05.417316025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jcmj4,Uid:d830da5e-0256-4f72-ae55-892be614ee8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\""
Feb 12 20:28:05.418355 kubelet[2127]: E0212 20:28:05.418337 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:05.420081 env[1189]: time="2024-02-12T20:28:05.420045955Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:28:05.432960 env[1189]: time="2024-02-12T20:28:05.432914403Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7fbff3d8462f47e1dac2aae2a1ad4171e718de4c72b78b421cf52062f2ccffc\""
Feb 12 20:28:05.434715 env[1189]: time="2024-02-12T20:28:05.433716742Z" level=info msg="StartContainer for \"a7fbff3d8462f47e1dac2aae2a1ad4171e718de4c72b78b421cf52062f2ccffc\""
Feb 12 20:28:05.470410 env[1189]: time="2024-02-12T20:28:05.470366312Z" level=info msg="StartContainer for \"a7fbff3d8462f47e1dac2aae2a1ad4171e718de4c72b78b421cf52062f2ccffc\" returns successfully"
Feb 12 20:28:05.496187 env[1189]: time="2024-02-12T20:28:05.496119709Z" level=info msg="shim disconnected" id=a7fbff3d8462f47e1dac2aae2a1ad4171e718de4c72b78b421cf52062f2ccffc
Feb 12 20:28:05.496373 env[1189]: time="2024-02-12T20:28:05.496195312Z" level=warning msg="cleaning up after shim disconnected" id=a7fbff3d8462f47e1dac2aae2a1ad4171e718de4c72b78b421cf52062f2ccffc namespace=k8s.io
Feb 12 20:28:05.496373 env[1189]: time="2024-02-12T20:28:05.496204970Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:05.502532 env[1189]: time="2024-02-12T20:28:05.502470385Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4319 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:05.572769 env[1189]: time="2024-02-12T20:28:05.572716353Z" level=info msg="StopPodSandbox for \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\""
Feb 12 20:28:05.572964 env[1189]: time="2024-02-12T20:28:05.572830379Z" level=info msg="TearDown network for sandbox \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\" successfully"
Feb 12 20:28:05.572964 env[1189]: time="2024-02-12T20:28:05.572870415Z" level=info msg="StopPodSandbox for \"6b478e7e481d9c6b99375006324169e648d81467463d0bffa93b0d363b5a9462\" returns successfully"
Feb 12 20:28:05.574059 kubelet[2127]: I0212 20:28:05.574030 2127 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b0ee0238-4754-4ebf-8a31-574c6afc6d28 path="/var/lib/kubelet/pods/b0ee0238-4754-4ebf-8a31-574c6afc6d28/volumes"
Feb 12 20:28:05.792568 kubelet[2127]: E0212 20:28:05.792543 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:05.795312 env[1189]: time="2024-02-12T20:28:05.795272736Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:28:05.816043 env[1189]: time="2024-02-12T20:28:05.815969999Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06f5a0cf159c3a22e6d12ba61230a9fe5dc5f0178f1775af2b8681af216f7fd5\""
Feb 12 20:28:05.816609 env[1189]: time="2024-02-12T20:28:05.816568822Z" level=info msg="StartContainer for \"06f5a0cf159c3a22e6d12ba61230a9fe5dc5f0178f1775af2b8681af216f7fd5\""
Feb 12 20:28:05.856587 env[1189]: time="2024-02-12T20:28:05.856519443Z" level=info msg="StartContainer for \"06f5a0cf159c3a22e6d12ba61230a9fe5dc5f0178f1775af2b8681af216f7fd5\" returns successfully"
Feb 12 20:28:05.876645 env[1189]: time="2024-02-12T20:28:05.876583897Z" level=info msg="shim disconnected" id=06f5a0cf159c3a22e6d12ba61230a9fe5dc5f0178f1775af2b8681af216f7fd5
Feb 12 20:28:05.876645 env[1189]: time="2024-02-12T20:28:05.876629153Z" level=warning msg="cleaning up after shim disconnected" id=06f5a0cf159c3a22e6d12ba61230a9fe5dc5f0178f1775af2b8681af216f7fd5 namespace=k8s.io
Feb 12 20:28:05.876645 env[1189]: time="2024-02-12T20:28:05.876639462Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:05.882599 env[1189]: time="2024-02-12T20:28:05.882547841Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4382 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:06.797361 kubelet[2127]: E0212 20:28:06.797315 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:06.798866 env[1189]: time="2024-02-12T20:28:06.798821794Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:28:06.971183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1939318467.mount: Deactivated successfully.
Feb 12 20:28:07.023822 env[1189]: time="2024-02-12T20:28:07.023756485Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"429dd2f76d58a64f56f3029b796824e20353cd78852c7c47de3388b30bfdf34e\""
Feb 12 20:28:07.024236 env[1189]: time="2024-02-12T20:28:07.024214071Z" level=info msg="StartContainer for \"429dd2f76d58a64f56f3029b796824e20353cd78852c7c47de3388b30bfdf34e\""
Feb 12 20:28:07.062604 env[1189]: time="2024-02-12T20:28:07.062473954Z" level=info msg="StartContainer for \"429dd2f76d58a64f56f3029b796824e20353cd78852c7c47de3388b30bfdf34e\" returns successfully"
Feb 12 20:28:07.082836 env[1189]: time="2024-02-12T20:28:07.082776976Z" level=info msg="shim disconnected" id=429dd2f76d58a64f56f3029b796824e20353cd78852c7c47de3388b30bfdf34e
Feb 12 20:28:07.082836 env[1189]: time="2024-02-12T20:28:07.082824015Z" level=warning msg="cleaning up after shim disconnected" id=429dd2f76d58a64f56f3029b796824e20353cd78852c7c47de3388b30bfdf34e namespace=k8s.io
Feb 12 20:28:07.082836 env[1189]: time="2024-02-12T20:28:07.082833312Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:07.089355 env[1189]: time="2024-02-12T20:28:07.089317636Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4438 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:07.124345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-429dd2f76d58a64f56f3029b796824e20353cd78852c7c47de3388b30bfdf34e-rootfs.mount: Deactivated successfully.
Feb 12 20:28:07.663715 kubelet[2127]: E0212 20:28:07.663682 2127 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:28:07.801404 kubelet[2127]: E0212 20:28:07.801336 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:07.803485 env[1189]: time="2024-02-12T20:28:07.803445218Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:28:07.815262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873231895.mount: Deactivated successfully.
Feb 12 20:28:07.817745 env[1189]: time="2024-02-12T20:28:07.817685173Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac\""
Feb 12 20:28:07.818479 env[1189]: time="2024-02-12T20:28:07.818447315Z" level=info msg="StartContainer for \"d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac\""
Feb 12 20:28:07.861450 env[1189]: time="2024-02-12T20:28:07.861393449Z" level=info msg="StartContainer for \"d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac\" returns successfully"
Feb 12 20:28:07.878386 env[1189]: time="2024-02-12T20:28:07.878336951Z" level=info msg="shim disconnected" id=d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac
Feb 12 20:28:07.878386 env[1189]: time="2024-02-12T20:28:07.878385333Z" level=warning msg="cleaning up after shim disconnected" id=d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac namespace=k8s.io
Feb 12 20:28:07.878386 env[1189]: time="2024-02-12T20:28:07.878394921Z" level=info msg="cleaning up dead shim"
Feb 12 20:28:07.883711 env[1189]: time="2024-02-12T20:28:07.883669846Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:28:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4494 runtime=io.containerd.runc.v2\n"
Feb 12 20:28:08.124294 systemd[1]: run-containerd-runc-k8s.io-d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac-runc.8CNyCR.mount: Deactivated successfully.
Feb 12 20:28:08.124453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2aea237558c077c493eab2d517f79d0c2abcd45aa2e1806df7d838834eb33ac-rootfs.mount: Deactivated successfully.
Feb 12 20:28:08.804431 kubelet[2127]: E0212 20:28:08.804405 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:08.807062 env[1189]: time="2024-02-12T20:28:08.806956323Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:28:08.821135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160401094.mount: Deactivated successfully.
Feb 12 20:28:08.822634 env[1189]: time="2024-02-12T20:28:08.822596854Z" level=info msg="CreateContainer within sandbox \"186c14a5556f92b8d6e1c8b5da7af087fb8e6a8ca1205af2da4d9be6e04cfbca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"903b93b00ccb4656ecbe3d66198cf50471503bf5dfad734562686e5d93382138\""
Feb 12 20:28:08.823121 env[1189]: time="2024-02-12T20:28:08.823090749Z" level=info msg="StartContainer for \"903b93b00ccb4656ecbe3d66198cf50471503bf5dfad734562686e5d93382138\""
Feb 12 20:28:08.863221 env[1189]: time="2024-02-12T20:28:08.863155655Z" level=info msg="StartContainer for \"903b93b00ccb4656ecbe3d66198cf50471503bf5dfad734562686e5d93382138\" returns successfully"
Feb 12 20:28:09.128200 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:28:09.785466 kubelet[2127]: I0212 20:28:09.785438 2127 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:28:09.785394249 +0000 UTC m=+102.351850131 LastTransitionTime:2024-02-12 20:28:09.785394249 +0000 UTC m=+102.351850131 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 20:28:09.808808 kubelet[2127]: E0212 20:28:09.808777 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:10.810582 kubelet[2127]: E0212 20:28:10.810543 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:11.546668 systemd-networkd[1084]: lxc_health: Link UP
Feb 12 20:28:11.564878 systemd-networkd[1084]: lxc_health: Gained carrier
Feb 12 20:28:11.565231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:28:11.811983 kubelet[2127]: E0212 20:28:11.811879 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:13.161384 kubelet[2127]: E0212 20:28:13.161345 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:13.174498 kubelet[2127]: I0212 20:28:13.174465 2127 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jcmj4" podStartSLOduration=9.174425241 pod.CreationTimestamp="2024-02-12 20:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:28:09.841625392 +0000 UTC m=+102.408081274" watchObservedRunningTime="2024-02-12 20:28:13.174425241 +0000 UTC m=+105.740881123"
Feb 12 20:28:13.265312 systemd-networkd[1084]: lxc_health: Gained IPv6LL
Feb 12 20:28:13.815050 kubelet[2127]: E0212 20:28:13.815009 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:14.816632 kubelet[2127]: E0212 20:28:14.816603 2127 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:28:15.287582 systemd[1]: run-containerd-runc-k8s.io-903b93b00ccb4656ecbe3d66198cf50471503bf5dfad734562686e5d93382138-runc.yFxlqy.mount: Deactivated successfully.
Feb 12 20:28:17.414960 sshd[4072]: pam_unix(sshd:session): session closed for user core
Feb 12 20:28:17.417467 systemd[1]: sshd@26-10.0.0.91:22-10.0.0.1:43492.service: Deactivated successfully.
Feb 12 20:28:17.418359 systemd[1]: session-27.scope: Deactivated successfully.
Feb 12 20:28:17.418361 systemd-logind[1172]: Session 27 logged out. Waiting for processes to exit.
Feb 12 20:28:17.419287 systemd-logind[1172]: Removed session 27.