Feb 8 23:28:05.789247 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:28:05.789272 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:28:05.789283 kernel: BIOS-provided physical RAM map: Feb 8 23:28:05.789290 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 8 23:28:05.789297 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 8 23:28:05.789305 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 8 23:28:05.789314 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Feb 8 23:28:05.789322 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Feb 8 23:28:05.789332 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 8 23:28:05.789340 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 8 23:28:05.789347 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 8 23:28:05.789355 kernel: NX (Execute Disable) protection: active Feb 8 23:28:05.789362 kernel: SMBIOS 2.8 present. Feb 8 23:28:05.789370 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 8 23:28:05.789382 kernel: Hypervisor detected: KVM Feb 8 23:28:05.789391 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 8 23:28:05.789399 kernel: kvm-clock: cpu 0, msr 36faa001, primary cpu clock Feb 8 23:28:05.789407 kernel: kvm-clock: using sched offset of 2270303058 cycles Feb 8 23:28:05.789416 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 8 23:28:05.789424 kernel: tsc: Detected 2794.750 MHz processor Feb 8 23:28:05.789433 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:28:05.789442 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:28:05.789450 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Feb 8 23:28:05.789461 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:28:05.789469 kernel: Using GB pages for direct mapping Feb 8 23:28:05.789477 kernel: ACPI: Early table checksum verification disabled Feb 8 23:28:05.789485 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Feb 8 23:28:05.789494 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:28:05.789503 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:28:05.789511 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:28:05.789519 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 8 23:28:05.789528 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:28:05.789539 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:28:05.789547 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:28:05.789556 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Feb 8 23:28:05.789565 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Feb 8 23:28:05.789573 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 8 23:28:05.789582 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Feb 8 23:28:05.789590 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Feb 8 23:28:05.789599 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Feb 8 23:28:05.789613 kernel: No NUMA configuration found Feb 8 23:28:05.789622 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Feb 8 23:28:05.789644 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Feb 8 23:28:05.789654 kernel: Zone ranges: Feb 8 23:28:05.789664 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:28:05.789673 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Feb 8 23:28:05.789684 kernel: Normal empty Feb 8 23:28:05.789692 kernel: Movable zone start for each node Feb 8 23:28:05.789700 kernel: Early memory node ranges Feb 8 23:28:05.789709 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 8 23:28:05.789718 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Feb 8 23:28:05.789727 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Feb 8 23:28:05.789737 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:28:05.789746 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 8 23:28:05.789755 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Feb 8 23:28:05.789766 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 8 23:28:05.789776 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 8 23:28:05.789796 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:28:05.789809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 8 23:28:05.789819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 8 23:28:05.789828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 8 23:28:05.789838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 8 23:28:05.789847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 8 23:28:05.789856 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 8 23:28:05.789867 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 8 23:28:05.789876 kernel: TSC deadline timer available Feb 8 23:28:05.789885 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 8 23:28:05.789895 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 8 23:28:05.789904 kernel: kvm-guest: setup PV sched yield Feb 8 23:28:05.789914 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Feb 8 23:28:05.789923 kernel: Booting paravirtualized kernel on KVM Feb 8 23:28:05.789939 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:28:05.789957 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 8 23:28:05.789966 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 8 23:28:05.789986 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 8 23:28:05.789995 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 8 23:28:05.790003 kernel: kvm-guest: setup async PF for cpu 0 Feb 8 23:28:05.790019 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Feb 8 23:28:05.790031 kernel: kvm-guest: PV spinlocks enabled Feb 8 23:28:05.790041 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 8 
23:28:05.790050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Feb 8 23:28:05.790060 kernel: Policy zone: DMA32 Feb 8 23:28:05.790079 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:28:05.790095 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 8 23:28:05.790106 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 8 23:28:05.790117 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 8 23:28:05.790127 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:28:05.790147 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved) Feb 8 23:28:05.790157 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 8 23:28:05.790166 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:28:05.790176 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:28:05.790187 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:28:05.790206 kernel: rcu: RCU event tracing is enabled. Feb 8 23:28:05.790216 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 8 23:28:05.790226 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:28:05.790235 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:28:05.790253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 8 23:28:05.790263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 8 23:28:05.790273 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 8 23:28:05.790282 kernel: random: crng init done Feb 8 23:28:05.790293 kernel: Console: colour VGA+ 80x25 Feb 8 23:28:05.790311 kernel: printk: console [ttyS0] enabled Feb 8 23:28:05.790321 kernel: ACPI: Core revision 20210730 Feb 8 23:28:05.790331 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 8 23:28:05.790341 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:28:05.790350 kernel: x2apic enabled Feb 8 23:28:05.790359 kernel: Switched APIC routing to physical x2apic. Feb 8 23:28:05.790369 kernel: kvm-guest: setup PV IPIs Feb 8 23:28:05.790378 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 8 23:28:05.790389 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 8 23:28:05.790399 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 8 23:28:05.790417 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 8 23:28:05.790427 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 8 23:28:05.790437 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 8 23:28:05.790446 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:28:05.790456 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:28:05.790466 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:28:05.790475 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:28:05.790492 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 8 23:28:05.790502 kernel: RETBleed: Mitigation: untrained return thunk Feb 8 23:28:05.790521 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 8 23:28:05.790533 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 8 23:28:05.790543 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 8 23:28:05.790553 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 8 23:28:05.790563 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 8 23:28:05.790573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 8 23:28:05.790584 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 8 23:28:05.790595 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:28:05.790614 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:28:05.790624 kernel: LSM: Security Framework initializing Feb 8 23:28:05.790644 kernel: SELinux: Initializing. Feb 8 23:28:05.790654 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 8 23:28:05.790664 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 8 23:28:05.790674 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 8 23:28:05.790686 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 8 23:28:05.790695 kernel: ... version: 0 Feb 8 23:28:05.790704 kernel: ... bit width: 48 Feb 8 23:28:05.790720 kernel: ... generic registers: 6 Feb 8 23:28:05.790733 kernel: ... value mask: 0000ffffffffffff Feb 8 23:28:05.790742 kernel: ... max period: 00007fffffffffff Feb 8 23:28:05.790751 kernel: ... fixed-purpose events: 0 Feb 8 23:28:05.790760 kernel: ... event mask: 000000000000003f Feb 8 23:28:05.790769 kernel: signal: max sigframe size: 1776 Feb 8 23:28:05.790781 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:28:05.790789 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:28:05.790807 kernel: x86: Booting SMP configuration: Feb 8 23:28:05.790816 kernel: .... 
node #0, CPUs: #1 Feb 8 23:28:05.790825 kernel: kvm-clock: cpu 1, msr 36faa041, secondary cpu clock Feb 8 23:28:05.790833 kernel: kvm-guest: setup async PF for cpu 1 Feb 8 23:28:05.790841 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Feb 8 23:28:05.790850 kernel: #2 Feb 8 23:28:05.790859 kernel: kvm-clock: cpu 2, msr 36faa081, secondary cpu clock Feb 8 23:28:05.790867 kernel: kvm-guest: setup async PF for cpu 2 Feb 8 23:28:05.790887 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Feb 8 23:28:05.790896 kernel: #3 Feb 8 23:28:05.790904 kernel: kvm-clock: cpu 3, msr 36faa0c1, secondary cpu clock Feb 8 23:28:05.790913 kernel: kvm-guest: setup async PF for cpu 3 Feb 8 23:28:05.790922 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Feb 8 23:28:05.790931 kernel: smp: Brought up 1 node, 4 CPUs Feb 8 23:28:05.790949 kernel: smpboot: Max logical packages: 1 Feb 8 23:28:05.790958 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 8 23:28:05.790967 kernel: devtmpfs: initialized Feb 8 23:28:05.790994 kernel: x86/mm: Memory block size: 128MB Feb 8 23:28:05.791004 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:28:05.791014 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 8 23:28:05.791023 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:28:05.791032 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:28:05.791050 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:28:05.791059 kernel: audit: type=2000 audit(1707434885.881:1): state=initialized audit_enabled=0 res=1 Feb 8 23:28:05.791068 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:28:05.791077 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:28:05.791096 kernel: cpuidle: using governor menu Feb 8 23:28:05.791105 kernel: ACPI: bus type PCI registered Feb 8 23:28:05.791115 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:28:05.791123 kernel: dca service started, version 1.12.1 Feb 8 23:28:05.791138 kernel: PCI: Using configuration type 1 for base access Feb 8 23:28:05.791150 kernel: PCI: Using configuration type 1 for extended access Feb 8 23:28:05.791159 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 8 23:28:05.791168 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 8 23:28:05.791177 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:28:05.791197 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:28:05.791206 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:28:05.791215 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:28:05.791230 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:28:05.791243 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:28:05.791252 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:28:05.791262 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:28:05.791280 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:28:05.791290 kernel: ACPI: Interpreter enabled Feb 8 23:28:05.791301 kernel: ACPI: PM: (supports S0 S3 S5) Feb 8 23:28:05.791310 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:28:05.791320 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:28:05.791338 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 8 23:28:05.791348 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 8 23:28:05.791522 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 8 23:28:05.791539 kernel: acpiphp: Slot [3] registered Feb 8 23:28:05.791548 kernel: acpiphp: Slot [4] registered Feb 8 23:28:05.791569 kernel: acpiphp: Slot [5] registered Feb 8 23:28:05.791578 kernel: acpiphp: Slot [6] registered Feb 8 23:28:05.791587 kernel: acpiphp: Slot [7] registered Feb 8 23:28:05.791596 kernel: acpiphp: Slot [8] registered Feb 8 23:28:05.791615 kernel: acpiphp: Slot [9] registered Feb 8 23:28:05.791624 kernel: acpiphp: Slot [10] registered Feb 8 23:28:05.791643 kernel: acpiphp: Slot [11] registered Feb 8 23:28:05.791653 kernel: acpiphp: Slot [12] registered Feb 8 23:28:05.791662 kernel: acpiphp: Slot [13] registered Feb 8 23:28:05.791671 kernel: acpiphp: Slot [14] registered Feb 8 23:28:05.791683 kernel: acpiphp: Slot [15] registered Feb 8 23:28:05.791692 kernel: acpiphp: Slot [16] registered Feb 8 23:28:05.791701 kernel: acpiphp: Slot [17] registered Feb 8 23:28:05.791710 kernel: acpiphp: Slot [18] registered Feb 8 23:28:05.791719 kernel: acpiphp: Slot [19] registered Feb 8 23:28:05.791728 kernel: acpiphp: Slot [20] registered Feb 8 23:28:05.791737 kernel: acpiphp: Slot [21] registered Feb 8 23:28:05.791746 kernel: acpiphp: Slot [22] registered Feb 8 23:28:05.791755 kernel: acpiphp: Slot [23] registered Feb 8 23:28:05.791766 kernel: acpiphp: Slot [24] registered Feb 8 23:28:05.791775 kernel: acpiphp: Slot [25] registered Feb 8 23:28:05.791784 kernel: acpiphp: Slot [26] registered Feb 8 23:28:05.791793 kernel: acpiphp: Slot [27] registered Feb 8 23:28:05.791812 kernel: acpiphp: Slot [28] registered Feb 8 23:28:05.791821 kernel: acpiphp: Slot [29] registered Feb 8 23:28:05.791830 kernel: acpiphp: Slot [30] registered Feb 8 23:28:05.791840 kernel: acpiphp: Slot [31] registered Feb 8 23:28:05.791857 kernel: PCI host bridge to bus 0000:00 Feb 8 23:28:05.791958 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 8 23:28:05.792051 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 8 23:28:05.792128 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 8 23:28:05.792205 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Feb 8 23:28:05.792287 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Feb 8 23:28:05.792408 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 8 23:28:05.792536 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 8 23:28:05.792678 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 8 23:28:05.792804 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 8 23:28:05.792899 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 8 23:28:05.793005 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 8 23:28:05.793099 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 8 23:28:05.793192 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 8 23:28:05.793283 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 8 23:28:05.793386 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 8 23:28:05.793477 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 8 23:28:05.793565 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 8 23:28:05.793675 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 8 23:28:05.793771 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 8 23:28:05.793860 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 8 23:28:05.793952 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 8 23:28:05.794051 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 8 23:28:05.794212 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 8 23:28:05.794311 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Feb 8 23:28:05.794406 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 8 23:28:05.794499 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 8 23:28:05.794600 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 8 23:28:05.794723 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 8 23:28:05.794814 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 8 23:28:05.794902 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 8 23:28:05.795008 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 8 23:28:05.795098 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 8 23:28:05.795209 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 8 23:28:05.795302 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 8 23:28:05.795382 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 8 23:28:05.795393 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 8 23:28:05.795401 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 8 23:28:05.795410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 8 23:28:05.795418 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 8 23:28:05.795427 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 8 23:28:05.795435 kernel: iommu: Default domain type: Translated Feb 8 23:28:05.795443 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 8 23:28:05.795576 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 8 23:28:05.795695 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 8 23:28:05.795789 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 8 23:28:05.795802 kernel: 
vgaarb: loaded Feb 8 23:28:05.795812 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:28:05.795822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 8 23:28:05.795832 kernel: PTP clock support registered Feb 8 23:28:05.795842 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:28:05.795852 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 8 23:28:05.795864 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 8 23:28:05.795874 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Feb 8 23:28:05.795883 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 8 23:28:05.795893 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 8 23:28:05.795903 kernel: clocksource: Switched to clocksource kvm-clock Feb 8 23:28:05.795912 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:28:05.795922 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:28:05.795932 kernel: pnp: PnP ACPI init Feb 8 23:28:05.796050 kernel: pnp 00:02: [dma 2] Feb 8 23:28:05.796068 kernel: pnp: PnP ACPI: found 6 devices Feb 8 23:28:05.796078 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:28:05.796088 kernel: NET: Registered PF_INET protocol family Feb 8 23:28:05.796098 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 8 23:28:05.796108 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 8 23:28:05.796118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:28:05.796128 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 8 23:28:05.796138 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 8 23:28:05.796149 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 8 23:28:05.796161 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 8 23:28:05.796171 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 8 23:28:05.796181 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:28:05.796191 kernel: NET: Registered PF_XDP protocol family Feb 8 23:28:05.796274 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 8 23:28:05.796357 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 8 23:28:05.796436 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 8 23:28:05.796517 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 8 23:28:05.796602 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 8 23:28:05.796711 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 8 23:28:05.796805 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 8 23:28:05.796898 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 8 23:28:05.796911 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:28:05.796922 kernel: Initialise system trusted keyrings Feb 8 23:28:05.796932 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 8 23:28:05.796942 kernel: Key type asymmetric registered Feb 8 23:28:05.796954 kernel: Asymmetric key parser 'x509' registered Feb 8 23:28:05.796964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:28:05.796982 kernel: io scheduler mq-deadline registered Feb 8 23:28:05.796992 kernel: io scheduler kyber registered Feb 8 23:28:05.797001 kernel: io scheduler bfq registered Feb 8 23:28:05.797011 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:28:05.797022 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 8 23:28:05.797032 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 8 23:28:05.797042 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 8 23:28:05.797054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:28:05.797064 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:28:05.797074 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 8 23:28:05.797084 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 8 23:28:05.797094 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 8 23:28:05.797105 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 8 23:28:05.797200 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 8 23:28:05.797294 kernel: rtc_cmos 00:05: registered as rtc0 Feb 8 23:28:05.797383 kernel: rtc_cmos 00:05: setting system clock to 2024-02-08T23:28:05 UTC (1707434885) Feb 8 23:28:05.797469 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 8 23:28:05.797482 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:28:05.797492 kernel: Segment Routing with IPv6 Feb 8 23:28:05.797502 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:28:05.797512 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:28:05.797522 kernel: Key type dns_resolver registered Feb 8 23:28:05.797532 kernel: IPI shorthand broadcast: enabled Feb 8 23:28:05.797542 kernel: sched_clock: Marking stable (450246718, 69554855)->(525497718, -5696145) Feb 8 23:28:05.797554 kernel: registered taskstats version 1 Feb 8 23:28:05.797564 kernel: Loading compiled-in X.509 certificates Feb 8 23:28:05.797574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:28:05.797584 kernel: Key type .fscrypt registered Feb 8 23:28:05.797594 kernel: Key type fscrypt-provisioning registered Feb 8 23:28:05.797604 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 8 23:28:05.797614 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:28:05.797623 kernel: ima: No architecture policies found Feb 8 23:28:05.797644 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:28:05.797657 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:28:05.797666 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:28:05.797676 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:28:05.797686 kernel: Run /init as init process Feb 8 23:28:05.797696 kernel: with arguments: Feb 8 23:28:05.797705 kernel: /init Feb 8 23:28:05.797715 kernel: with environment: Feb 8 23:28:05.797737 kernel: HOME=/ Feb 8 23:28:05.797748 kernel: TERM=linux Feb 8 23:28:05.797760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:28:05.797774 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:28:05.797787 systemd[1]: Detected virtualization kvm. Feb 8 23:28:05.797798 systemd[1]: Detected architecture x86-64. Feb 8 23:28:05.797809 systemd[1]: Running in initrd. Feb 8 23:28:05.797820 systemd[1]: No hostname configured, using default hostname. 
Feb 8 23:28:05.797830 systemd[1]: Hostname set to . Feb 8 23:28:05.797844 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:28:05.797854 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:28:05.797865 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:28:05.797876 systemd[1]: Reached target cryptsetup.target. Feb 8 23:28:05.797887 systemd[1]: Reached target paths.target. Feb 8 23:28:05.797898 systemd[1]: Reached target slices.target. Feb 8 23:28:05.797908 systemd[1]: Reached target swap.target. Feb 8 23:28:05.797919 systemd[1]: Reached target timers.target. Feb 8 23:28:05.797932 systemd[1]: Listening on iscsid.socket. Feb 8 23:28:05.797943 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:28:05.797954 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:28:05.797965 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:28:05.797983 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:28:05.797994 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:28:05.798005 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:28:05.798016 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:28:05.798029 systemd[1]: Reached target sockets.target. Feb 8 23:28:05.798040 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:28:05.798051 systemd[1]: Finished network-cleanup.service. Feb 8 23:28:05.798061 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:28:05.798072 systemd[1]: Starting systemd-journald.service... Feb 8 23:28:05.798084 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:28:05.798096 systemd[1]: Starting systemd-resolved.service... Feb 8 23:28:05.798107 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:28:05.798118 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:28:05.798129 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:28:05.798140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:28:05.798150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:28:05.798165 systemd-journald[198]: Journal started Feb 8 23:28:05.798219 systemd-journald[198]: Runtime Journal (/run/log/journal/e86816e364da4319859c4fcbb486f85a) is 6.0M, max 48.5M, 42.5M free. Feb 8 23:28:05.792619 systemd-modules-load[199]: Inserted module 'overlay' Feb 8 23:28:05.823395 systemd[1]: Started systemd-journald.service. Feb 8 23:28:05.823428 kernel: audit: type=1130 audit(1707434885.810:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.823446 kernel: audit: type=1130 audit(1707434885.810:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.823458 kernel: audit: type=1130 audit(1707434885.815:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.823470 kernel: audit: type=1130 audit(1707434885.818:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:05.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.799211 systemd-resolved[200]: Positive Trust Anchors: Feb 8 23:28:05.825961 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:28:05.799218 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:28:05.799244 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:28:05.801407 systemd-resolved[200]: Defaulting to hostname 'linux'. Feb 8 23:28:05.832856 kernel: Bridge firewalling registered Feb 8 23:28:05.811624 systemd[1]: Started systemd-resolved.service. Feb 8 23:28:05.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.816158 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:28:05.838189 kernel: audit: type=1130 audit(1707434885.834:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.818510 systemd[1]: Reached target nss-lookup.target. Feb 8 23:28:05.821582 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:28:05.832097 systemd-modules-load[199]: Inserted module 'br_netfilter' Feb 8 23:28:05.834038 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:28:05.835723 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:28:05.848842 dracut-cmdline[216]: dracut-dracut-053 Feb 8 23:28:05.849940 kernel: SCSI subsystem initialized Feb 8 23:28:05.850962 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:28:05.861799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:28:05.861843 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:28:05.861853 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:28:05.865333 systemd-modules-load[199]: Inserted module 'dm_multipath' Feb 8 23:28:05.866079 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:28:05.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.869669 kernel: audit: type=1130 audit(1707434885.866:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.868881 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:28:05.874996 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:28:05.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.878651 kernel: audit: type=1130 audit(1707434885.874:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.910662 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:28:05.920661 kernel: iscsi: registered transport (tcp) Feb 8 23:28:05.939657 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:28:05.939697 kernel: QLogic iSCSI HBA Driver Feb 8 23:28:05.967739 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:28:05.970949 kernel: audit: type=1130 audit(1707434885.967:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:05.970999 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:28:06.015659 kernel: raid6: avx2x4 gen() 28334 MB/s Feb 8 23:28:06.032649 kernel: raid6: avx2x4 xor() 7058 MB/s Feb 8 23:28:06.049646 kernel: raid6: avx2x2 gen() 31146 MB/s Feb 8 23:28:06.066646 kernel: raid6: avx2x2 xor() 19329 MB/s Feb 8 23:28:06.083642 kernel: raid6: avx2x1 gen() 26367 MB/s Feb 8 23:28:06.100648 kernel: raid6: avx2x1 xor() 15353 MB/s Feb 8 23:28:06.117647 kernel: raid6: sse2x4 gen() 14789 MB/s Feb 8 23:28:06.134650 kernel: raid6: sse2x4 xor() 6827 MB/s Feb 8 23:28:06.151647 kernel: raid6: sse2x2 gen() 16146 MB/s Feb 8 23:28:06.168646 kernel: raid6: sse2x2 xor() 9859 MB/s Feb 8 23:28:06.185647 kernel: raid6: sse2x1 gen() 11961 MB/s Feb 8 23:28:06.203077 kernel: raid6: sse2x1 xor() 7816 MB/s Feb 8 23:28:06.203089 kernel: raid6: using algorithm avx2x2 gen() 31146 MB/s Feb 8 23:28:06.203098 kernel: raid6: .... xor() 19329 MB/s, rmw enabled Feb 8 23:28:06.203106 kernel: raid6: using avx2x2 recovery algorithm Feb 8 23:28:06.214646 kernel: xor: automatically using best checksumming function avx Feb 8 23:28:06.301657 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:28:06.309212 systemd[1]: Finished dracut-pre-udev.service. 
Feb 8 23:28:06.312413 kernel: audit: type=1130 audit(1707434886.308:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:06.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:06.311000 audit: BPF prog-id=7 op=LOAD Feb 8 23:28:06.311000 audit: BPF prog-id=8 op=LOAD Feb 8 23:28:06.312818 systemd[1]: Starting systemd-udevd.service... Feb 8 23:28:06.324755 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 8 23:28:06.329117 systemd[1]: Started systemd-udevd.service. Feb 8 23:28:06.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:06.330349 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:28:06.340726 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 8 23:28:06.363508 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:28:06.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:06.364835 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:28:06.399725 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:28:06.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:06.420655 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 8 23:28:06.427073 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 8 23:28:06.427094 kernel: GPT:9289727 != 19775487 Feb 8 23:28:06.427104 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 8 23:28:06.427117 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:28:06.427127 kernel: GPT:9289727 != 19775487 Feb 8 23:28:06.427137 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:28:06.427145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:28:06.439659 kernel: AVX2 version of gcm_enc/dec engaged. Feb 8 23:28:06.440653 kernel: AES CTR mode by8 optimization enabled Feb 8 23:28:06.441649 kernel: libata version 3.00 loaded. Feb 8 23:28:06.444773 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:28:06.445973 kernel: scsi host0: ata_piix Feb 8 23:28:06.446091 kernel: scsi host1: ata_piix Feb 8 23:28:06.446179 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 8 23:28:06.446188 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 8 23:28:06.451656 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Feb 8 23:28:06.457248 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:28:06.482232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:28:06.485868 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:28:06.487898 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:28:06.495566 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 8 23:28:06.497571 systemd[1]: Starting disk-uuid.service... Feb 8 23:28:06.504438 disk-uuid[523]: Primary Header is updated. Feb 8 23:28:06.504438 disk-uuid[523]: Secondary Entries is updated. Feb 8 23:28:06.504438 disk-uuid[523]: Secondary Header is updated. Feb 8 23:28:06.507653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:28:06.602663 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 8 23:28:06.602712 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 8 23:28:06.632646 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 8 23:28:06.632791 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:28:06.649648 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 8 23:28:07.513228 disk-uuid[524]: The operation has completed successfully. Feb 8 23:28:07.515346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:28:07.537298 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:28:07.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.537376 systemd[1]: Finished disk-uuid.service. Feb 8 23:28:07.541355 systemd[1]: Starting verity-setup.service... Feb 8 23:28:07.552656 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 8 23:28:07.569942 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:28:07.571587 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:28:07.573675 systemd[1]: Finished verity-setup.service. Feb 8 23:28:07.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.627661 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:28:07.627974 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:28:07.628136 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:28:07.629033 systemd[1]: Starting ignition-setup.service... Feb 8 23:28:07.629746 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:28:07.639276 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:28:07.639313 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:28:07.639327 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:28:07.645946 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:28:07.652727 systemd[1]: Finished ignition-setup.service. Feb 8 23:28:07.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.653523 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:28:07.686354 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:28:07.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.688240 systemd[1]: Starting systemd-networkd.service... 
Feb 8 23:28:07.687000 audit: BPF prog-id=9 op=LOAD Feb 8 23:28:07.688595 ignition[647]: Ignition 2.14.0 Feb 8 23:28:07.688605 ignition[647]: Stage: fetch-offline Feb 8 23:28:07.688684 ignition[647]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:28:07.688697 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:28:07.688820 ignition[647]: parsed url from cmdline: "" Feb 8 23:28:07.688824 ignition[647]: no config URL provided Feb 8 23:28:07.688831 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:28:07.688841 ignition[647]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:28:07.688862 ignition[647]: op(1): [started] loading QEMU firmware config module Feb 8 23:28:07.688869 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 8 23:28:07.692119 ignition[647]: op(1): [finished] loading QEMU firmware config module Feb 8 23:28:07.744860 systemd-networkd[717]: lo: Link UP Feb 8 23:28:07.744871 systemd-networkd[717]: lo: Gained carrier Feb 8 23:28:07.745393 systemd-networkd[717]: Enumeration completed Feb 8 23:28:07.745457 systemd[1]: Started systemd-networkd.service. Feb 8 23:28:07.745646 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:28:07.746532 systemd-networkd[717]: eth0: Link UP Feb 8 23:28:07.746537 systemd-networkd[717]: eth0: Gained carrier Feb 8 23:28:07.751421 systemd[1]: Reached target network.target. Feb 8 23:28:07.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.752681 systemd[1]: Starting iscsiuio.service... Feb 8 23:28:07.757827 systemd[1]: Started iscsiuio.service. Feb 8 23:28:07.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.759334 systemd[1]: Starting iscsid.service... Feb 8 23:28:07.761918 ignition[647]: parsing config with SHA512: 7c935dda5d5c603685b96a73534352fe53bcb6e5711568191c4dd84451dbc2f7d8b3f3a8bf0700244a3b74e8b17d89f70fc46f8b7b4ad52bc3db52508ada6e4d Feb 8 23:28:07.762987 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:28:07.762987 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:28:07.762987 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:28:07.762987 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:28:07.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.772169 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:28:07.772169 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:28:07.764719 systemd[1]: Started iscsid.service. 
Feb 8 23:28:07.769982 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:28:07.777721 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:28:07.781083 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:28:07.782266 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:28:07.783457 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:28:07.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.784652 systemd[1]: Reached target remote-fs.target. Feb 8 23:28:07.786723 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:28:07.796003 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:28:07.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.810782 unknown[647]: fetched base config from "system" Feb 8 23:28:07.810794 unknown[647]: fetched user config from "qemu" Feb 8 23:28:07.812539 ignition[647]: fetch-offline: fetch-offline passed Feb 8 23:28:07.813223 ignition[647]: Ignition finished successfully Feb 8 23:28:07.814590 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:28:07.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.816029 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 8 23:28:07.818046 systemd[1]: Starting ignition-kargs.service... Feb 8 23:28:07.825620 ignition[740]: Ignition 2.14.0 Feb 8 23:28:07.825641 ignition[740]: Stage: kargs Feb 8 23:28:07.825723 ignition[740]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:28:07.825730 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:28:07.828747 ignition[740]: kargs: kargs passed Feb 8 23:28:07.829191 ignition[740]: Ignition finished successfully Feb 8 23:28:07.830555 systemd[1]: Finished ignition-kargs.service. Feb 8 23:28:07.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.832308 systemd[1]: Starting ignition-disks.service... Feb 8 23:28:07.838422 ignition[746]: Ignition 2.14.0 Feb 8 23:28:07.838431 ignition[746]: Stage: disks Feb 8 23:28:07.838516 ignition[746]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:28:07.838523 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:28:07.841590 ignition[746]: disks: disks passed Feb 8 23:28:07.842047 ignition[746]: Ignition finished successfully Feb 8 23:28:07.843108 systemd[1]: Finished ignition-disks.service. Feb 8 23:28:07.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.844216 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:28:07.844286 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:28:07.846115 systemd[1]: Reached target local-fs.target. Feb 8 23:28:07.847095 systemd[1]: Reached target sysinit.target. 
Feb 8 23:28:07.848133 systemd[1]: Reached target basic.target. Feb 8 23:28:07.849993 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:28:07.859004 systemd-fsck[754]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 8 23:28:07.863383 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:28:07.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.863949 systemd[1]: Mounting sysroot.mount... Feb 8 23:28:07.868650 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:28:07.869084 systemd[1]: Mounted sysroot.mount. Feb 8 23:28:07.870156 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:28:07.872092 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:28:07.873410 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:28:07.873454 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:28:07.874524 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:28:07.877377 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:28:07.878967 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:28:07.882493 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:28:07.885503 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:28:07.887553 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:28:07.890421 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:28:07.909719 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:28:07.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.911061 systemd[1]: Starting ignition-mount.service... Feb 8 23:28:07.912238 systemd[1]: Starting sysroot-boot.service... Feb 8 23:28:07.917686 bash[806]: umount: /sysroot/usr/share/oem: not mounted. Feb 8 23:28:07.925684 ignition[807]: INFO : Ignition 2.14.0 Feb 8 23:28:07.925684 ignition[807]: INFO : Stage: mount Feb 8 23:28:07.927414 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:28:07.927414 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:28:07.927414 ignition[807]: INFO : mount: mount passed Feb 8 23:28:07.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:07.931181 ignition[807]: INFO : Ignition finished successfully Feb 8 23:28:07.928203 systemd[1]: Finished ignition-mount.service. Feb 8 23:28:07.929109 systemd[1]: Finished sysroot-boot.service. Feb 8 23:28:08.579850 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 8 23:28:08.585330 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Feb 8 23:28:08.585359 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:28:08.585369 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:28:08.586642 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:28:08.588857 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:28:08.590482 systemd[1]: Starting ignition-files.service... Feb 8 23:28:08.602732 ignition[835]: INFO : Ignition 2.14.0 Feb 8 23:28:08.602732 ignition[835]: INFO : Stage: files Feb 8 23:28:08.603942 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:28:08.603942 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:28:08.606409 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:28:08.607317 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:28:08.607317 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:28:08.609518 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:28:08.609518 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:28:08.609518 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:28:08.609518 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:28:08.609518 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:28:08.608975 unknown[835]: wrote ssh authorized keys file for user: core Feb 8 23:28:08.635406 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:28:08.707315 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:28:08.708863 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:28:08.708863 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:28:09.069660 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:28:09.313358 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:28:09.315582 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:28:09.315582 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:28:09.315582 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:28:09.615766 ignition[835]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): GET result: OK Feb 8 23:28:09.792819 systemd-networkd[717]: eth0: Gained IPv6LL Feb 8 23:28:09.853177 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:28:09.855553 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:28:09.855553 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 8 23:28:09.855553 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 8 23:28:09.855553 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:28:09.855553 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:28:09.917959 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:28:10.140592 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:28:10.142856 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:28:10.142856 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:28:10.142856 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:28:10.188744 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:28:10.444265 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 8 23:28:10.446457 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:28:10.446457 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:28:10.446457 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:28:10.489525 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 8 23:28:11.133135 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:28:11.135534 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:28:11.135534 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:28:11.135534 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:28:11.135534 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:28:11.135534 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 8 23:28:11.434356 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 8 23:28:11.539285 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:28:11.540868 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:28:11.540868 ignition[835]: INFO : files: op(11): [started] processing unit "containerd.service" Feb 8 23:28:11.540868 ignition[835]: INFO : files: op(11): op(12): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 8 23:28:11.540868 ignition[835]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 8 23:28:11.540868 ignition[835]: INFO : files: op(11): [finished] processing unit "containerd.service" Feb 8 23:28:11.540868 ignition[835]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:28:11.540868 ignition[835]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 8 23:28:11.563970 ignition[835]: INFO : 
files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(15): [finished] processing unit "prepare-critools.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(19): [started] processing unit "coreos-metadata.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(19): op(1a): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(19): op(1a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(19): [finished] processing unit "coreos-metadata.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:28:11.563970 ignition[835]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:28:11.597974 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 8 23:28:11.597996 kernel: audit: type=1130 audit(1707434891.568:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.598008 kernel: audit: type=1130 audit(1707434891.577:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.598019 kernel: audit: type=1130 audit(1707434891.580:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.598028 kernel: audit: type=1131 audit(1707434891.580:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:11.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.598162 ignition[835]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:28:11.598162 ignition[835]: INFO : files: op(1e): [started] setting preset to disabled for "coreos-metadata.service" Feb 8 23:28:11.598162 ignition[835]: INFO : files: op(1e): op(1f): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 8 23:28:11.598162 ignition[835]: INFO : files: op(1e): op(1f): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 8 23:28:11.598162 ignition[835]: INFO : files: op(1e): [finished] setting preset to disabled for "coreos-metadata.service" Feb 8 23:28:11.598162 ignition[835]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:28:11.598162 ignition[835]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:28:11.598162 ignition[835]: INFO : files: files passed Feb 8 23:28:11.598162 ignition[835]: INFO : Ignition finished successfully Feb 8 23:28:11.613149 kernel: audit: type=1130 audit(1707434891.601:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.613162 kernel: audit: type=1131 audit(1707434891.601:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.566923 systemd[1]: Finished ignition-files.service. Feb 8 23:28:11.569265 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 8 23:28:11.614744 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 8 23:28:11.572957 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:28:11.617194 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:28:11.573579 systemd[1]: Starting ignition-quench.service... Feb 8 23:28:11.575890 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Feb 8 23:28:11.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.577434 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:28:11.623906 kernel: audit: type=1130 audit(1707434891.619:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.577496 systemd[1]: Finished ignition-quench.service. Feb 8 23:28:11.581440 systemd[1]: Reached target ignition-complete.target. Feb 8 23:28:11.587844 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:28:11.600049 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:28:11.600129 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:28:11.601847 systemd[1]: Reached target initrd-fs.target. Feb 8 23:28:11.607741 systemd[1]: Reached target initrd.target. Feb 8 23:28:11.609263 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:28:11.609852 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:28:11.619296 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:28:11.620890 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:28:11.629163 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:28:11.636384 kernel: audit: type=1131 audit(1707434891.632:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.629849 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:28:11.631037 systemd[1]: Stopped target timers.target. Feb 8 23:28:11.632106 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:28:11.632210 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:28:11.633365 systemd[1]: Stopped target initrd.target. Feb 8 23:28:11.636456 systemd[1]: Stopped target basic.target. Feb 8 23:28:11.637602 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:28:11.638852 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:28:11.640137 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:28:11.641374 systemd[1]: Stopped target remote-fs.target. Feb 8 23:28:11.642492 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:28:11.643602 systemd[1]: Stopped target sysinit.target. Feb 8 23:28:11.653421 kernel: audit: type=1131 audit(1707434891.649:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.644955 systemd[1]: Stopped target local-fs.target. Feb 8 23:28:11.646452 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:28:11.657696 kernel: audit: type=1131 audit(1707434891.654:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 8 23:28:11.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.648010 systemd[1]: Stopped target swap.target. Feb 8 23:28:11.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.649198 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:28:11.649299 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:28:11.650494 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:28:11.653479 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:28:11.653568 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:28:11.654711 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:28:11.654797 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:28:11.657961 systemd[1]: Stopped target paths.target. Feb 8 23:28:11.658752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:28:11.662684 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:28:11.663820 systemd[1]: Stopped target slices.target. Feb 8 23:28:11.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.664858 systemd[1]: Stopped target sockets.target. Feb 8 23:28:11.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.666136 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:28:11.666244 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:28:11.671182 iscsid[725]: iscsid shutting down. Feb 8 23:28:11.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.667386 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:28:11.667467 systemd[1]: Stopped ignition-files.service. Feb 8 23:28:11.669122 systemd[1]: Stopping ignition-mount.service... Feb 8 23:28:11.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.670303 systemd[1]: Stopping iscsid.service... 
Feb 8 23:28:11.680016 ignition[875]: INFO : Ignition 2.14.0 Feb 8 23:28:11.680016 ignition[875]: INFO : Stage: umount Feb 8 23:28:11.680016 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:28:11.680016 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:28:11.680016 ignition[875]: INFO : umount: umount passed Feb 8 23:28:11.680016 ignition[875]: INFO : Ignition finished successfully Feb 8 23:28:11.671168 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:28:11.671280 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:28:11.672664 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:28:11.673459 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:28:11.673582 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:28:11.674740 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:28:11.674823 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:28:11.676800 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:28:11.676878 systemd[1]: Stopped iscsid.service. Feb 8 23:28:11.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.678213 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:28:11.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.678273 systemd[1]: Closed iscsid.socket. Feb 8 23:28:11.685699 systemd[1]: Stopping iscsiuio.service... Feb 8 23:28:11.688229 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:28:11.688735 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:28:11.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.688808 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:28:11.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.689334 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:28:11.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.689395 systemd[1]: Stopped ignition-mount.service. Feb 8 23:28:11.690771 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:28:11.690849 systemd[1]: Stopped sysroot-boot.service. 
Feb 8 23:28:11.692439 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:28:11.692474 systemd[1]: Stopped ignition-disks.service. Feb 8 23:28:11.692559 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:28:11.692586 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:28:11.694026 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:28:11.694053 systemd[1]: Stopped ignition-setup.service. Feb 8 23:28:11.695162 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:28:11.695190 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:28:11.703759 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:28:11.703848 systemd[1]: Stopped iscsiuio.service. Feb 8 23:28:11.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.705332 systemd[1]: Stopped target network.target. Feb 8 23:28:11.705395 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:28:11.705419 systemd[1]: Closed iscsiuio.socket. Feb 8 23:28:11.706854 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:28:11.707987 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:28:11.714685 systemd-networkd[717]: eth0: DHCPv6 lease lost Feb 8 23:28:11.715790 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:28:11.716454 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:28:11.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.717762 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:28:11.717789 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:28:11.719948 systemd[1]: Stopping network-cleanup.service... Feb 8 23:28:11.720941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:28:11.721705 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:28:11.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.722883 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:28:11.722914 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:28:11.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.724750 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:28:11.724839 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:28:11.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.725000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:28:11.726623 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:28:11.728430 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 8 23:28:11.729560 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:28:11.730250 systemd[1]: Stopped systemd-resolved.service. 
Feb 8 23:28:11.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.733678 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:28:11.734340 systemd[1]: Stopped network-cleanup.service. Feb 8 23:28:11.734000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:28:11.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.735704 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:28:11.736406 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:28:11.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.737774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:28:11.737807 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:28:11.739488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:28:11.739517 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:28:11.741250 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:28:11.741283 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:28:11.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.742923 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:28:11.742952 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:28:11.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.744563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:28:11.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.744591 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:28:11.746872 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:28:11.748092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:28:11.748127 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:28:11.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.751376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:28:11.752120 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:28:11.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:11.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:11.753363 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:28:11.755045 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:28:11.759637 systemd[1]: Switching root. Feb 8 23:28:11.762000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:28:11.762000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:28:11.763000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:28:11.763000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:28:11.763000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:28:11.777860 systemd-journald[198]: Journal stopped Feb 8 23:28:15.247604 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 8 23:28:15.247667 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:28:15.247680 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:28:15.247690 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:28:15.247700 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:28:15.247711 kernel: SELinux: policy capability open_perms=1 Feb 8 23:28:15.247720 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:28:15.247730 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:28:15.247742 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:28:15.247760 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:28:15.247770 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:28:15.247780 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:28:15.247790 systemd[1]: Successfully loaded SELinux policy in 36.574ms. Feb 8 23:28:15.247810 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.539ms. Feb 8 23:28:15.247821 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:28:15.247831 systemd[1]: Detected virtualization kvm. Feb 8 23:28:15.247841 systemd[1]: Detected architecture x86-64. Feb 8 23:28:15.247851 systemd[1]: Detected first boot. Feb 8 23:28:15.247865 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:28:15.247876 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:28:15.247886 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:28:15.247898 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:28:15.247909 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:28:15.247921 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:28:15.247934 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:28:15.247944 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:28:15.247955 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:28:15.247965 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:28:15.247976 systemd[1]: Created slice system-getty.slice. 
Feb 8 23:28:15.247987 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:28:15.247998 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:28:15.248008 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:28:15.248019 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:28:15.248029 systemd[1]: Created slice user.slice. Feb 8 23:28:15.248039 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:28:15.248050 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:28:15.248061 systemd[1]: Set up automount boot.automount. Feb 8 23:28:15.248071 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:28:15.248082 systemd[1]: Reached target integritysetup.target. Feb 8 23:28:15.248093 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:28:15.248103 systemd[1]: Reached target remote-fs.target. Feb 8 23:28:15.248114 systemd[1]: Reached target slices.target. Feb 8 23:28:15.248124 systemd[1]: Reached target swap.target. Feb 8 23:28:15.248134 systemd[1]: Reached target torcx.target. Feb 8 23:28:15.248147 systemd[1]: Reached target veritysetup.target. Feb 8 23:28:15.248157 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:28:15.248169 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:28:15.248179 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:28:15.248189 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:28:15.248199 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:28:15.248209 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:28:15.248220 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:28:15.248230 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:28:15.248241 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:28:15.248251 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:28:15.248261 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:28:15.248273 systemd[1]: Mounting media.mount... Feb 8 23:28:15.248283 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:28:15.248293 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:28:15.248303 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:28:15.248313 systemd[1]: Mounting tmp.mount... Feb 8 23:28:15.248323 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:28:15.248333 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:28:15.248343 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:28:15.248353 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:28:15.248366 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:28:15.248379 systemd[1]: Starting modprobe@drm.service... Feb 8 23:28:15.248389 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:28:15.248399 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:28:15.248409 systemd[1]: Starting modprobe@loop.service... Feb 8 23:28:15.248419 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:28:15.248430 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 8 23:28:15.248441 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 8 23:28:15.248453 systemd[1]: Starting systemd-journald.service... 
Feb 8 23:28:15.248463 kernel: loop: module loaded Feb 8 23:28:15.248473 kernel: fuse: init (API version 7.34) Feb 8 23:28:15.248483 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:28:15.248493 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:28:15.248503 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:28:15.248514 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:28:15.248524 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:28:15.248534 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:28:15.248544 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:28:15.248555 systemd[1]: Mounted media.mount. Feb 8 23:28:15.248566 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:28:15.248576 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:28:15.248588 systemd[1]: Mounted tmp.mount. Feb 8 23:28:15.248598 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:28:15.248611 systemd-journald[1023]: Journal started Feb 8 23:28:15.248672 systemd-journald[1023]: Runtime Journal (/run/log/journal/e86816e364da4319859c4fcbb486f85a) is 6.0M, max 48.5M, 42.5M free. Feb 8 23:28:15.164000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:28:15.164000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 8 23:28:15.245000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:28:15.245000 audit[1023]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdd696e570 a2=4000 a3=7ffdd696e60c items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:15.245000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:28:15.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.250910 systemd[1]: Started systemd-journald.service. Feb 8 23:28:15.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.252059 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:28:15.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.252842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:28:15.252993 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:28:15.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:15.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.253887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:28:15.254037 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:28:15.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.254819 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:28:15.254951 systemd[1]: Finished modprobe@drm.service. Feb 8 23:28:15.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.255717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:28:15.255898 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:28:15.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.256764 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:28:15.256907 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:28:15.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.257770 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:28:15.257941 systemd[1]: Finished modprobe@loop.service. Feb 8 23:28:15.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.258868 systemd[1]: Finished systemd-modules-load.service. 
Feb 8 23:28:15.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.259813 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:28:15.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.260937 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:28:15.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.261858 systemd[1]: Reached target network-pre.target. Feb 8 23:28:15.263494 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:28:15.264859 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:28:15.265450 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:28:15.267044 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:28:15.269190 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:28:15.269924 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:28:15.270995 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:28:15.271652 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:28:15.272596 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:28:15.276163 systemd-journald[1023]: Time spent on flushing to /var/log/journal/e86816e364da4319859c4fcbb486f85a is 18.691ms for 1067 entries. Feb 8 23:28:15.276163 systemd-journald[1023]: System Journal (/var/log/journal/e86816e364da4319859c4fcbb486f85a) is 8.0M, max 195.6M, 187.6M free. Feb 8 23:28:15.304733 systemd-journald[1023]: Received client request to flush runtime journal. Feb 8 23:28:15.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.281364 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:28:15.305944 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:28:15.286072 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:28:15.287222 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:28:15.288162 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:28:15.289079 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:28:15.289940 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:28:15.291838 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:28:15.305473 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:28:15.306375 systemd[1]: Finished systemd-journal-flush.service. 
Feb 8 23:28:15.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.312857 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:28:15.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.314499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:28:15.329611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:28:15.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.860203 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:28:15.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.862314 systemd[1]: Starting systemd-udevd.service... Feb 8 23:28:15.877828 systemd-udevd[1070]: Using default interface naming scheme 'v252'. Feb 8 23:28:15.889847 systemd[1]: Started systemd-udevd.service. Feb 8 23:28:15.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.892383 systemd[1]: Starting systemd-networkd.service... Feb 8 23:28:15.902260 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:28:15.915370 systemd[1]: Found device dev-ttyS0.device. Feb 8 23:28:15.947377 systemd[1]: Started systemd-userdbd.service. Feb 8 23:28:15.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:15.961666 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:28:15.964615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 8 23:28:15.977659 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:28:15.976000 audit[1078]: AVC avc: denied { confidentiality } for pid=1078 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:28:15.976000 audit[1078]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55df06fca590 a1=32194 a2=7f66e13f6bc5 a3=5 items=108 ppid=1070 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:15.976000 audit: CWD cwd="/" Feb 8 23:28:15.976000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=1 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=2 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=3 name=(null) inode=12858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=4 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=5 name=(null) inode=12859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=6 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=7 name=(null) inode=12860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=8 name=(null) inode=12860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=9 name=(null) inode=12861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=10 name=(null) inode=12860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=11 name=(null) inode=12862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=12 name=(null) inode=12860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=13 
name=(null) inode=12863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=14 name=(null) inode=12860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=15 name=(null) inode=12864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=16 name=(null) inode=12860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=17 name=(null) inode=12865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=18 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=19 name=(null) inode=12866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=20 name=(null) inode=12866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=21 name=(null) inode=12867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=22 name=(null) inode=12866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=23 name=(null) inode=12868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=24 name=(null) inode=12866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=25 name=(null) inode=12869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=26 name=(null) inode=12866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=27 name=(null) inode=12870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=28 name=(null) inode=12866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=29 name=(null) inode=12871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=30 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=31 name=(null) inode=12872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=32 name=(null) inode=12872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=33 name=(null) inode=12873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=34 name=(null) inode=12872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=35 name=(null) inode=12874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=36 name=(null) inode=12872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=37 name=(null) inode=12875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=38 name=(null) inode=12872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=39 name=(null) inode=12876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=40 name=(null) inode=12872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=41 name=(null) inode=12877 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=42 name=(null) inode=12857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=43 name=(null) inode=12878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=44 name=(null) inode=12878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=45 name=(null) inode=12879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=46 name=(null) inode=12878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=47 name=(null) inode=12880 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=48 name=(null) inode=12878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=49 name=(null) inode=12881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=50 name=(null) inode=12878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=51 name=(null) inode=12882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=52 name=(null) inode=12878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=53 name=(null) inode=12883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=55 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=56 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=57 name=(null) inode=12885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=58 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=59 name=(null) inode=12886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=60 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=61 name=(null) inode=12887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=62 name=(null) inode=12887 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=63 name=(null) inode=12888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=64 name=(null) inode=12887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=65 name=(null) inode=12889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=66 name=(null) inode=12887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=67 name=(null) inode=12890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=68 name=(null) inode=12887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=69 name=(null) inode=12891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=70 name=(null) inode=12887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=71 name=(null) inode=12892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=72 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=73 name=(null) inode=12893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=74 name=(null) inode=12893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=75 name=(null) inode=12894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=76 name=(null) inode=12893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=77 name=(null) inode=12895 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=78 name=(null) inode=12893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=79 name=(null) inode=12896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=80 name=(null) inode=12893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=81 name=(null) inode=12897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=82 name=(null) inode=12893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=83 name=(null) inode=12898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=84 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=85 name=(null) inode=12899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=86 name=(null) inode=12899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=87 name=(null) inode=12900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=88 name=(null) inode=12899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=89 name=(null) inode=12901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=90 name=(null) inode=12899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=91 name=(null) inode=12902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=92 name=(null) inode=12899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=93 name=(null) inode=12903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=94 name=(null) inode=12899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH 
item=95 name=(null) inode=12904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=96 name=(null) inode=12884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=97 name=(null) inode=12905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=98 name=(null) inode=12905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=99 name=(null) inode=12906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=100 name=(null) inode=12905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=101 name=(null) inode=12907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=102 name=(null) inode=12905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=103 name=(null) inode=12908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=104 name=(null) inode=12905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=105 name=(null) inode=12909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=106 name=(null) inode=12905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PATH item=107 name=(null) inode=12910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:15.976000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:28:15.990683 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:28:16.000650 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:28:16.013962 systemd-networkd[1077]: lo: Link UP Feb 8 23:28:16.013971 systemd-networkd[1077]: lo: Gained carrier Feb 8 23:28:16.014292 systemd-networkd[1077]: Enumeration completed Feb 8 23:28:16.014382 systemd-networkd[1077]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:28:16.014413 systemd[1]: Started systemd-networkd.service. 
Feb 8 23:28:16.015384 systemd-networkd[1077]: eth0: Link UP Feb 8 23:28:16.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.015391 systemd-networkd[1077]: eth0: Gained carrier Feb 8 23:28:16.019651 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:28:16.057800 systemd-networkd[1077]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:28:16.096809 kernel: kvm: Nested Virtualization enabled Feb 8 23:28:16.096894 kernel: SVM: kvm: Nested Paging enabled Feb 8 23:28:16.096915 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 8 23:28:16.097654 kernel: SVM: Virtual GIF supported Feb 8 23:28:16.109649 kernel: EDAC MC: Ver: 3.0.0 Feb 8 23:28:16.129003 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:28:16.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.130701 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:28:16.141603 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:28:16.169551 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:28:16.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.185329 systemd[1]: Reached target cryptsetup.target. Feb 8 23:28:16.187082 systemd[1]: Starting lvm2-activation.service... Feb 8 23:28:16.190685 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:28:16.215653 systemd[1]: Finished lvm2-activation.service. Feb 8 23:28:16.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.216568 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:28:16.217362 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:28:16.217380 systemd[1]: Reached target local-fs.target. Feb 8 23:28:16.218129 systemd[1]: Reached target machines.target. Feb 8 23:28:16.220287 systemd[1]: Starting ldconfig.service... Feb 8 23:28:16.221219 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:28:16.221266 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:28:16.222275 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:28:16.223837 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:28:16.225689 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:28:16.226806 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:28:16.226845 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:28:16.227913 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 8 23:28:16.230755 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1111 (bootctl) Feb 8 23:28:16.231837 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:28:16.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.236256 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:28:16.244952 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:28:16.245597 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:28:16.247160 systemd-tmpfiles[1114]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:28:16.270178 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Feb 8 23:28:16.270178 systemd-fsck[1120]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:28:16.272160 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:28:16.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.274895 systemd[1]: Mounting boot.mount... Feb 8 23:28:16.296461 systemd[1]: Mounted boot.mount. Feb 8 23:28:16.310237 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:28:16.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.352267 ldconfig[1110]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:28:16.357741 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:28:16.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.360159 systemd[1]: Starting audit-rules.service... Feb 8 23:28:16.361908 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:28:16.363641 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:28:16.365963 systemd[1]: Starting systemd-resolved.service... Feb 8 23:28:16.369693 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:28:16.372072 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:28:16.373287 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:28:16.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:16.374391 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 8 23:28:17.019156 kernel: kauditd_printk_skb: 200 callbacks suppressed Feb 8 23:28:17.019365 kernel: audit: type=1130 audit(1707434897.015:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.014374 systemd[1]: Finished ldconfig.service. Feb 8 23:28:17.025971 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:28:17.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.029720 kernel: audit: type=1130 audit(1707434897.026:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.028939 systemd[1]: Starting systemd-update-done.service... Feb 8 23:28:17.066000 audit[1139]: SYSTEM_BOOT pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.068731 systemd[1]: Finished systemd-update-done.service. Feb 8 23:28:17.070653 kernel: audit: type=1127 audit(1707434897.066:126): pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.070689 kernel: audit: type=1130 audit(1707434897.070:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.071615 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:28:17.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.075547 kernel: audit: type=1130 audit(1707434897.072:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.124140 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:28:17.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.125361 systemd[1]: Reached target time-set.target. Feb 8 23:28:17.126363 systemd-resolved[1132]: Positive Trust Anchors: Feb 8 23:28:17.126679 systemd-resolved[1132]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:28:17.126827 systemd-resolved[1132]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:28:17.957681 systemd-timesyncd[1133]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 8 23:28:17.957985 kernel: audit: type=1130 audit(1707434897.124:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:17.957749 systemd-timesyncd[1133]: Initial clock synchronization to Thu 2024-02-08 23:28:17.957549 UTC. Feb 8 23:28:17.958091 augenrules[1154]: No rules Feb 8 23:28:17.957000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:28:17.957000 audit[1154]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd972be510 a2=420 a3=0 items=0 ppid=1127 pid=1154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:17.963828 kernel: audit: type=1305 audit(1707434897.957:130): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:28:17.963886 kernel: audit: type=1300 audit(1707434897.957:130): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd972be510 a2=420 a3=0 items=0 ppid=1127 pid=1154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:17.963912 kernel: audit: type=1327 audit(1707434897.957:130): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:28:17.957000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:28:17.964129 systemd-resolved[1132]: Defaulting to hostname 'linux'. Feb 8 23:28:17.965535 systemd[1]: Started systemd-resolved.service. Feb 8 23:28:17.966506 systemd[1]: Finished audit-rules.service. Feb 8 23:28:17.967391 systemd[1]: Reached target network.target. Feb 8 23:28:17.968087 systemd[1]: Reached target nss-lookup.target. Feb 8 23:28:17.968962 systemd[1]: Reached target sysinit.target. Feb 8 23:28:17.970034 systemd[1]: Started motdgen.path. Feb 8 23:28:17.970809 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:28:17.972144 systemd[1]: Started logrotate.timer. Feb 8 23:28:17.972956 systemd[1]: Started mdadm.timer. Feb 8 23:28:17.973694 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:28:17.974598 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:28:17.974633 systemd[1]: Reached target paths.target. Feb 8 23:28:17.975479 systemd[1]: Reached target timers.target. Feb 8 23:28:17.976572 systemd[1]: Listening on dbus.socket. 
Feb 8 23:28:17.978382 systemd[1]: Starting docker.socket... Feb 8 23:28:17.980382 systemd[1]: Listening on sshd.socket. Feb 8 23:28:17.981640 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:28:17.981974 systemd[1]: Listening on docker.socket. Feb 8 23:28:17.983071 systemd[1]: Reached target sockets.target. Feb 8 23:28:17.983829 systemd[1]: Reached target basic.target. Feb 8 23:28:17.984758 systemd[1]: System is tainted: cgroupsv1 Feb 8 23:28:17.984793 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:28:17.984810 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:28:17.985606 systemd[1]: Starting containerd.service... Feb 8 23:28:17.987342 systemd[1]: Starting dbus.service... Feb 8 23:28:17.989180 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:28:17.991297 systemd[1]: Starting extend-filesystems.service... Feb 8 23:28:17.992275 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:28:17.993532 systemd[1]: Starting motdgen.service... Feb 8 23:28:17.995439 jq[1165]: false Feb 8 23:28:17.997269 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:28:18.000196 systemd[1]: Starting prepare-critools.service... Feb 8 23:28:18.002498 systemd[1]: Starting prepare-helm.service... Feb 8 23:28:18.004712 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:28:18.006197 dbus-daemon[1164]: [system] SELinux support is enabled Feb 8 23:28:18.010159 systemd[1]: Starting sshd-keygen.service... Feb 8 23:28:18.010854 extend-filesystems[1166]: Found sr0 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda1 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda2 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda3 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found usr Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda4 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda6 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda7 Feb 8 23:28:18.010854 extend-filesystems[1166]: Found vda9 Feb 8 23:28:18.010854 extend-filesystems[1166]: Checking size of /dev/vda9 Feb 8 23:28:18.017727 systemd[1]: Starting systemd-logind.service... Feb 8 23:28:18.021983 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:28:18.022083 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:28:18.023974 systemd[1]: Starting update-engine.service... Feb 8 23:28:18.026259 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:28:18.030752 jq[1192]: true Feb 8 23:28:18.029174 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:28:18.030313 systemd[1]: Started dbus.service. Feb 8 23:28:18.035753 extend-filesystems[1166]: Resized partition /dev/vda9 Feb 8 23:28:18.033987 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:28:18.040235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 8 23:28:18.040552 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:28:18.040975 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:28:18.043348 extend-filesystems[1199]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:28:18.043562 systemd[1]: Finished motdgen.service. Feb 8 23:28:18.049548 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:28:18.055203 tar[1202]: ./ Feb 8 23:28:18.055203 tar[1202]: ./macvlan Feb 8 23:28:18.057395 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:28:18.097736 jq[1212]: true Feb 8 23:28:18.100851 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 8 23:28:18.115985 tar[1203]: crictl Feb 8 23:28:18.114947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:28:18.116439 tar[1202]: ./static Feb 8 23:28:18.116465 tar[1204]: linux-amd64/helm Feb 8 23:28:18.114978 systemd[1]: Reached target system-config.target. Feb 8 23:28:18.116284 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:28:18.116296 systemd[1]: Reached target user-config.target. Feb 8 23:28:18.125152 systemd-logind[1188]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:28:18.125181 systemd-logind[1188]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:28:18.128044 systemd-logind[1188]: New seat seat0. Feb 8 23:28:18.151241 systemd[1]: Started systemd-logind.service. Feb 8 23:28:18.180995 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 8 23:28:18.198091 update_engine[1190]: I0208 23:28:18.182883 1190 main.cc:92] Flatcar Update Engine starting Feb 8 23:28:18.198091 update_engine[1190]: I0208 23:28:18.185431 1190 update_check_scheduler.cc:74] Next update check in 8m18s Feb 8 23:28:18.185257 systemd[1]: Started update-engine.service. Feb 8 23:28:18.188141 systemd[1]: Started locksmithd.service. Feb 8 23:28:18.199172 extend-filesystems[1199]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:28:18.199172 extend-filesystems[1199]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 8 23:28:18.199172 extend-filesystems[1199]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 8 23:28:18.202016 extend-filesystems[1166]: Resized filesystem in /dev/vda9 Feb 8 23:28:18.201842 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:28:18.202062 systemd[1]: Finished extend-filesystems.service. Feb 8 23:28:18.208839 bash[1230]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:28:18.209442 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:28:18.223602 tar[1202]: ./vlan Feb 8 23:28:18.230716 env[1215]: time="2024-02-08T23:28:18.230666928Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:28:18.253138 tar[1202]: ./portmap Feb 8 23:28:18.277106 env[1215]: time="2024-02-08T23:28:18.277045448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:28:18.277268 env[1215]: time="2024-02-08T23:28:18.277234632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289429 env[1215]: time="2024-02-08T23:28:18.289391186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289429 env[1215]: time="2024-02-08T23:28:18.289422054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289673 env[1215]: time="2024-02-08T23:28:18.289646745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289673 env[1215]: time="2024-02-08T23:28:18.289666622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289737 env[1215]: time="2024-02-08T23:28:18.289677934Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:28:18.289737 env[1215]: time="2024-02-08T23:28:18.289687892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289785 env[1215]: time="2024-02-08T23:28:18.289745320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:28:18.289957 env[1215]: time="2024-02-08T23:28:18.289934725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:28:18.290107 env[1215]: time="2024-02-08T23:28:18.290082182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:28:18.290107 env[1215]: time="2024-02-08T23:28:18.290101588Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:28:18.290179 env[1215]: time="2024-02-08T23:28:18.290143296Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:28:18.290179 env[1215]: time="2024-02-08T23:28:18.290152153Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:28:18.294933 env[1215]: time="2024-02-08T23:28:18.294907249Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:28:18.295004 env[1215]: time="2024-02-08T23:28:18.294934329Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:28:18.295004 env[1215]: time="2024-02-08T23:28:18.294947584Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:28:18.295063 env[1215]: time="2024-02-08T23:28:18.295006945Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295063 env[1215]: time="2024-02-08T23:28:18.295020411Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 8 23:28:18.295136 env[1215]: time="2024-02-08T23:28:18.295097355Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295136 env[1215]: time="2024-02-08T23:28:18.295135016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295196 env[1215]: time="2024-02-08T23:28:18.295147900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295196 env[1215]: time="2024-02-08T23:28:18.295160584Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295196 env[1215]: time="2024-02-08T23:28:18.295172005Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295196 env[1215]: time="2024-02-08T23:28:18.295183396Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295196 env[1215]: time="2024-02-08T23:28:18.295195058Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:28:18.295297 env[1215]: time="2024-02-08T23:28:18.295273846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:28:18.295367 env[1215]: time="2024-02-08T23:28:18.295343136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:28:18.295638 env[1215]: time="2024-02-08T23:28:18.295617069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:28:18.295686 env[1215]: time="2024-02-08T23:28:18.295644751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295686 env[1215]: time="2024-02-08T23:28:18.295657054Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:28:18.295744 env[1215]: time="2024-02-08T23:28:18.295694645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295744 env[1215]: time="2024-02-08T23:28:18.295714893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295744 env[1215]: time="2024-02-08T23:28:18.295730743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295744 env[1215]: time="2024-02-08T23:28:18.295740691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295828 env[1215]: time="2024-02-08T23:28:18.295751411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295828 env[1215]: time="2024-02-08T23:28:18.295763434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295828 env[1215]: time="2024-02-08T23:28:18.295773563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295828 env[1215]: time="2024-02-08T23:28:18.295783451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 8 23:28:18.295828 env[1215]: time="2024-02-08T23:28:18.295794713Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:28:18.295927 env[1215]: time="2024-02-08T23:28:18.295902895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295927 env[1215]: time="2024-02-08T23:28:18.295916852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295983 env[1215]: time="2024-02-08T23:28:18.295927131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.295983 env[1215]: time="2024-02-08T23:28:18.295937811Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:28:18.295983 env[1215]: time="2024-02-08T23:28:18.295950234Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:28:18.295983 env[1215]: time="2024-02-08T23:28:18.295960543Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:28:18.296079 env[1215]: time="2024-02-08T23:28:18.295987875Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:28:18.296079 env[1215]: time="2024-02-08T23:28:18.296020075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:28:18.296238 env[1215]: time="2024-02-08T23:28:18.296187930Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:28:18.296238 env[1215]: time="2024-02-08T23:28:18.296239066Z" level=info msg="Connect containerd service" Feb 8 23:28:18.297240 env[1215]: time="2024-02-08T23:28:18.296267900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:28:18.297240 env[1215]: time="2024-02-08T23:28:18.297183927Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301094860Z" level=info msg="Start subscribing containerd event" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301190750Z" level=info msg="Start recovering state" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301270039Z" level=info msg="Start event monitor" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301290718Z" level=info msg="Start snapshots syncer" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301301448Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301309463Z" level=info msg="Start streaming server" Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301576654Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301613583Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:28:18.303986 env[1215]: time="2024-02-08T23:28:18.301665831Z" level=info msg="containerd successfully booted in 0.071670s" Feb 8 23:28:18.304204 tar[1202]: ./host-local Feb 8 23:28:18.301781 systemd[1]: Started containerd.service. Feb 8 23:28:18.342303 tar[1202]: ./vrf Feb 8 23:28:18.374650 tar[1202]: ./bridge Feb 8 23:28:18.427359 tar[1202]: ./tuning Feb 8 23:28:18.473098 tar[1202]: ./firewall Feb 8 23:28:18.514984 systemd[1]: Created slice system-sshd.slice. Feb 8 23:28:18.537159 tar[1202]: ./host-device Feb 8 23:28:18.569596 tar[1202]: ./sbr Feb 8 23:28:18.604062 tar[1202]: ./loopback Feb 8 23:28:18.633016 tar[1202]: ./dhcp Feb 8 23:28:18.683941 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:28:18.686175 systemd-networkd[1077]: eth0: Gained IPv6LL Feb 8 23:28:18.709565 systemd[1]: Finished sshd-keygen.service. Feb 8 23:28:18.711613 systemd[1]: Starting issuegen.service... Feb 8 23:28:18.712995 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:58960.service. Feb 8 23:28:18.718416 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:28:18.718666 systemd[1]: Finished issuegen.service. Feb 8 23:28:18.722019 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:28:18.726269 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:28:18.732318 systemd[1]: Started getty@tty1.service. Feb 8 23:28:18.737848 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:28:18.738834 systemd[1]: Reached target getty.target. Feb 8 23:28:18.739840 tar[1204]: linux-amd64/LICENSE Feb 8 23:28:18.739927 tar[1204]: linux-amd64/README.md Feb 8 23:28:18.745411 systemd[1]: Finished prepare-helm.service. 
Feb 8 23:28:18.751798 tar[1202]: ./ptp Feb 8 23:28:18.753783 locksmithd[1237]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:28:18.768468 sshd[1259]: Accepted publickey for core from 10.0.0.1 port 58960 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:18.769865 sshd[1259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:18.777801 systemd[1]: Finished prepare-critools.service. Feb 8 23:28:18.779865 systemd[1]: Created slice user-500.slice. Feb 8 23:28:18.781670 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:28:18.784964 systemd-logind[1188]: New session 1 of user core. Feb 8 23:28:18.786368 tar[1202]: ./ipvlan Feb 8 23:28:18.788882 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:28:18.790665 systemd[1]: Starting user@500.service... Feb 8 23:28:18.793432 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:18.816085 tar[1202]: ./bandwidth Feb 8 23:28:18.853500 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:28:18.854607 systemd[1]: Reached target multi-user.target. Feb 8 23:28:18.856708 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:28:18.864067 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:28:18.864319 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:28:18.866433 systemd[1278]: Queued start job for default target default.target. Feb 8 23:28:18.866611 systemd[1278]: Reached target paths.target. Feb 8 23:28:18.866628 systemd[1278]: Reached target sockets.target. Feb 8 23:28:18.866640 systemd[1278]: Reached target timers.target. Feb 8 23:28:18.866652 systemd[1278]: Reached target basic.target. Feb 8 23:28:18.866678 systemd[1278]: Reached target default.target. Feb 8 23:28:18.866695 systemd[1278]: Startup finished in 67ms. Feb 8 23:28:18.867036 systemd[1]: Started user@500.service. Feb 8 23:28:18.868351 systemd[1]: Started session-1.scope. Feb 8 23:28:18.871057 systemd[1]: Startup finished in 6.758s (kernel) + 6.226s (userspace) = 12.985s. Feb 8 23:28:18.921112 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:58970.service. Feb 8 23:28:18.964565 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 58970 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:18.965841 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:18.969414 systemd-logind[1188]: New session 2 of user core. Feb 8 23:28:18.970174 systemd[1]: Started session-2.scope. Feb 8 23:28:19.023621 sshd[1294]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:19.026361 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:58982.service. Feb 8 23:28:19.026855 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:58970.service: Deactivated successfully. Feb 8 23:28:19.027989 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:28:19.028053 systemd-logind[1188]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:28:19.029103 systemd-logind[1188]: Removed session 2. Feb 8 23:28:19.068370 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 58982 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:19.069477 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:19.073397 systemd-logind[1188]: New session 3 of user core. Feb 8 23:28:19.074143 systemd[1]: Started session-3.scope. 
Feb 8 23:28:19.124280 sshd[1300]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:19.127002 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:58990.service. Feb 8 23:28:19.127543 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:58982.service: Deactivated successfully. Feb 8 23:28:19.128551 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:28:19.128555 systemd-logind[1188]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:28:19.129515 systemd-logind[1188]: Removed session 3. Feb 8 23:28:19.167168 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:19.168207 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:19.171404 systemd-logind[1188]: New session 4 of user core. Feb 8 23:28:19.172784 systemd[1]: Started session-4.scope. Feb 8 23:28:19.225105 sshd[1307]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:19.227758 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:59006.service. Feb 8 23:28:19.228204 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:58990.service: Deactivated successfully. Feb 8 23:28:19.228995 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:28:19.229128 systemd-logind[1188]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:28:19.230031 systemd-logind[1188]: Removed session 4. Feb 8 23:28:19.269065 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 59006 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:19.270076 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:19.273359 systemd-logind[1188]: New session 5 of user core. Feb 8 23:28:19.274308 systemd[1]: Started session-5.scope. Feb 8 23:28:19.329675 sudo[1319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:28:19.329899 sudo[1319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:28:19.847719 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:28:19.853004 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:28:19.853238 systemd[1]: Reached target network-online.target. Feb 8 23:28:19.854406 systemd[1]: Starting docker.service... 
Feb 8 23:28:19.889374 env[1337]: time="2024-02-08T23:28:19.889316085Z" level=info msg="Starting up" Feb 8 23:28:19.890378 env[1337]: time="2024-02-08T23:28:19.890361114Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:28:19.890378 env[1337]: time="2024-02-08T23:28:19.890374870Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:28:19.890450 env[1337]: time="2024-02-08T23:28:19.890392864Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:28:19.890450 env[1337]: time="2024-02-08T23:28:19.890402872Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:28:19.891943 env[1337]: time="2024-02-08T23:28:19.891907904Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:28:19.891943 env[1337]: time="2024-02-08T23:28:19.891932981Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:28:19.892030 env[1337]: time="2024-02-08T23:28:19.891952819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:28:19.892030 env[1337]: time="2024-02-08T23:28:19.891962467Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:28:20.582597 env[1337]: time="2024-02-08T23:28:20.582544708Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 8 23:28:20.582597 env[1337]: time="2024-02-08T23:28:20.582571328Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 8 23:28:20.582832 env[1337]: time="2024-02-08T23:28:20.582699418Z" level=info msg="Loading containers: start." Feb 8 23:28:20.672016 kernel: Initializing XFRM netlink socket Feb 8 23:28:20.699950 env[1337]: time="2024-02-08T23:28:20.699913204Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:28:20.743781 systemd-networkd[1077]: docker0: Link UP Feb 8 23:28:20.751780 env[1337]: time="2024-02-08T23:28:20.751750820Z" level=info msg="Loading containers: done." Feb 8 23:28:20.762593 env[1337]: time="2024-02-08T23:28:20.762553204Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:28:20.762733 env[1337]: time="2024-02-08T23:28:20.762708946Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:28:20.762825 env[1337]: time="2024-02-08T23:28:20.762796170Z" level=info msg="Daemon has completed initialization" Feb 8 23:28:20.780101 systemd[1]: Started docker.service. Feb 8 23:28:20.786666 env[1337]: time="2024-02-08T23:28:20.786616753Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:28:20.801735 systemd[1]: Reloading. 
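The docker daemon above notes that the default bridge docker0 is assigned 172.17.0.0/16 and that the --bip option can select a different address. A hedged sketch of setting the equivalent "bip" key in /etc/docker/daemon.json (the example address is an arbitrary assumption and must not collide with other networks on the host):

```python
#!/usr/bin/env python3
"""Illustrative only: choose a non-default docker0 address via the bip setting
mentioned in the log. The address below is an arbitrary example."""
import json
import pathlib

daemon_json = pathlib.Path("/etc/docker/daemon.json")
cfg = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}
cfg["bip"] = "172.30.0.1/24"   # assumed example subnet for the docker0 bridge
daemon_json.write_text(json.dumps(cfg, indent=2))
# Restart the docker daemon afterwards for the new bridge address to apply.
```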
Feb 8 23:28:20.853520 /usr/lib/systemd/system-generators/torcx-generator[1477]: time="2024-02-08T23:28:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:28:20.853548 /usr/lib/systemd/system-generators/torcx-generator[1477]: time="2024-02-08T23:28:20Z" level=info msg="torcx already run" Feb 8 23:28:20.920938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:28:20.920954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:28:20.938847 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:28:21.009456 systemd[1]: Started kubelet.service. Feb 8 23:28:21.065340 kubelet[1521]: E0208 23:28:21.065265 1521 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:28:21.067328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:28:21.067494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:28:21.445000 env[1215]: time="2024-02-08T23:28:21.444932295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:28:22.289588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397637057.mount: Deactivated successfully. 
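The kubelet exits above because no --container-runtime-endpoint was supplied, and the containerd CRI socket reported earlier in this log is /run/containerd/containerd.sock. A minimal sketch of wiring that flag in through a systemd drop-in, assuming the kubelet unit consumes a KUBELET_EXTRA_ARGS environment variable (both the drop-in path and the variable name are assumptions about this image, not facts from the log):

```python
#!/usr/bin/env python3
"""Illustrative sketch: point the kubelet at the containerd socket named
earlier in this log. Drop-in path and KUBELET_EXTRA_ARGS are assumptions
about how this particular kubelet unit picks up extra flags."""
import pathlib

dropin_dir = pathlib.Path("/etc/systemd/system/kubelet.service.d")
dropin_dir.mkdir(parents=True, exist_ok=True)
(dropin_dir / "20-container-runtime.conf").write_text(
    "[Service]\n"
    'Environment="KUBELET_EXTRA_ARGS='
    '--container-runtime-endpoint=unix:///run/containerd/containerd.sock"\n'
)
# Then reload units and restart the service:
#   systemctl daemon-reload && systemctl restart kubelet
```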
Feb 8 23:28:24.413958 env[1215]: time="2024-02-08T23:28:24.413900790Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:24.415882 env[1215]: time="2024-02-08T23:28:24.415841949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:24.417768 env[1215]: time="2024-02-08T23:28:24.417742774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:24.419533 env[1215]: time="2024-02-08T23:28:24.419497634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:24.420325 env[1215]: time="2024-02-08T23:28:24.420297083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:28:24.432628 env[1215]: time="2024-02-08T23:28:24.432583911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:28:26.596507 env[1215]: time="2024-02-08T23:28:26.596451993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:26.599161 env[1215]: time="2024-02-08T23:28:26.599130755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:26.600520 env[1215]: time="2024-02-08T23:28:26.600492889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:26.602925 env[1215]: time="2024-02-08T23:28:26.602888261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:26.603523 env[1215]: time="2024-02-08T23:28:26.603502462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:28:26.612204 env[1215]: time="2024-02-08T23:28:26.612165466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:28:28.359901 env[1215]: time="2024-02-08T23:28:28.359845946Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:28.361682 env[1215]: time="2024-02-08T23:28:28.361642785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:28.363507 env[1215]: 
time="2024-02-08T23:28:28.363485149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:28.365025 env[1215]: time="2024-02-08T23:28:28.365004298Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:28.365732 env[1215]: time="2024-02-08T23:28:28.365677841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 8 23:28:28.374188 env[1215]: time="2024-02-08T23:28:28.374156649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:28:29.771476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446465184.mount: Deactivated successfully. Feb 8 23:28:30.712736 env[1215]: time="2024-02-08T23:28:30.712679188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:30.755730 env[1215]: time="2024-02-08T23:28:30.755659927Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:30.777184 env[1215]: time="2024-02-08T23:28:30.777140513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:30.810454 env[1215]: time="2024-02-08T23:28:30.810397940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:30.810929 env[1215]: time="2024-02-08T23:28:30.810902266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:28:30.820144 env[1215]: time="2024-02-08T23:28:30.820106584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:28:31.213910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:28:31.214201 systemd[1]: Stopped kubelet.service. Feb 8 23:28:31.216440 systemd[1]: Started kubelet.service. Feb 8 23:28:31.354677 kubelet[1572]: E0208 23:28:31.354605 1572 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:28:31.357929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:28:31.358147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:28:33.093040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685873725.mount: Deactivated successfully. 
Feb 8 23:28:33.181265 env[1215]: time="2024-02-08T23:28:33.181191678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:33.218549 env[1215]: time="2024-02-08T23:28:33.218478411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:33.266711 env[1215]: time="2024-02-08T23:28:33.266624585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:33.289483 env[1215]: time="2024-02-08T23:28:33.289431778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:33.290011 env[1215]: time="2024-02-08T23:28:33.289947255Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:28:33.301226 env[1215]: time="2024-02-08T23:28:33.301175568Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 8 23:28:34.537645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622378508.mount: Deactivated successfully. Feb 8 23:28:41.031833 env[1215]: time="2024-02-08T23:28:41.031784998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:41.034937 env[1215]: time="2024-02-08T23:28:41.034897705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:41.036472 env[1215]: time="2024-02-08T23:28:41.036419338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:41.037943 env[1215]: time="2024-02-08T23:28:41.037917266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:41.038414 env[1215]: time="2024-02-08T23:28:41.038391205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 8 23:28:41.046823 env[1215]: time="2024-02-08T23:28:41.046788109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 8 23:28:41.463807 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:28:41.464034 systemd[1]: Stopped kubelet.service. Feb 8 23:28:41.465382 systemd[1]: Started kubelet.service. 
Feb 8 23:28:41.504268 kubelet[1594]: E0208 23:28:41.504209 1594 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:28:41.506231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:28:41.506371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:28:41.661250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726845045.mount: Deactivated successfully. Feb 8 23:28:42.257701 env[1215]: time="2024-02-08T23:28:42.257638604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:42.259605 env[1215]: time="2024-02-08T23:28:42.259562761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:42.261048 env[1215]: time="2024-02-08T23:28:42.261018841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:42.262348 env[1215]: time="2024-02-08T23:28:42.262313409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:42.262774 env[1215]: time="2024-02-08T23:28:42.262744367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 8 23:28:44.019922 systemd[1]: Stopped kubelet.service. Feb 8 23:28:44.033926 systemd[1]: Reloading. Feb 8 23:28:44.092077 /usr/lib/systemd/system-generators/torcx-generator[1694]: time="2024-02-08T23:28:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:28:44.092100 /usr/lib/systemd/system-generators/torcx-generator[1694]: time="2024-02-08T23:28:44Z" level=info msg="torcx already run" Feb 8 23:28:44.162679 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:28:44.162693 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:28:44.181773 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:28:44.256948 systemd[1]: Started kubelet.service. Feb 8 23:28:44.298502 kubelet[1740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:28:44.298502 kubelet[1740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:28:44.298502 kubelet[1740]: I0208 23:28:44.298457 1740 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:28:44.299946 kubelet[1740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:28:44.299946 kubelet[1740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:28:44.521469 kubelet[1740]: I0208 23:28:44.521426 1740 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:28:44.521469 kubelet[1740]: I0208 23:28:44.521467 1740 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:28:44.521747 kubelet[1740]: I0208 23:28:44.521729 1740 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:28:44.524886 kubelet[1740]: I0208 23:28:44.524863 1740 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:28:44.525354 kubelet[1740]: E0208 23:28:44.525336 1740 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.529461 kubelet[1740]: I0208 23:28:44.529440 1740 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:28:44.529818 kubelet[1740]: I0208 23:28:44.529801 1740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:28:44.529898 kubelet[1740]: I0208 23:28:44.529883 1740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:28:44.530015 kubelet[1740]: I0208 23:28:44.529912 1740 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:28:44.530015 kubelet[1740]: I0208 23:28:44.529928 1740 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:28:44.530068 kubelet[1740]: I0208 23:28:44.530043 1740 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:28:44.532913 kubelet[1740]: I0208 23:28:44.532895 1740 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:28:44.533007 kubelet[1740]: I0208 23:28:44.532921 1740 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:28:44.533007 kubelet[1740]: I0208 23:28:44.532952 1740 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:28:44.533007 kubelet[1740]: I0208 23:28:44.532986 1740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:28:44.533792 kubelet[1740]: W0208 23:28:44.533762 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.533890 kubelet[1740]: E0208 23:28:44.533876 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.533985 kubelet[1740]: I0208 23:28:44.533881 1740 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:28:44.534510 kubelet[1740]: W0208 23:28:44.534498 1740 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not 
exist. Recreating. Feb 8 23:28:44.535495 kubelet[1740]: I0208 23:28:44.535464 1740 server.go:1186] "Started kubelet" Feb 8 23:28:44.535649 kubelet[1740]: W0208 23:28:44.533794 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.535649 kubelet[1740]: E0208 23:28:44.535563 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.536836 kubelet[1740]: I0208 23:28:44.536809 1740 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:28:44.537245 kubelet[1740]: E0208 23:28:44.536997 1740 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047a076ed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 535410385, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 535410385, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.126:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.126:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:28:44.538377 kubelet[1740]: E0208 23:28:44.538356 1740 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:28:44.538377 kubelet[1740]: E0208 23:28:44.538379 1740 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:28:44.538504 kubelet[1740]: I0208 23:28:44.538487 1740 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:28:44.539340 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 8 23:28:44.539475 kubelet[1740]: I0208 23:28:44.539445 1740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:28:44.540073 kubelet[1740]: I0208 23:28:44.539784 1740 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:28:44.540073 kubelet[1740]: I0208 23:28:44.539861 1740 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:28:44.542112 kubelet[1740]: E0208 23:28:44.542079 1740 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.542286 kubelet[1740]: W0208 23:28:44.542237 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.542386 kubelet[1740]: E0208 23:28:44.542363 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.571189 kubelet[1740]: I0208 23:28:44.571094 1740 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:28:44.571189 kubelet[1740]: I0208 23:28:44.571125 1740 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:28:44.571189 kubelet[1740]: I0208 23:28:44.571141 1740 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:28:44.574644 kubelet[1740]: I0208 23:28:44.574606 1740 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:28:44.633230 kubelet[1740]: I0208 23:28:44.633211 1740 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:28:44.633230 kubelet[1740]: I0208 23:28:44.633231 1740 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:28:44.633376 kubelet[1740]: I0208 23:28:44.633254 1740 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:28:44.633376 kubelet[1740]: E0208 23:28:44.633305 1740 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:28:44.633989 kubelet[1740]: W0208 23:28:44.633915 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.634055 kubelet[1740]: E0208 23:28:44.634019 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.641356 kubelet[1740]: I0208 23:28:44.641333 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:28:44.641709 kubelet[1740]: E0208 23:28:44.641686 1740 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Feb 8 23:28:44.647319 kubelet[1740]: I0208 23:28:44.647296 1740 policy_none.go:49] "None policy: Start" Feb 8 23:28:44.647840 kubelet[1740]: I0208 23:28:44.647814 1740 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:28:44.647840 kubelet[1740]: I0208 23:28:44.647837 1740 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:28:44.654103 kubelet[1740]: I0208 23:28:44.654076 1740 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:28:44.654337 kubelet[1740]: I0208 23:28:44.654314 1740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:28:44.656780 kubelet[1740]: E0208 23:28:44.656753 1740 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 8 23:28:44.733538 kubelet[1740]: I0208 23:28:44.733475 1740 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:44.734754 kubelet[1740]: I0208 23:28:44.734738 1740 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:44.735463 kubelet[1740]: I0208 23:28:44.735437 1740 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:44.736313 kubelet[1740]: I0208 23:28:44.736298 1740 status_manager.go:698] "Failed to get status for pod" podUID=3e10164c0ee8b907ee499217f13559d9 pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.126:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.126:6443: connect: connection refused" Feb 8 23:28:44.736714 kubelet[1740]: I0208 23:28:44.736695 1740 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.126:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.126:6443: connect: connection refused" Feb 8 23:28:44.737826 kubelet[1740]: I0208 23:28:44.737807 1740 
status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.126:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.126:6443: connect: connection refused" Feb 8 23:28:44.740769 kubelet[1740]: I0208 23:28:44.740753 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e10164c0ee8b907ee499217f13559d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e10164c0ee8b907ee499217f13559d9\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:44.740850 kubelet[1740]: I0208 23:28:44.740781 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:44.740850 kubelet[1740]: I0208 23:28:44.740801 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e10164c0ee8b907ee499217f13559d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e10164c0ee8b907ee499217f13559d9\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:44.740850 kubelet[1740]: I0208 23:28:44.740817 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:44.740850 kubelet[1740]: I0208 23:28:44.740834 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:44.740850 kubelet[1740]: I0208 23:28:44.740851 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:44.741075 kubelet[1740]: I0208 23:28:44.740869 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:44.741075 kubelet[1740]: I0208 23:28:44.740885 1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 8 23:28:44.741075 kubelet[1740]: I0208 23:28:44.740902 
1740 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e10164c0ee8b907ee499217f13559d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e10164c0ee8b907ee499217f13559d9\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:44.742425 kubelet[1740]: E0208 23:28:44.742392 1740 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:44.842774 kubelet[1740]: I0208 23:28:44.842688 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:28:44.843016 kubelet[1740]: E0208 23:28:44.842999 1740 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Feb 8 23:28:45.039904 kubelet[1740]: E0208 23:28:45.039857 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:45.039904 kubelet[1740]: E0208 23:28:45.039857 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:45.040612 env[1215]: time="2024-02-08T23:28:45.040558427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e10164c0ee8b907ee499217f13559d9,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:45.040612 env[1215]: time="2024-02-08T23:28:45.040596178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:45.041621 kubelet[1740]: E0208 23:28:45.041607 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:45.041872 env[1215]: time="2024-02-08T23:28:45.041846533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:45.143594 kubelet[1740]: E0208 23:28:45.143497 1740 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.244714 kubelet[1740]: I0208 23:28:45.244694 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:28:45.244941 kubelet[1740]: E0208 23:28:45.244924 1740 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Feb 8 23:28:45.385840 kubelet[1740]: W0208 23:28:45.385780 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.386222 kubelet[1740]: E0208 23:28:45.385852 1740 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.563827 kubelet[1740]: W0208 23:28:45.563741 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.563827 kubelet[1740]: E0208 23:28:45.563829 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.589125 kubelet[1740]: W0208 23:28:45.589068 1740 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.589211 kubelet[1740]: E0208 23:28:45.589129 1740 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.741623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436389217.mount: Deactivated successfully. Feb 8 23:28:45.746040 env[1215]: time="2024-02-08T23:28:45.745984251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.750062 env[1215]: time="2024-02-08T23:28:45.750019437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.751459 env[1215]: time="2024-02-08T23:28:45.751438167Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.752607 env[1215]: time="2024-02-08T23:28:45.752582452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.756045 env[1215]: time="2024-02-08T23:28:45.756013295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.757591 env[1215]: time="2024-02-08T23:28:45.757564734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.759809 env[1215]: time="2024-02-08T23:28:45.759774758Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.761663 env[1215]: time="2024-02-08T23:28:45.761641738Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.762203 env[1215]: time="2024-02-08T23:28:45.762179857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.763397 env[1215]: time="2024-02-08T23:28:45.763366843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.764838 env[1215]: time="2024-02-08T23:28:45.764812494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.766473 env[1215]: time="2024-02-08T23:28:45.766442380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:45.787579 env[1215]: time="2024-02-08T23:28:45.787506355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:45.787579 env[1215]: time="2024-02-08T23:28:45.787544446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:45.787579 env[1215]: time="2024-02-08T23:28:45.787555016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:45.788196 env[1215]: time="2024-02-08T23:28:45.788105468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcf740806ad0d64c5f1285e0ac1b5b1ba73ba85f0afa911b88dc58c3d358353c pid=1818 runtime=io.containerd.runc.v2 Feb 8 23:28:45.791909 env[1215]: time="2024-02-08T23:28:45.791837706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:45.792022 env[1215]: time="2024-02-08T23:28:45.791880396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:45.792022 env[1215]: time="2024-02-08T23:28:45.791892388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:45.792114 env[1215]: time="2024-02-08T23:28:45.792040356Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eceaae5f0187d16b50ba52b7a63b7762a50a5d36970d4258cc5e1ac980f4ec00 pid=1832 runtime=io.containerd.runc.v2 Feb 8 23:28:45.863062 env[1215]: time="2024-02-08T23:28:45.862728466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:45.863062 env[1215]: time="2024-02-08T23:28:45.862764504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:45.863062 env[1215]: time="2024-02-08T23:28:45.862774001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:45.863062 env[1215]: time="2024-02-08T23:28:45.862920246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7bfaf882e9acb8079946598cfce6a2fdd1191eac20e65ebb831e336f11132b6 pid=1883 runtime=io.containerd.runc.v2 Feb 8 23:28:45.888099 env[1215]: time="2024-02-08T23:28:45.888058861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eceaae5f0187d16b50ba52b7a63b7762a50a5d36970d4258cc5e1ac980f4ec00\"" Feb 8 23:28:45.889428 kubelet[1740]: E0208 23:28:45.889402 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:45.891554 env[1215]: time="2024-02-08T23:28:45.891531692Z" level=info msg="CreateContainer within sandbox \"eceaae5f0187d16b50ba52b7a63b7762a50a5d36970d4258cc5e1ac980f4ec00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:28:45.893046 env[1215]: time="2024-02-08T23:28:45.893026946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e10164c0ee8b907ee499217f13559d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcf740806ad0d64c5f1285e0ac1b5b1ba73ba85f0afa911b88dc58c3d358353c\"" Feb 8 23:28:45.893525 kubelet[1740]: E0208 23:28:45.893510 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:45.894932 env[1215]: time="2024-02-08T23:28:45.894914114Z" level=info msg="CreateContainer within sandbox \"dcf740806ad0d64c5f1285e0ac1b5b1ba73ba85f0afa911b88dc58c3d358353c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:28:45.909416 env[1215]: time="2024-02-08T23:28:45.909248311Z" level=info msg="CreateContainer within sandbox \"eceaae5f0187d16b50ba52b7a63b7762a50a5d36970d4258cc5e1ac980f4ec00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"328ae851315aff19a15af3e3fe4aeffb64a5eef52abc68351e32067530adb288\"" Feb 8 23:28:45.910811 env[1215]: time="2024-02-08T23:28:45.910777458Z" level=info msg="StartContainer for \"328ae851315aff19a15af3e3fe4aeffb64a5eef52abc68351e32067530adb288\"" Feb 8 23:28:45.918166 env[1215]: time="2024-02-08T23:28:45.918131256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7bfaf882e9acb8079946598cfce6a2fdd1191eac20e65ebb831e336f11132b6\"" Feb 8 23:28:45.919061 kubelet[1740]: E0208 23:28:45.918867 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:45.919337 env[1215]: time="2024-02-08T23:28:45.919300138Z" level=info msg="CreateContainer within sandbox \"dcf740806ad0d64c5f1285e0ac1b5b1ba73ba85f0afa911b88dc58c3d358353c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"c8687dfe5a6500289ed29835fb48c7d49c497aad3d3dfcf7fb1ea1cce70cac5f\"" Feb 8 23:28:45.919709 env[1215]: time="2024-02-08T23:28:45.919683897Z" level=info msg="StartContainer for \"c8687dfe5a6500289ed29835fb48c7d49c497aad3d3dfcf7fb1ea1cce70cac5f\"" Feb 8 23:28:45.920726 env[1215]: time="2024-02-08T23:28:45.920695775Z" level=info msg="CreateContainer within sandbox \"b7bfaf882e9acb8079946598cfce6a2fdd1191eac20e65ebb831e336f11132b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:28:45.944641 kubelet[1740]: E0208 23:28:45.944607 1740 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.126:6443: connect: connection refused Feb 8 23:28:45.950177 env[1215]: time="2024-02-08T23:28:45.950130054Z" level=info msg="CreateContainer within sandbox \"b7bfaf882e9acb8079946598cfce6a2fdd1191eac20e65ebb831e336f11132b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8deabb33c4657117bbf799106014cb2078b838bb85d33346b1ac61a22d50898\"" Feb 8 23:28:45.950687 env[1215]: time="2024-02-08T23:28:45.950658154Z" level=info msg="StartContainer for \"c8deabb33c4657117bbf799106014cb2078b838bb85d33346b1ac61a22d50898\"" Feb 8 23:28:46.013386 env[1215]: time="2024-02-08T23:28:46.013321959Z" level=info msg="StartContainer for \"c8687dfe5a6500289ed29835fb48c7d49c497aad3d3dfcf7fb1ea1cce70cac5f\" returns successfully" Feb 8 23:28:46.025172 env[1215]: time="2024-02-08T23:28:46.025115772Z" level=info msg="StartContainer for \"328ae851315aff19a15af3e3fe4aeffb64a5eef52abc68351e32067530adb288\" returns successfully" Feb 8 23:28:46.026605 env[1215]: time="2024-02-08T23:28:46.026567374Z" level=info msg="StartContainer for \"c8deabb33c4657117bbf799106014cb2078b838bb85d33346b1ac61a22d50898\" returns successfully" Feb 8 23:28:46.046791 kubelet[1740]: I0208 23:28:46.046747 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:28:46.047171 kubelet[1740]: E0208 23:28:46.047147 1740 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Feb 8 23:28:46.642239 kubelet[1740]: E0208 23:28:46.642204 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:46.645228 kubelet[1740]: E0208 23:28:46.645179 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:46.647256 kubelet[1740]: E0208 23:28:46.647175 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:47.650505 kubelet[1740]: I0208 23:28:47.650462 1740 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:28:47.651788 kubelet[1740]: E0208 23:28:47.651768 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:47.652789 kubelet[1740]: E0208 23:28:47.652771 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:47.653046 kubelet[1740]: E0208 23:28:47.653017 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:47.681103 kubelet[1740]: I0208 23:28:47.681062 1740 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 8 23:28:47.689253 kubelet[1740]: E0208 23:28:47.689185 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:47.790021 kubelet[1740]: E0208 23:28:47.789958 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:47.809616 kubelet[1740]: E0208 23:28:47.809530 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047a076ed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 535410385, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 535410385, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:28:47.863818 kubelet[1740]: E0208 23:28:47.863716 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047a349e85", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 538371717, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 538371717, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:28:47.890789 kubelet[1740]: E0208 23:28:47.890758 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:47.917091 kubelet[1740]: E0208 23:28:47.916923 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c1fdae7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570565351, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570565351, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:28:47.970470 kubelet[1740]: E0208 23:28:47.970363 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c200336", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570575670, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570575670, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:28:47.991593 kubelet[1740]: E0208 23:28:47.991564 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:48.023514 kubelet[1740]: E0208 23:28:48.023440 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c200f30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570578736, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570578736, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:28:48.078117 kubelet[1740]: E0208 23:28:48.078019 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c1fdae7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570565351, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 641285912, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:28:48.092169 kubelet[1740]: E0208 23:28:48.092149 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:48.133902 kubelet[1740]: E0208 23:28:48.133821 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c200336", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570575670, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 641296872, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:28:48.189617 kubelet[1740]: E0208 23:28:48.189474 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c200f30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570578736, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 641302753, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:28:48.192641 kubelet[1740]: E0208 23:28:48.192618 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:48.242625 kubelet[1740]: E0208 23:28:48.242553 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207048122a803", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 654635011, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 654635011, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:28:48.293661 kubelet[1740]: E0208 23:28:48.293618 1740 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:28:48.464351 kubelet[1740]: E0208 23:28:48.464249 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c1fdae7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570565351, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 734654347, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:28:48.535892 kubelet[1740]: I0208 23:28:48.535859 1740 apiserver.go:52] "Watching apiserver" Feb 8 23:28:48.540157 kubelet[1740]: I0208 23:28:48.540114 1740 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:28:48.564265 kubelet[1740]: I0208 23:28:48.564238 1740 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:28:48.822564 kubelet[1740]: E0208 23:28:48.822455 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:48.848686 kubelet[1740]: E0208 23:28:48.848653 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:48.889286 kubelet[1740]: E0208 23:28:48.889193 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c200336", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570575670, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 734665017, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:28:49.262677 kubelet[1740]: E0208 23:28:49.262566 1740 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b207047c200f30", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 570578736, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 28, 44, 734668694, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:28:49.653226 kubelet[1740]: E0208 23:28:49.653130 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:49.653390 kubelet[1740]: E0208 23:28:49.653361 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:49.716479 kubelet[1740]: E0208 23:28:49.716456 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:50.653916 kubelet[1740]: E0208 23:28:50.653888 1740 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:50.969386 systemd[1]: Reloading. Feb 8 23:28:51.035368 /usr/lib/systemd/system-generators/torcx-generator[2071]: time="2024-02-08T23:28:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:28:51.035394 /usr/lib/systemd/system-generators/torcx-generator[2071]: time="2024-02-08T23:28:51Z" level=info msg="torcx already run" Feb 8 23:28:51.095655 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:28:51.095672 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:28:51.112390 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:28:51.188674 systemd[1]: Stopping kubelet.service... 
Feb 8 23:28:51.202398 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:28:51.202734 systemd[1]: Stopped kubelet.service. Feb 8 23:28:51.204535 systemd[1]: Started kubelet.service. Feb 8 23:28:51.259744 sudo[2130]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:28:51.259998 sudo[2130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:28:51.260735 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:28:51.260940 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:28:51.261086 kubelet[2119]: I0208 23:28:51.261051 2119 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:28:51.262375 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:28:51.262375 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:28:51.265599 kubelet[2119]: I0208 23:28:51.265570 2119 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:28:51.265599 kubelet[2119]: I0208 23:28:51.265587 2119 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:28:51.265765 kubelet[2119]: I0208 23:28:51.265750 2119 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:28:51.266859 kubelet[2119]: I0208 23:28:51.266831 2119 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:28:51.267607 kubelet[2119]: I0208 23:28:51.267585 2119 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:28:51.271854 kubelet[2119]: I0208 23:28:51.271825 2119 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:28:51.272199 kubelet[2119]: I0208 23:28:51.272176 2119 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:28:51.272257 kubelet[2119]: I0208 23:28:51.272243 2119 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:28:51.272364 kubelet[2119]: I0208 23:28:51.272261 2119 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:28:51.272364 kubelet[2119]: I0208 23:28:51.272271 2119 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:28:51.272364 kubelet[2119]: I0208 23:28:51.272301 2119 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:28:51.277665 kubelet[2119]: I0208 23:28:51.277626 2119 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:28:51.277665 kubelet[2119]: I0208 23:28:51.277672 2119 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:28:51.277790 kubelet[2119]: I0208 23:28:51.277706 2119 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:28:51.277790 kubelet[2119]: I0208 23:28:51.277728 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:28:51.282426 kubelet[2119]: I0208 23:28:51.282413 2119 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:28:51.282906 kubelet[2119]: I0208 23:28:51.282894 2119 server.go:1186] "Started kubelet" Feb 8 23:28:51.284697 kubelet[2119]: I0208 23:28:51.284684 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:28:51.289164 kubelet[2119]: I0208 23:28:51.289132 2119 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:28:51.290383 kubelet[2119]: I0208 23:28:51.290372 2119 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:28:51.293300 kubelet[2119]: I0208 23:28:51.293167 2119 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:28:51.293483 kubelet[2119]: I0208 23:28:51.293458 2119 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:28:51.294204 kubelet[2119]: E0208 23:28:51.294192 2119 cri_stats_provider.go:455] "Failed to get 
the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:28:51.294293 kubelet[2119]: E0208 23:28:51.294278 2119 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:28:51.322949 kubelet[2119]: I0208 23:28:51.322917 2119 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:28:51.340915 kubelet[2119]: I0208 23:28:51.340887 2119 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:28:51.340915 kubelet[2119]: I0208 23:28:51.340913 2119 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:28:51.341068 kubelet[2119]: I0208 23:28:51.340933 2119 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:28:51.341068 kubelet[2119]: E0208 23:28:51.341016 2119 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:28:51.357201 kubelet[2119]: I0208 23:28:51.357169 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:28:51.357201 kubelet[2119]: I0208 23:28:51.357196 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:28:51.357337 kubelet[2119]: I0208 23:28:51.357229 2119 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:28:51.357428 kubelet[2119]: I0208 23:28:51.357413 2119 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:28:51.357458 kubelet[2119]: I0208 23:28:51.357435 2119 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:28:51.357458 kubelet[2119]: I0208 23:28:51.357443 2119 policy_none.go:49] "None policy: Start" Feb 8 23:28:51.357993 kubelet[2119]: I0208 23:28:51.357976 2119 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:28:51.358077 kubelet[2119]: I0208 23:28:51.358064 2119 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:28:51.358299 kubelet[2119]: I0208 23:28:51.358287 2119 state_mem.go:75] "Updated machine memory state" Feb 8 23:28:51.359411 kubelet[2119]: I0208 23:28:51.359400 2119 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:28:51.359672 kubelet[2119]: I0208 23:28:51.359661 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:28:51.397806 kubelet[2119]: I0208 23:28:51.397759 2119 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:28:51.405637 kubelet[2119]: I0208 23:28:51.405590 2119 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 8 23:28:51.405832 kubelet[2119]: I0208 23:28:51.405654 2119 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 8 23:28:51.441792 kubelet[2119]: I0208 23:28:51.441738 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:51.442006 kubelet[2119]: I0208 23:28:51.441839 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:51.442006 kubelet[2119]: I0208 23:28:51.441885 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:51.446857 kubelet[2119]: E0208 23:28:51.446825 2119 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 8 23:28:51.485575 kubelet[2119]: E0208 23:28:51.485536 2119 
kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:51.494773 kubelet[2119]: I0208 23:28:51.494745 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:51.494904 kubelet[2119]: I0208 23:28:51.494785 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 8 23:28:51.494904 kubelet[2119]: I0208 23:28:51.494805 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e10164c0ee8b907ee499217f13559d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e10164c0ee8b907ee499217f13559d9\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:51.494904 kubelet[2119]: I0208 23:28:51.494821 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e10164c0ee8b907ee499217f13559d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e10164c0ee8b907ee499217f13559d9\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:51.494904 kubelet[2119]: I0208 23:28:51.494841 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:51.494904 kubelet[2119]: I0208 23:28:51.494857 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:51.495095 kubelet[2119]: I0208 23:28:51.494874 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e10164c0ee8b907ee499217f13559d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e10164c0ee8b907ee499217f13559d9\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:51.495095 kubelet[2119]: I0208 23:28:51.494897 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:51.495095 kubelet[2119]: I0208 23:28:51.494929 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:51.685459 kubelet[2119]: E0208 23:28:51.685344 2119 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:51.686343 kubelet[2119]: E0208 23:28:51.686315 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:51.747353 kubelet[2119]: E0208 23:28:51.747324 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:51.749010 sudo[2130]: pam_unix(sudo:session): session closed for user root Feb 8 23:28:51.786830 kubelet[2119]: E0208 23:28:51.786789 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:52.278562 kubelet[2119]: I0208 23:28:52.278522 2119 apiserver.go:52] "Watching apiserver" Feb 8 23:28:52.293864 kubelet[2119]: I0208 23:28:52.293824 2119 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:28:52.299042 kubelet[2119]: I0208 23:28:52.299002 2119 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:28:52.686947 kubelet[2119]: E0208 23:28:52.686829 2119 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 8 23:28:52.687533 kubelet[2119]: E0208 23:28:52.687508 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:52.777792 sudo[1319]: pam_unix(sudo:session): session closed for user root Feb 8 23:28:52.779049 sshd[1313]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:52.781488 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:59006.service: Deactivated successfully. Feb 8 23:28:52.782620 systemd-logind[1188]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:28:52.782698 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:28:52.783578 systemd-logind[1188]: Removed session 5. 
Feb 8 23:28:52.910504 kubelet[2119]: E0208 23:28:52.910447 2119 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 8 23:28:52.910838 kubelet[2119]: E0208 23:28:52.910818 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:53.134448 kubelet[2119]: E0208 23:28:53.134403 2119 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 8 23:28:53.134808 kubelet[2119]: E0208 23:28:53.134787 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:53.287269 kubelet[2119]: I0208 23:28:53.287215 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.287132593 pod.CreationTimestamp="2024-02-08 23:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:28:53.28694206 +0000 UTC m=+2.079054040" watchObservedRunningTime="2024-02-08 23:28:53.287132593 +0000 UTC m=+2.079244584" Feb 8 23:28:53.349292 kubelet[2119]: E0208 23:28:53.349247 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:53.349292 kubelet[2119]: E0208 23:28:53.349247 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:53.349525 kubelet[2119]: E0208 23:28:53.349503 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:53.686352 kubelet[2119]: I0208 23:28:53.686314 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.686266693 pod.CreationTimestamp="2024-02-08 23:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:28:53.68622176 +0000 UTC m=+2.478333740" watchObservedRunningTime="2024-02-08 23:28:53.686266693 +0000 UTC m=+2.478378673" Feb 8 23:28:54.085727 kubelet[2119]: I0208 23:28:54.085691 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.085649662 pod.CreationTimestamp="2024-02-08 23:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:28:54.085555718 +0000 UTC m=+2.877667718" watchObservedRunningTime="2024-02-08 23:28:54.085649662 +0000 UTC m=+2.877761642" Feb 8 23:28:54.350766 kubelet[2119]: E0208 23:28:54.350659 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:54.841816 kubelet[2119]: E0208 23:28:54.841758 2119 dns.go:156] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:55.351833 kubelet[2119]: E0208 23:28:55.351805 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:02.969049 kubelet[2119]: E0208 23:29:02.969016 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:03.892596 update_engine[1190]: I0208 23:29:03.892531 1190 update_attempter.cc:509] Updating boot flags... Feb 8 23:29:04.176301 kubelet[2119]: E0208 23:29:04.176032 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:04.365436 kubelet[2119]: E0208 23:29:04.365398 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:04.479249 kubelet[2119]: I0208 23:29:04.479220 2119 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:29:04.479525 env[1215]: time="2024-02-08T23:29:04.479492320Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:29:04.479877 kubelet[2119]: I0208 23:29:04.479713 2119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:29:04.846548 kubelet[2119]: E0208 23:29:04.846436 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:05.187069 kubelet[2119]: I0208 23:29:05.186934 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:29:05.197068 kubelet[2119]: I0208 23:29:05.197028 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:29:05.289582 kubelet[2119]: I0208 23:29:05.289541 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a1f4b21-81a6-4cce-97e5-1d0a51cc501f-kube-proxy\") pod \"kube-proxy-mhlb8\" (UID: \"3a1f4b21-81a6-4cce-97e5-1d0a51cc501f\") " pod="kube-system/kube-proxy-mhlb8" Feb 8 23:29:05.289582 kubelet[2119]: I0208 23:29:05.289585 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvf6\" (UniqueName: \"kubernetes.io/projected/3a1f4b21-81a6-4cce-97e5-1d0a51cc501f-kube-api-access-fmvf6\") pod \"kube-proxy-mhlb8\" (UID: \"3a1f4b21-81a6-4cce-97e5-1d0a51cc501f\") " pod="kube-system/kube-proxy-mhlb8" Feb 8 23:29:05.289817 kubelet[2119]: I0208 23:29:05.289604 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a1f4b21-81a6-4cce-97e5-1d0a51cc501f-xtables-lock\") pod \"kube-proxy-mhlb8\" (UID: \"3a1f4b21-81a6-4cce-97e5-1d0a51cc501f\") " pod="kube-system/kube-proxy-mhlb8" Feb 8 23:29:05.289817 kubelet[2119]: I0208 23:29:05.289623 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3a1f4b21-81a6-4cce-97e5-1d0a51cc501f-lib-modules\") pod \"kube-proxy-mhlb8\" (UID: \"3a1f4b21-81a6-4cce-97e5-1d0a51cc501f\") " pod="kube-system/kube-proxy-mhlb8" Feb 8 23:29:05.289817 kubelet[2119]: I0208 23:29:05.289727 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4cff64a-e3ae-441e-8945-9c14e1d55415-clustermesh-secrets\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.289817 kubelet[2119]: I0208 23:29:05.289796 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-bpf-maps\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.289817 kubelet[2119]: I0208 23:29:05.289815 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-cgroup\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290034 kubelet[2119]: I0208 23:29:05.289836 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-lib-modules\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290034 kubelet[2119]: I0208 23:29:05.289856 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-kernel\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290034 kubelet[2119]: I0208 23:29:05.289876 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-xtables-lock\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290034 kubelet[2119]: I0208 23:29:05.289894 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-config-path\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290034 kubelet[2119]: I0208 23:29:05.289910 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9r95\" (UniqueName: \"kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-kube-api-access-p9r95\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290194 kubelet[2119]: I0208 23:29:05.289930 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-hubble-tls\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " 
pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290194 kubelet[2119]: I0208 23:29:05.289950 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-hostproc\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290194 kubelet[2119]: I0208 23:29:05.289984 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cni-path\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290194 kubelet[2119]: I0208 23:29:05.290001 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-etc-cni-netd\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290194 kubelet[2119]: I0208 23:29:05.290047 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-net\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.290194 kubelet[2119]: I0208 23:29:05.290090 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-run\") pod \"cilium-t69tx\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " pod="kube-system/cilium-t69tx" Feb 8 23:29:05.533520 kubelet[2119]: I0208 23:29:05.528465 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:29:05.593343 kubelet[2119]: I0208 23:29:05.593301 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs49n\" (UniqueName: \"kubernetes.io/projected/844930d1-9ca9-46f8-8c6c-39ad3eada113-kube-api-access-fs49n\") pod \"cilium-operator-f59cbd8c6-b84sb\" (UID: \"844930d1-9ca9-46f8-8c6c-39ad3eada113\") " pod="kube-system/cilium-operator-f59cbd8c6-b84sb" Feb 8 23:29:05.593522 kubelet[2119]: I0208 23:29:05.593380 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/844930d1-9ca9-46f8-8c6c-39ad3eada113-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-b84sb\" (UID: \"844930d1-9ca9-46f8-8c6c-39ad3eada113\") " pod="kube-system/cilium-operator-f59cbd8c6-b84sb" Feb 8 23:29:05.790399 kubelet[2119]: E0208 23:29:05.790298 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:05.791158 env[1215]: time="2024-02-08T23:29:05.791120228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mhlb8,Uid:3a1f4b21-81a6-4cce-97e5-1d0a51cc501f,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:05.805822 env[1215]: time="2024-02-08T23:29:05.805736078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:05.805822 env[1215]: time="2024-02-08T23:29:05.805777826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:05.805822 env[1215]: time="2024-02-08T23:29:05.805791913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:05.806026 env[1215]: time="2024-02-08T23:29:05.805936051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff6b52dea533088dcc0514126b43391ecdf8df2ac567bb09f3f2363c4e9462fc pid=2246 runtime=io.containerd.runc.v2 Feb 8 23:29:05.830799 env[1215]: time="2024-02-08T23:29:05.830752530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mhlb8,Uid:3a1f4b21-81a6-4cce-97e5-1d0a51cc501f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff6b52dea533088dcc0514126b43391ecdf8df2ac567bb09f3f2363c4e9462fc\"" Feb 8 23:29:05.831405 kubelet[2119]: E0208 23:29:05.831379 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:05.833227 env[1215]: time="2024-02-08T23:29:05.833201385Z" level=info msg="CreateContainer within sandbox \"ff6b52dea533088dcc0514126b43391ecdf8df2ac567bb09f3f2363c4e9462fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:29:06.104122 kubelet[2119]: E0208 23:29:06.103989 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:06.104674 env[1215]: time="2024-02-08T23:29:06.104637795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t69tx,Uid:a4cff64a-e3ae-441e-8945-9c14e1d55415,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:06.218675 env[1215]: time="2024-02-08T23:29:06.218620192Z" level=info msg="CreateContainer within sandbox \"ff6b52dea533088dcc0514126b43391ecdf8df2ac567bb09f3f2363c4e9462fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"989a82676dbef9b607e555cbf151cbf865adef8982e6b7550f9dc73c750c5a9f\"" Feb 8 23:29:06.219187 env[1215]: time="2024-02-08T23:29:06.219157554Z" level=info msg="StartContainer for \"989a82676dbef9b607e555cbf151cbf865adef8982e6b7550f9dc73c750c5a9f\"" Feb 8 23:29:06.264179 env[1215]: time="2024-02-08T23:29:06.264131554Z" level=info msg="StartContainer for \"989a82676dbef9b607e555cbf151cbf865adef8982e6b7550f9dc73c750c5a9f\" returns successfully" Feb 8 23:29:06.274943 env[1215]: time="2024-02-08T23:29:06.274874582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:06.274943 env[1215]: time="2024-02-08T23:29:06.274913904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:06.274943 env[1215]: time="2024-02-08T23:29:06.274925867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:06.275217 env[1215]: time="2024-02-08T23:29:06.275176415Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c pid=2321 runtime=io.containerd.runc.v2 Feb 8 23:29:06.309060 env[1215]: time="2024-02-08T23:29:06.309005779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t69tx,Uid:a4cff64a-e3ae-441e-8945-9c14e1d55415,Namespace:kube-system,Attempt:0,} returns sandbox id \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\"" Feb 8 23:29:06.309753 kubelet[2119]: E0208 23:29:06.309730 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:06.311086 env[1215]: time="2024-02-08T23:29:06.311051795Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:29:06.369761 kubelet[2119]: E0208 23:29:06.369652 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:06.432063 kubelet[2119]: E0208 23:29:06.432025 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:06.432419 env[1215]: time="2024-02-08T23:29:06.432384280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-b84sb,Uid:844930d1-9ca9-46f8-8c6c-39ad3eada113,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:06.727548 env[1215]: time="2024-02-08T23:29:06.727487994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:06.727548 env[1215]: time="2024-02-08T23:29:06.727524182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:06.727548 env[1215]: time="2024-02-08T23:29:06.727533500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:06.727767 env[1215]: time="2024-02-08T23:29:06.727667589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd pid=2466 runtime=io.containerd.runc.v2 Feb 8 23:29:06.770927 env[1215]: time="2024-02-08T23:29:06.770872169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-b84sb,Uid:844930d1-9ca9-46f8-8c6c-39ad3eada113,Namespace:kube-system,Attempt:0,} returns sandbox id \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\"" Feb 8 23:29:06.771464 kubelet[2119]: E0208 23:29:06.771437 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:07.372816 kubelet[2119]: E0208 23:29:07.372788 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:11.183087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584738821.mount: Deactivated successfully. Feb 8 23:29:14.778863 env[1215]: time="2024-02-08T23:29:14.778807536Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:14.782011 env[1215]: time="2024-02-08T23:29:14.781959457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:14.783804 env[1215]: time="2024-02-08T23:29:14.783778377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:14.784275 env[1215]: time="2024-02-08T23:29:14.784246351Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:29:14.785187 env[1215]: time="2024-02-08T23:29:14.785144210Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:29:14.787076 env[1215]: time="2024-02-08T23:29:14.787044832Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:29:14.797439 env[1215]: time="2024-02-08T23:29:14.797402101Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\"" Feb 8 23:29:14.797846 env[1215]: time="2024-02-08T23:29:14.797813159Z" level=info msg="StartContainer for \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\"" Feb 8 23:29:14.834598 env[1215]: time="2024-02-08T23:29:14.834549933Z" level=info msg="StartContainer for 
\"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\" returns successfully" Feb 8 23:29:15.389338 kubelet[2119]: E0208 23:29:15.389282 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:15.405757 kubelet[2119]: I0208 23:29:15.405726 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mhlb8" podStartSLOduration=10.405681362 pod.CreationTimestamp="2024-02-08 23:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:29:06.981758511 +0000 UTC m=+15.773870491" watchObservedRunningTime="2024-02-08 23:29:15.405681362 +0000 UTC m=+24.197793342" Feb 8 23:29:15.581869 env[1215]: time="2024-02-08T23:29:15.581821899Z" level=info msg="shim disconnected" id=aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5 Feb 8 23:29:15.581869 env[1215]: time="2024-02-08T23:29:15.581863637Z" level=warning msg="cleaning up after shim disconnected" id=aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5 namespace=k8s.io Feb 8 23:29:15.581869 env[1215]: time="2024-02-08T23:29:15.581871341Z" level=info msg="cleaning up dead shim" Feb 8 23:29:15.587392 env[1215]: time="2024-02-08T23:29:15.587355463Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2549 runtime=io.containerd.runc.v2\n" Feb 8 23:29:15.794694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5-rootfs.mount: Deactivated successfully. Feb 8 23:29:16.391352 kubelet[2119]: E0208 23:29:16.391324 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:16.392913 env[1215]: time="2024-02-08T23:29:16.392880782Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:29:16.416949 env[1215]: time="2024-02-08T23:29:16.416901746Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\"" Feb 8 23:29:16.417452 env[1215]: time="2024-02-08T23:29:16.417421228Z" level=info msg="StartContainer for \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\"" Feb 8 23:29:16.454641 env[1215]: time="2024-02-08T23:29:16.454595606Z" level=info msg="StartContainer for \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\" returns successfully" Feb 8 23:29:16.462671 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:29:16.462931 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:29:16.463092 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:29:16.464421 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:29:16.472804 systemd[1]: Finished systemd-sysctl.service. 
Feb 8 23:29:16.489562 env[1215]: time="2024-02-08T23:29:16.489516952Z" level=info msg="shim disconnected" id=c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff Feb 8 23:29:16.489562 env[1215]: time="2024-02-08T23:29:16.489558219Z" level=warning msg="cleaning up after shim disconnected" id=c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff namespace=k8s.io Feb 8 23:29:16.489562 env[1215]: time="2024-02-08T23:29:16.489565763Z" level=info msg="cleaning up dead shim" Feb 8 23:29:16.497731 env[1215]: time="2024-02-08T23:29:16.497699733Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2615 runtime=io.containerd.runc.v2\n" Feb 8 23:29:16.795131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff-rootfs.mount: Deactivated successfully. Feb 8 23:29:17.053851 env[1215]: time="2024-02-08T23:29:17.053711801Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:17.055585 env[1215]: time="2024-02-08T23:29:17.055542965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:17.057068 env[1215]: time="2024-02-08T23:29:17.057031669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:17.057535 env[1215]: time="2024-02-08T23:29:17.057501348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:29:17.059141 env[1215]: time="2024-02-08T23:29:17.059105438Z" level=info msg="CreateContainer within sandbox \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:29:17.068311 env[1215]: time="2024-02-08T23:29:17.068269375Z" level=info msg="CreateContainer within sandbox \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\"" Feb 8 23:29:17.068756 env[1215]: time="2024-02-08T23:29:17.068669784Z" level=info msg="StartContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\"" Feb 8 23:29:17.109444 env[1215]: time="2024-02-08T23:29:17.109395516Z" level=info msg="StartContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" returns successfully" Feb 8 23:29:17.394455 kubelet[2119]: E0208 23:29:17.393720 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:17.399346 kubelet[2119]: E0208 23:29:17.399288 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 8 23:29:17.402021 env[1215]: time="2024-02-08T23:29:17.401983472Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:29:17.457275 kubelet[2119]: I0208 23:29:17.456861 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-b84sb" podStartSLOduration=-9.223372024397951e+09 pod.CreationTimestamp="2024-02-08 23:29:05 +0000 UTC" firstStartedPulling="2024-02-08 23:29:06.771821761 +0000 UTC m=+15.563933741" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:29:17.40923278 +0000 UTC m=+26.201344760" watchObservedRunningTime="2024-02-08 23:29:17.456823741 +0000 UTC m=+26.248935721" Feb 8 23:29:17.633496 env[1215]: time="2024-02-08T23:29:17.633429276Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\"" Feb 8 23:29:17.634285 env[1215]: time="2024-02-08T23:29:17.634241084Z" level=info msg="StartContainer for \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\"" Feb 8 23:29:17.720279 env[1215]: time="2024-02-08T23:29:17.720227068Z" level=info msg="StartContainer for \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\" returns successfully" Feb 8 23:29:17.738708 env[1215]: time="2024-02-08T23:29:17.738643919Z" level=info msg="shim disconnected" id=2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889 Feb 8 23:29:17.738880 env[1215]: time="2024-02-08T23:29:17.738731263Z" level=warning msg="cleaning up after shim disconnected" id=2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889 namespace=k8s.io Feb 8 23:29:17.738880 env[1215]: time="2024-02-08T23:29:17.738746181Z" level=info msg="cleaning up dead shim" Feb 8 23:29:17.746351 env[1215]: time="2024-02-08T23:29:17.746301440Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2712 runtime=io.containerd.runc.v2\n" Feb 8 23:29:18.402987 kubelet[2119]: E0208 23:29:18.402932 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:18.403476 kubelet[2119]: E0208 23:29:18.403042 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:18.405007 env[1215]: time="2024-02-08T23:29:18.404935101Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:29:18.423878 env[1215]: time="2024-02-08T23:29:18.423823200Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\"" Feb 8 23:29:18.424510 env[1215]: time="2024-02-08T23:29:18.424461204Z" level=info msg="StartContainer for \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\"" Feb 8 23:29:18.463021 env[1215]: 
time="2024-02-08T23:29:18.462958559Z" level=info msg="StartContainer for \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\" returns successfully" Feb 8 23:29:18.477612 env[1215]: time="2024-02-08T23:29:18.477549281Z" level=info msg="shim disconnected" id=405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073 Feb 8 23:29:18.477788 env[1215]: time="2024-02-08T23:29:18.477621756Z" level=warning msg="cleaning up after shim disconnected" id=405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073 namespace=k8s.io Feb 8 23:29:18.477788 env[1215]: time="2024-02-08T23:29:18.477638467Z" level=info msg="cleaning up dead shim" Feb 8 23:29:18.483569 env[1215]: time="2024-02-08T23:29:18.483529747Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2766 runtime=io.containerd.runc.v2\n" Feb 8 23:29:18.794952 systemd[1]: run-containerd-runc-k8s.io-405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073-runc.L0hmW5.mount: Deactivated successfully. Feb 8 23:29:18.795104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073-rootfs.mount: Deactivated successfully. Feb 8 23:29:19.405477 kubelet[2119]: E0208 23:29:19.405445 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:19.407699 env[1215]: time="2024-02-08T23:29:19.407645496Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:29:19.437042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559743729.mount: Deactivated successfully. 
Feb 8 23:29:19.454611 env[1215]: time="2024-02-08T23:29:19.454555842Z" level=info msg="CreateContainer within sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\"" Feb 8 23:29:19.455131 env[1215]: time="2024-02-08T23:29:19.455093078Z" level=info msg="StartContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\"" Feb 8 23:29:19.502761 env[1215]: time="2024-02-08T23:29:19.502714794Z" level=info msg="StartContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" returns successfully" Feb 8 23:29:19.608886 kubelet[2119]: I0208 23:29:19.608853 2119 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:29:19.629000 kubelet[2119]: I0208 23:29:19.626056 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:29:19.629000 kubelet[2119]: I0208 23:29:19.626234 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:29:19.793692 kubelet[2119]: I0208 23:29:19.793647 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgrn\" (UniqueName: \"kubernetes.io/projected/6093a5ae-c2e9-44a5-8fb6-65151ed8a89b-kube-api-access-vjgrn\") pod \"coredns-787d4945fb-kdgmd\" (UID: \"6093a5ae-c2e9-44a5-8fb6-65151ed8a89b\") " pod="kube-system/coredns-787d4945fb-kdgmd" Feb 8 23:29:19.793866 kubelet[2119]: I0208 23:29:19.793705 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9afa8525-1fa1-414d-9eac-3aa21f6e337b-config-volume\") pod \"coredns-787d4945fb-55g7g\" (UID: \"9afa8525-1fa1-414d-9eac-3aa21f6e337b\") " pod="kube-system/coredns-787d4945fb-55g7g" Feb 8 23:29:19.793866 kubelet[2119]: I0208 23:29:19.793737 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tnxf\" (UniqueName: \"kubernetes.io/projected/9afa8525-1fa1-414d-9eac-3aa21f6e337b-kube-api-access-2tnxf\") pod \"coredns-787d4945fb-55g7g\" (UID: \"9afa8525-1fa1-414d-9eac-3aa21f6e337b\") " pod="kube-system/coredns-787d4945fb-55g7g" Feb 8 23:29:19.793866 kubelet[2119]: I0208 23:29:19.793765 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6093a5ae-c2e9-44a5-8fb6-65151ed8a89b-config-volume\") pod \"coredns-787d4945fb-kdgmd\" (UID: \"6093a5ae-c2e9-44a5-8fb6-65151ed8a89b\") " pod="kube-system/coredns-787d4945fb-kdgmd" Feb 8 23:29:19.932727 kubelet[2119]: E0208 23:29:19.932696 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:19.933427 env[1215]: time="2024-02-08T23:29:19.933377832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-55g7g,Uid:9afa8525-1fa1-414d-9eac-3aa21f6e337b,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:19.936612 kubelet[2119]: E0208 23:29:19.936588 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:19.937413 env[1215]: time="2024-02-08T23:29:19.937380831Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-kdgmd,Uid:6093a5ae-c2e9-44a5-8fb6-65151ed8a89b,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:20.409296 kubelet[2119]: E0208 23:29:20.409257 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:20.421904 kubelet[2119]: I0208 23:29:20.421854 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t69tx" podStartSLOduration=-9.22337202143297e+09 pod.CreationTimestamp="2024-02-08 23:29:05 +0000 UTC" firstStartedPulling="2024-02-08 23:29:06.310538096 +0000 UTC m=+15.102650077" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:29:20.421596723 +0000 UTC m=+29.213708703" watchObservedRunningTime="2024-02-08 23:29:20.421805404 +0000 UTC m=+29.213917384" Feb 8 23:29:21.380480 systemd-networkd[1077]: cilium_host: Link UP Feb 8 23:29:21.381552 systemd-networkd[1077]: cilium_net: Link UP Feb 8 23:29:21.381561 systemd-networkd[1077]: cilium_net: Gained carrier Feb 8 23:29:21.381766 systemd-networkd[1077]: cilium_host: Gained carrier Feb 8 23:29:21.385047 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:29:21.384669 systemd-networkd[1077]: cilium_host: Gained IPv6LL Feb 8 23:29:21.411511 kubelet[2119]: E0208 23:29:21.411479 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:21.456604 systemd-networkd[1077]: cilium_vxlan: Link UP Feb 8 23:29:21.456610 systemd-networkd[1077]: cilium_vxlan: Gained carrier Feb 8 23:29:21.646018 kernel: NET: Registered PF_ALG protocol family Feb 8 23:29:21.662078 systemd-networkd[1077]: cilium_net: Gained IPv6LL Feb 8 23:29:22.178864 systemd-networkd[1077]: lxc_health: Link UP Feb 8 23:29:22.190345 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:29:22.190066 systemd-networkd[1077]: lxc_health: Gained carrier Feb 8 23:29:22.415642 kubelet[2119]: E0208 23:29:22.413164 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:22.475438 systemd-networkd[1077]: lxc210fb994dbdc: Link UP Feb 8 23:29:22.493134 systemd-networkd[1077]: lxc41c4e9523964: Link UP Feb 8 23:29:22.494090 kernel: eth0: renamed from tmp88563 Feb 8 23:29:22.501085 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:29:22.501161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc210fb994dbdc: link becomes ready Feb 8 23:29:22.501187 kernel: eth0: renamed from tmpf49bf Feb 8 23:29:22.505356 systemd-networkd[1077]: lxc210fb994dbdc: Gained carrier Feb 8 23:29:22.506545 systemd-networkd[1077]: lxc41c4e9523964: Gained carrier Feb 8 23:29:22.507014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc41c4e9523964: link becomes ready Feb 8 23:29:23.135088 systemd-networkd[1077]: cilium_vxlan: Gained IPv6LL Feb 8 23:29:23.774641 systemd-networkd[1077]: lxc210fb994dbdc: Gained IPv6LL Feb 8 23:29:23.774948 systemd-networkd[1077]: lxc41c4e9523964: Gained IPv6LL Feb 8 23:29:24.030138 systemd-networkd[1077]: lxc_health: Gained IPv6LL Feb 8 23:29:24.107030 kubelet[2119]: E0208 23:29:24.106991 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:24.249469 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:47048.service. Feb 8 23:29:24.291814 sshd[3325]: Accepted publickey for core from 10.0.0.1 port 47048 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:24.292853 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:24.297033 systemd-logind[1188]: New session 6 of user core. Feb 8 23:29:24.297748 systemd[1]: Started session-6.scope. Feb 8 23:29:24.417090 kubelet[2119]: E0208 23:29:24.416936 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:24.450303 sshd[3325]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:24.452583 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:47048.service: Deactivated successfully. Feb 8 23:29:24.453321 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:29:24.454361 systemd-logind[1188]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:29:24.455201 systemd-logind[1188]: Removed session 6. Feb 8 23:29:25.418648 kubelet[2119]: E0208 23:29:25.418605 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:25.964184 env[1215]: time="2024-02-08T23:29:25.964108526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:25.964184 env[1215]: time="2024-02-08T23:29:25.964161394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:25.964184 env[1215]: time="2024-02-08T23:29:25.964171553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:25.964647 env[1215]: time="2024-02-08T23:29:25.964422564Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f49bfd6335924b17b29bd41ad4d16a6e8c519d04bdb6df13ebd7e6ad0d04dd6b pid=3361 runtime=io.containerd.runc.v2 Feb 8 23:29:25.971630 env[1215]: time="2024-02-08T23:29:25.965540857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:25.971630 env[1215]: time="2024-02-08T23:29:25.965574540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:25.971630 env[1215]: time="2024-02-08T23:29:25.965584739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:25.971630 env[1215]: time="2024-02-08T23:29:25.965708922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8856310ca28ec4fb45b321256362a3060510732f5ce45d1ab100463bddb09e1f pid=3369 runtime=io.containerd.runc.v2 Feb 8 23:29:25.989906 systemd-resolved[1132]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:29:25.997255 systemd-resolved[1132]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 8 23:29:26.017936 env[1215]: time="2024-02-08T23:29:26.017889051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-55g7g,Uid:9afa8525-1fa1-414d-9eac-3aa21f6e337b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8856310ca28ec4fb45b321256362a3060510732f5ce45d1ab100463bddb09e1f\"" Feb 8 23:29:26.018512 kubelet[2119]: E0208 23:29:26.018488 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:26.021258 env[1215]: time="2024-02-08T23:29:26.021206362Z" level=info msg="CreateContainer within sandbox \"8856310ca28ec4fb45b321256362a3060510732f5ce45d1ab100463bddb09e1f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:29:26.026134 env[1215]: time="2024-02-08T23:29:26.026042707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kdgmd,Uid:6093a5ae-c2e9-44a5-8fb6-65151ed8a89b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f49bfd6335924b17b29bd41ad4d16a6e8c519d04bdb6df13ebd7e6ad0d04dd6b\"" Feb 8 23:29:26.026786 kubelet[2119]: E0208 23:29:26.026768 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:26.030169 env[1215]: time="2024-02-08T23:29:26.030142112Z" level=info msg="CreateContainer within sandbox \"f49bfd6335924b17b29bd41ad4d16a6e8c519d04bdb6df13ebd7e6ad0d04dd6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:29:26.039644 env[1215]: time="2024-02-08T23:29:26.039598257Z" level=info msg="CreateContainer within sandbox \"8856310ca28ec4fb45b321256362a3060510732f5ce45d1ab100463bddb09e1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f9acdd064214f6df204a66826c3fa7f231bd5a9d7c0e1d2750f05c48197a878\"" Feb 8 23:29:26.040094 env[1215]: time="2024-02-08T23:29:26.040069999Z" level=info msg="StartContainer for \"2f9acdd064214f6df204a66826c3fa7f231bd5a9d7c0e1d2750f05c48197a878\"" Feb 8 23:29:26.046338 env[1215]: time="2024-02-08T23:29:26.046291407Z" level=info msg="CreateContainer within sandbox \"f49bfd6335924b17b29bd41ad4d16a6e8c519d04bdb6df13ebd7e6ad0d04dd6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49a988a8d75bad2524c86e2cc352e8ccb1af5edb2f305623b5036d63188b4328\"" Feb 8 23:29:26.047899 env[1215]: time="2024-02-08T23:29:26.047848553Z" level=info msg="StartContainer for \"49a988a8d75bad2524c86e2cc352e8ccb1af5edb2f305623b5036d63188b4328\"" Feb 8 23:29:26.083519 env[1215]: time="2024-02-08T23:29:26.083469285Z" level=info msg="StartContainer for \"2f9acdd064214f6df204a66826c3fa7f231bd5a9d7c0e1d2750f05c48197a878\" returns successfully" Feb 8 23:29:26.098457 env[1215]: time="2024-02-08T23:29:26.097844339Z" level=info msg="StartContainer for 
\"49a988a8d75bad2524c86e2cc352e8ccb1af5edb2f305623b5036d63188b4328\" returns successfully" Feb 8 23:29:26.421628 kubelet[2119]: E0208 23:29:26.421512 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:26.423311 kubelet[2119]: E0208 23:29:26.423283 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:26.430988 kubelet[2119]: I0208 23:29:26.430940 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-kdgmd" podStartSLOduration=21.430907815 pod.CreationTimestamp="2024-02-08 23:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:29:26.430087829 +0000 UTC m=+35.222199809" watchObservedRunningTime="2024-02-08 23:29:26.430907815 +0000 UTC m=+35.223019795" Feb 8 23:29:26.449272 kubelet[2119]: I0208 23:29:26.449235 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-55g7g" podStartSLOduration=21.449181258 pod.CreationTimestamp="2024-02-08 23:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:29:26.448981583 +0000 UTC m=+35.241093563" watchObservedRunningTime="2024-02-08 23:29:26.449181258 +0000 UTC m=+35.241293228" Feb 8 23:29:27.424552 kubelet[2119]: E0208 23:29:27.424505 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:27.425020 kubelet[2119]: E0208 23:29:27.424575 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:28.427282 kubelet[2119]: E0208 23:29:28.427242 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:28.427780 kubelet[2119]: E0208 23:29:28.427379 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:29.453799 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:52004.service. Feb 8 23:29:29.499234 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 52004 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:29.500750 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:29.504753 systemd-logind[1188]: New session 7 of user core. Feb 8 23:29:29.505506 systemd[1]: Started session-7.scope. Feb 8 23:29:29.621832 sshd[3563]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:29.625337 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:52004.service: Deactivated successfully. Feb 8 23:29:29.626524 systemd-logind[1188]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:29:29.626574 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:29:29.627525 systemd-logind[1188]: Removed session 7. Feb 8 23:29:34.625567 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:52016.service. 
Feb 8 23:29:34.665931 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 52016 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:34.667317 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:34.671553 systemd-logind[1188]: New session 8 of user core. Feb 8 23:29:34.672380 systemd[1]: Started session-8.scope. Feb 8 23:29:34.797701 sshd[3578]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:34.800323 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:52016.service: Deactivated successfully. Feb 8 23:29:34.801317 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:29:34.802404 systemd-logind[1188]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:29:34.803541 systemd-logind[1188]: Removed session 8. Feb 8 23:29:39.800781 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:44808.service. Feb 8 23:29:39.840531 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 44808 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:39.841594 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:39.845160 systemd-logind[1188]: New session 9 of user core. Feb 8 23:29:39.845879 systemd[1]: Started session-9.scope. Feb 8 23:29:39.944062 sshd[3595]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:39.945917 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:44808.service: Deactivated successfully. Feb 8 23:29:39.946687 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:29:39.947425 systemd-logind[1188]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:29:39.948090 systemd-logind[1188]: Removed session 9. Feb 8 23:29:44.946790 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:44822.service. Feb 8 23:29:44.985586 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 44822 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:44.986390 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:44.989285 systemd-logind[1188]: New session 10 of user core. Feb 8 23:29:44.989982 systemd[1]: Started session-10.scope. Feb 8 23:29:45.093444 sshd[3610]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:45.095395 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:44822.service: Deactivated successfully. Feb 8 23:29:45.096237 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:29:45.097013 systemd-logind[1188]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:29:45.097609 systemd-logind[1188]: Removed session 10. Feb 8 23:29:50.097202 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:51040.service. Feb 8 23:29:50.213160 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 51040 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:50.214058 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:50.217284 systemd-logind[1188]: New session 11 of user core. Feb 8 23:29:50.218208 systemd[1]: Started session-11.scope. Feb 8 23:29:50.316251 sshd[3625]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:50.318591 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:51054.service. Feb 8 23:29:50.319002 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:51040.service: Deactivated successfully. Feb 8 23:29:50.319805 systemd-logind[1188]: Session 11 logged out. Waiting for processes to exit. 
Feb 8 23:29:50.319823 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:29:50.320540 systemd-logind[1188]: Removed session 11. Feb 8 23:29:50.359484 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 51054 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:50.360838 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:50.364376 systemd-logind[1188]: New session 12 of user core. Feb 8 23:29:50.365142 systemd[1]: Started session-12.scope. Feb 8 23:29:51.191437 sshd[3639]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:51.193706 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:51058.service. Feb 8 23:29:51.195575 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:51054.service: Deactivated successfully. Feb 8 23:29:51.196667 systemd-logind[1188]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:29:51.196766 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:29:51.198061 systemd-logind[1188]: Removed session 12. Feb 8 23:29:51.236925 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 51058 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:51.238114 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:51.242130 systemd-logind[1188]: New session 13 of user core. Feb 8 23:29:51.242831 systemd[1]: Started session-13.scope. Feb 8 23:29:51.461839 sshd[3651]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:51.465207 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:51058.service: Deactivated successfully. Feb 8 23:29:51.466078 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:29:51.466202 systemd-logind[1188]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:29:51.466951 systemd-logind[1188]: Removed session 13. Feb 8 23:29:56.465476 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:51066.service. Feb 8 23:29:56.504164 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 51066 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:29:56.505093 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:56.508023 systemd-logind[1188]: New session 14 of user core. Feb 8 23:29:56.508879 systemd[1]: Started session-14.scope. Feb 8 23:29:56.609095 sshd[3670]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:56.611242 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:51066.service: Deactivated successfully. Feb 8 23:29:56.612231 systemd-logind[1188]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:29:56.612249 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:29:56.612995 systemd-logind[1188]: Removed session 14. Feb 8 23:30:01.612455 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:34056.service. Feb 8 23:30:01.652358 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 34056 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:01.653638 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:01.657127 systemd-logind[1188]: New session 15 of user core. Feb 8 23:30:01.658092 systemd[1]: Started session-15.scope. Feb 8 23:30:01.763065 sshd[3684]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:01.765875 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:34070.service. Feb 8 23:30:01.766327 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:34056.service: Deactivated successfully. 
Feb 8 23:30:01.767233 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:30:01.772059 systemd-logind[1188]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:30:01.772884 systemd-logind[1188]: Removed session 15. Feb 8 23:30:01.804989 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 34070 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:01.805881 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:01.809766 systemd-logind[1188]: New session 16 of user core. Feb 8 23:30:01.810444 systemd[1]: Started session-16.scope. Feb 8 23:30:02.011786 sshd[3698]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:02.014735 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:34078.service. Feb 8 23:30:02.015283 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:34070.service: Deactivated successfully. Feb 8 23:30:02.016350 systemd-logind[1188]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:30:02.016441 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:30:02.017262 systemd-logind[1188]: Removed session 16. Feb 8 23:30:02.058042 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 34078 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:02.059464 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:02.062884 systemd-logind[1188]: New session 17 of user core. Feb 8 23:30:02.063641 systemd[1]: Started session-17.scope. Feb 8 23:30:02.883350 sshd[3710]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:02.885828 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:34088.service. Feb 8 23:30:02.893243 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:34078.service: Deactivated successfully. Feb 8 23:30:02.894617 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:30:02.895409 systemd-logind[1188]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:30:02.896717 systemd-logind[1188]: Removed session 17. Feb 8 23:30:02.933793 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:02.934790 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:02.938208 systemd-logind[1188]: New session 18 of user core. Feb 8 23:30:02.938934 systemd[1]: Started session-18.scope. Feb 8 23:30:03.139817 sshd[3738]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:03.142490 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:34104.service. Feb 8 23:30:03.142904 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:34088.service: Deactivated successfully. Feb 8 23:30:03.144469 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:30:03.145432 systemd-logind[1188]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:30:03.146640 systemd-logind[1188]: Removed session 18. Feb 8 23:30:03.184358 sshd[3790]: Accepted publickey for core from 10.0.0.1 port 34104 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:03.185685 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:03.188883 systemd-logind[1188]: New session 19 of user core. Feb 8 23:30:03.189639 systemd[1]: Started session-19.scope. Feb 8 23:30:03.310012 sshd[3790]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:03.312517 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:34104.service: Deactivated successfully. 
Feb 8 23:30:03.313409 systemd-logind[1188]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:30:03.313432 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:30:03.314567 systemd-logind[1188]: Removed session 19. Feb 8 23:30:05.342944 kubelet[2119]: E0208 23:30:05.342897 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:30:08.313326 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:58212.service. Feb 8 23:30:08.341925 kubelet[2119]: E0208 23:30:08.341889 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:30:08.353236 sshd[3808]: Accepted publickey for core from 10.0.0.1 port 58212 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:08.354541 sshd[3808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:08.358120 systemd-logind[1188]: New session 20 of user core. Feb 8 23:30:08.359056 systemd[1]: Started session-20.scope. Feb 8 23:30:08.455339 sshd[3808]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:08.457383 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:58212.service: Deactivated successfully. Feb 8 23:30:08.458538 systemd-logind[1188]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:30:08.458623 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:30:08.459392 systemd-logind[1188]: Removed session 20. Feb 8 23:30:13.459152 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:58222.service. Feb 8 23:30:13.504209 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 58222 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:13.505723 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:13.511535 systemd-logind[1188]: New session 21 of user core. Feb 8 23:30:13.512619 systemd[1]: Started session-21.scope. Feb 8 23:30:13.627837 sshd[3849]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:13.630947 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:58222.service: Deactivated successfully. Feb 8 23:30:13.632241 systemd-logind[1188]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:30:13.632250 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:30:13.633155 systemd-logind[1188]: Removed session 21. Feb 8 23:30:18.630853 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:40336.service. Feb 8 23:30:18.670010 sshd[3864]: Accepted publickey for core from 10.0.0.1 port 40336 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:18.671147 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:18.674814 systemd-logind[1188]: New session 22 of user core. Feb 8 23:30:18.675882 systemd[1]: Started session-22.scope. Feb 8 23:30:18.779891 sshd[3864]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:18.782447 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:40336.service: Deactivated successfully. Feb 8 23:30:18.783528 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:30:18.784635 systemd-logind[1188]: Session 22 logged out. Waiting for processes to exit. Feb 8 23:30:18.785411 systemd-logind[1188]: Removed session 22. Feb 8 23:30:23.783182 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:40348.service. 
Feb 8 23:30:23.822199 sshd[3878]: Accepted publickey for core from 10.0.0.1 port 40348 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:23.823022 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:23.825818 systemd-logind[1188]: New session 23 of user core. Feb 8 23:30:23.826815 systemd[1]: Started session-23.scope. Feb 8 23:30:23.920495 sshd[3878]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:23.923527 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:40362.service. Feb 8 23:30:23.924089 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:40348.service: Deactivated successfully. Feb 8 23:30:23.925020 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:30:23.927054 systemd-logind[1188]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:30:23.927782 systemd-logind[1188]: Removed session 23. Feb 8 23:30:23.961937 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 40362 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:23.962791 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:23.965912 systemd-logind[1188]: New session 24 of user core. Feb 8 23:30:23.966557 systemd[1]: Started session-24.scope. Feb 8 23:30:24.342247 kubelet[2119]: E0208 23:30:24.342217 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:30:25.594473 env[1215]: time="2024-02-08T23:30:25.594404798Z" level=info msg="StopContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" with timeout 30 (s)" Feb 8 23:30:25.596158 env[1215]: time="2024-02-08T23:30:25.596114744Z" level=info msg="Stop container \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" with signal terminated" Feb 8 23:30:25.607360 systemd[1]: run-containerd-runc-k8s.io-96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33-runc.Rh8guo.mount: Deactivated successfully. Feb 8 23:30:25.621992 env[1215]: time="2024-02-08T23:30:25.621921381Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:30:25.625621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed-rootfs.mount: Deactivated successfully. 
Feb 8 23:30:25.627897 env[1215]: time="2024-02-08T23:30:25.627870135Z" level=info msg="StopContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" with timeout 1 (s)" Feb 8 23:30:25.628342 env[1215]: time="2024-02-08T23:30:25.628283871Z" level=info msg="Stop container \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" with signal terminated" Feb 8 23:30:25.634165 systemd-networkd[1077]: lxc_health: Link DOWN Feb 8 23:30:25.634172 systemd-networkd[1077]: lxc_health: Lost carrier Feb 8 23:30:25.640460 env[1215]: time="2024-02-08T23:30:25.640425933Z" level=info msg="shim disconnected" id=c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed Feb 8 23:30:25.640558 env[1215]: time="2024-02-08T23:30:25.640462873Z" level=warning msg="cleaning up after shim disconnected" id=c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed namespace=k8s.io Feb 8 23:30:25.640558 env[1215]: time="2024-02-08T23:30:25.640472191Z" level=info msg="cleaning up dead shim" Feb 8 23:30:25.646411 env[1215]: time="2024-02-08T23:30:25.646362805Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n" Feb 8 23:30:25.648891 env[1215]: time="2024-02-08T23:30:25.648864455Z" level=info msg="StopContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" returns successfully" Feb 8 23:30:25.651657 env[1215]: time="2024-02-08T23:30:25.651625787Z" level=info msg="StopPodSandbox for \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\"" Feb 8 23:30:25.651809 env[1215]: time="2024-02-08T23:30:25.651696552Z" level=info msg="Container to stop \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:25.653358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd-shm.mount: Deactivated successfully. 
Feb 8 23:30:25.689184 env[1215]: time="2024-02-08T23:30:25.689128619Z" level=info msg="shim disconnected" id=06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd Feb 8 23:30:25.689184 env[1215]: time="2024-02-08T23:30:25.689183865Z" level=warning msg="cleaning up after shim disconnected" id=06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd namespace=k8s.io Feb 8 23:30:25.689406 env[1215]: time="2024-02-08T23:30:25.689196078Z" level=info msg="cleaning up dead shim" Feb 8 23:30:25.689406 env[1215]: time="2024-02-08T23:30:25.689129170Z" level=info msg="shim disconnected" id=96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33 Feb 8 23:30:25.689406 env[1215]: time="2024-02-08T23:30:25.689256402Z" level=warning msg="cleaning up after shim disconnected" id=96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33 namespace=k8s.io Feb 8 23:30:25.689406 env[1215]: time="2024-02-08T23:30:25.689268616Z" level=info msg="cleaning up dead shim" Feb 8 23:30:25.696245 env[1215]: time="2024-02-08T23:30:25.696189755Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3994 runtime=io.containerd.runc.v2\n" Feb 8 23:30:25.696563 env[1215]: time="2024-02-08T23:30:25.696529430Z" level=info msg="TearDown network for sandbox \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\" successfully" Feb 8 23:30:25.696622 env[1215]: time="2024-02-08T23:30:25.696557584Z" level=info msg="StopPodSandbox for \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\" returns successfully" Feb 8 23:30:25.697181 env[1215]: time="2024-02-08T23:30:25.697155529Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n" Feb 8 23:30:25.699617 env[1215]: time="2024-02-08T23:30:25.699575794Z" level=info msg="StopContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" returns successfully" Feb 8 23:30:25.700133 env[1215]: time="2024-02-08T23:30:25.700108135Z" level=info msg="StopPodSandbox for \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\"" Feb 8 23:30:25.700186 env[1215]: time="2024-02-08T23:30:25.700151367Z" level=info msg="Container to stop \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:25.700186 env[1215]: time="2024-02-08T23:30:25.700163720Z" level=info msg="Container to stop \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:25.700186 env[1215]: time="2024-02-08T23:30:25.700172156Z" level=info msg="Container to stop \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:25.700186 env[1215]: time="2024-02-08T23:30:25.700181194Z" level=info msg="Container to stop \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:25.700299 env[1215]: time="2024-02-08T23:30:25.700190401Z" level=info msg="Container to stop \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:25.722724 env[1215]: time="2024-02-08T23:30:25.722674399Z" level=info 
msg="shim disconnected" id=a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c Feb 8 23:30:25.722978 env[1215]: time="2024-02-08T23:30:25.722948640Z" level=warning msg="cleaning up after shim disconnected" id=a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c namespace=k8s.io Feb 8 23:30:25.723317 env[1215]: time="2024-02-08T23:30:25.723294276Z" level=info msg="cleaning up dead shim" Feb 8 23:30:25.729495 env[1215]: time="2024-02-08T23:30:25.729461215Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n" Feb 8 23:30:25.729747 env[1215]: time="2024-02-08T23:30:25.729725366Z" level=info msg="TearDown network for sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" successfully" Feb 8 23:30:25.729798 env[1215]: time="2024-02-08T23:30:25.729746696Z" level=info msg="StopPodSandbox for \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" returns successfully" Feb 8 23:30:25.732278 kubelet[2119]: I0208 23:30:25.732261 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/844930d1-9ca9-46f8-8c6c-39ad3eada113-cilium-config-path\") pod \"844930d1-9ca9-46f8-8c6c-39ad3eada113\" (UID: \"844930d1-9ca9-46f8-8c6c-39ad3eada113\") " Feb 8 23:30:25.732532 kubelet[2119]: I0208 23:30:25.732332 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs49n\" (UniqueName: \"kubernetes.io/projected/844930d1-9ca9-46f8-8c6c-39ad3eada113-kube-api-access-fs49n\") pod \"844930d1-9ca9-46f8-8c6c-39ad3eada113\" (UID: \"844930d1-9ca9-46f8-8c6c-39ad3eada113\") " Feb 8 23:30:25.732532 kubelet[2119]: W0208 23:30:25.732400 2119 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/844930d1-9ca9-46f8-8c6c-39ad3eada113/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:30:25.733947 kubelet[2119]: I0208 23:30:25.733925 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/844930d1-9ca9-46f8-8c6c-39ad3eada113-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "844930d1-9ca9-46f8-8c6c-39ad3eada113" (UID: "844930d1-9ca9-46f8-8c6c-39ad3eada113"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:30:25.734433 kubelet[2119]: I0208 23:30:25.734410 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844930d1-9ca9-46f8-8c6c-39ad3eada113-kube-api-access-fs49n" (OuterVolumeSpecName: "kube-api-access-fs49n") pod "844930d1-9ca9-46f8-8c6c-39ad3eada113" (UID: "844930d1-9ca9-46f8-8c6c-39ad3eada113"). InnerVolumeSpecName "kube-api-access-fs49n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:25.832898 kubelet[2119]: I0208 23:30:25.832869 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-lib-modules\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833055 kubelet[2119]: I0208 23:30:25.832915 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-kernel\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833055 kubelet[2119]: I0208 23:30:25.832943 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-net\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833055 kubelet[2119]: I0208 23:30:25.832959 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833055 kubelet[2119]: I0208 23:30:25.832961 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833055 kubelet[2119]: I0208 23:30:25.833006 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833245 kubelet[2119]: I0208 23:30:25.833029 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-run\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833245 kubelet[2119]: I0208 23:30:25.833053 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-bpf-maps\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833245 kubelet[2119]: I0208 23:30:25.833079 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-hubble-tls\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833245 kubelet[2119]: I0208 23:30:25.833081 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833245 kubelet[2119]: I0208 23:30:25.833091 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833245 kubelet[2119]: I0208 23:30:25.833106 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9r95\" (UniqueName: \"kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-kube-api-access-p9r95\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833450 kubelet[2119]: I0208 23:30:25.833131 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-cgroup\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833450 kubelet[2119]: I0208 23:30:25.833153 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-xtables-lock\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833450 kubelet[2119]: I0208 23:30:25.833176 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-hostproc\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833450 kubelet[2119]: I0208 23:30:25.833202 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-config-path\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833450 kubelet[2119]: I0208 23:30:25.833225 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-etc-cni-netd\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833450 kubelet[2119]: I0208 23:30:25.833246 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cni-path\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833272 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4cff64a-e3ae-441e-8945-9c14e1d55415-clustermesh-secrets\") pod \"a4cff64a-e3ae-441e-8945-9c14e1d55415\" (UID: \"a4cff64a-e3ae-441e-8945-9c14e1d55415\") " Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833319 2119 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833313 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-hostproc" (OuterVolumeSpecName: "hostproc") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833336 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833351 2119 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fs49n\" (UniqueName: \"kubernetes.io/projected/844930d1-9ca9-46f8-8c6c-39ad3eada113-kube-api-access-fs49n\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833348 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833655 kubelet[2119]: I0208 23:30:25.833362 2119 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833895 kubelet[2119]: I0208 23:30:25.833367 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833895 kubelet[2119]: I0208 23:30:25.833375 2119 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833895 kubelet[2119]: I0208 23:30:25.833387 2119 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833895 kubelet[2119]: I0208 23:30:25.833400 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/844930d1-9ca9-46f8-8c6c-39ad3eada113-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.833895 kubelet[2119]: I0208 23:30:25.833619 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.833895 kubelet[2119]: I0208 23:30:25.833623 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cni-path" (OuterVolumeSpecName: "cni-path") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:25.834138 kubelet[2119]: W0208 23:30:25.833723 2119 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a4cff64a-e3ae-441e-8945-9c14e1d55415/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:30:25.835187 kubelet[2119]: I0208 23:30:25.835162 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-kube-api-access-p9r95" (OuterVolumeSpecName: "kube-api-access-p9r95") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "kube-api-access-p9r95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:25.835314 kubelet[2119]: I0208 23:30:25.835299 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:30:25.835509 kubelet[2119]: I0208 23:30:25.835491 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4cff64a-e3ae-441e-8945-9c14e1d55415-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:30:25.836028 kubelet[2119]: I0208 23:30:25.836004 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a4cff64a-e3ae-441e-8945-9c14e1d55415" (UID: "a4cff64a-e3ae-441e-8945-9c14e1d55415"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934425 2119 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934466 2119 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-p9r95\" (UniqueName: \"kubernetes.io/projected/a4cff64a-e3ae-441e-8945-9c14e1d55415-kube-api-access-p9r95\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934479 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934491 2119 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934502 2119 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934516 2119 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4cff64a-e3ae-441e-8945-9c14e1d55415-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934535 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4cff64a-e3ae-441e-8945-9c14e1d55415-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934538 kubelet[2119]: I0208 23:30:25.934546 2119 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:25.934823 kubelet[2119]: I0208 23:30:25.934556 2119 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4cff64a-e3ae-441e-8945-9c14e1d55415-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:26.380142 kubelet[2119]: E0208 23:30:26.380102 2119 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:30:26.529809 kubelet[2119]: I0208 23:30:26.529779 2119 scope.go:115] "RemoveContainer" containerID="96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33" Feb 8 23:30:26.531074 env[1215]: time="2024-02-08T23:30:26.531033095Z" level=info msg="RemoveContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\"" Feb 8 23:30:26.536074 env[1215]: time="2024-02-08T23:30:26.536034899Z" level=info msg="RemoveContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" returns successfully" Feb 8 23:30:26.536306 kubelet[2119]: I0208 23:30:26.536261 2119 scope.go:115] "RemoveContainer" containerID="405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073" Feb 8 23:30:26.537155 env[1215]: time="2024-02-08T23:30:26.537129657Z" level=info msg="RemoveContainer for 
\"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\"" Feb 8 23:30:26.539779 env[1215]: time="2024-02-08T23:30:26.539752464Z" level=info msg="RemoveContainer for \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\" returns successfully" Feb 8 23:30:26.539931 kubelet[2119]: I0208 23:30:26.539890 2119 scope.go:115] "RemoveContainer" containerID="2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889" Feb 8 23:30:26.541988 env[1215]: time="2024-02-08T23:30:26.540823768Z" level=info msg="RemoveContainer for \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\"" Feb 8 23:30:26.544002 env[1215]: time="2024-02-08T23:30:26.543975470Z" level=info msg="RemoveContainer for \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\" returns successfully" Feb 8 23:30:26.544193 kubelet[2119]: I0208 23:30:26.544161 2119 scope.go:115] "RemoveContainer" containerID="c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff" Feb 8 23:30:26.547697 env[1215]: time="2024-02-08T23:30:26.547641658Z" level=info msg="RemoveContainer for \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\"" Feb 8 23:30:26.550263 env[1215]: time="2024-02-08T23:30:26.550237405Z" level=info msg="RemoveContainer for \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\" returns successfully" Feb 8 23:30:26.550407 kubelet[2119]: I0208 23:30:26.550389 2119 scope.go:115] "RemoveContainer" containerID="aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5" Feb 8 23:30:26.551272 env[1215]: time="2024-02-08T23:30:26.551246982Z" level=info msg="RemoveContainer for \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\"" Feb 8 23:30:26.555123 env[1215]: time="2024-02-08T23:30:26.555082672Z" level=info msg="RemoveContainer for \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\" returns successfully" Feb 8 23:30:26.556398 env[1215]: time="2024-02-08T23:30:26.555555509Z" level=error msg="ContainerStatus for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\": not found" Feb 8 23:30:26.556537 kubelet[2119]: I0208 23:30:26.555332 2119 scope.go:115] "RemoveContainer" containerID="96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33" Feb 8 23:30:26.557476 kubelet[2119]: E0208 23:30:26.557459 2119 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\": not found" containerID="96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33" Feb 8 23:30:26.557577 kubelet[2119]: I0208 23:30:26.557527 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33} err="failed to get container status \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\": rpc error: code = NotFound desc = an error occurred when try to find container \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\": not found" Feb 8 23:30:26.557577 kubelet[2119]: I0208 23:30:26.557540 2119 scope.go:115] "RemoveContainer" containerID="405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073" Feb 8 23:30:26.557834 env[1215]: 
time="2024-02-08T23:30:26.557737641Z" level=error msg="ContainerStatus for \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\": not found" Feb 8 23:30:26.558018 kubelet[2119]: E0208 23:30:26.558002 2119 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\": not found" containerID="405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073" Feb 8 23:30:26.558077 kubelet[2119]: I0208 23:30:26.558036 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073} err="failed to get container status \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\": rpc error: code = NotFound desc = an error occurred when try to find container \"405e4fd24b934755ba6e3e03675e73a96e2ace99e64c52e08d2717a93c75b073\": not found" Feb 8 23:30:26.558077 kubelet[2119]: I0208 23:30:26.558049 2119 scope.go:115] "RemoveContainer" containerID="2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889" Feb 8 23:30:26.559322 env[1215]: time="2024-02-08T23:30:26.559265161Z" level=error msg="ContainerStatus for \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\": not found" Feb 8 23:30:26.559467 kubelet[2119]: E0208 23:30:26.559440 2119 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\": not found" containerID="2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889" Feb 8 23:30:26.559467 kubelet[2119]: I0208 23:30:26.559465 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889} err="failed to get container status \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e260ec7854f6113307eb0069a6aa7204de8eaa50763ba375d728c31031bb889\": not found" Feb 8 23:30:26.559586 kubelet[2119]: I0208 23:30:26.559473 2119 scope.go:115] "RemoveContainer" containerID="c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff" Feb 8 23:30:26.559680 env[1215]: time="2024-02-08T23:30:26.559638499Z" level=error msg="ContainerStatus for \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\": not found" Feb 8 23:30:26.559778 kubelet[2119]: E0208 23:30:26.559763 2119 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\": not found" containerID="c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff" Feb 8 23:30:26.559826 kubelet[2119]: I0208 23:30:26.559793 2119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff} err="failed to get container status \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\": rpc error: code = NotFound desc = an error occurred when try to find container \"c072c3bfb05384c68e3ec05f5f0b429f02fecc14835f6459d39a11160f5b4fff\": not found" Feb 8 23:30:26.559826 kubelet[2119]: I0208 23:30:26.559805 2119 scope.go:115] "RemoveContainer" containerID="aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5" Feb 8 23:30:26.560072 env[1215]: time="2024-02-08T23:30:26.560012629Z" level=error msg="ContainerStatus for \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\": not found" Feb 8 23:30:26.560263 kubelet[2119]: E0208 23:30:26.560244 2119 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\": not found" containerID="aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5" Feb 8 23:30:26.560263 kubelet[2119]: I0208 23:30:26.560271 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5} err="failed to get container status \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"aae3e8a842443efc20904764e94aa7b59d586a15bb634d6a090b05e88be337f5\": not found" Feb 8 23:30:26.560399 kubelet[2119]: I0208 23:30:26.560279 2119 scope.go:115] "RemoveContainer" containerID="c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed" Feb 8 23:30:26.561335 env[1215]: time="2024-02-08T23:30:26.561289462Z" level=info msg="RemoveContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\"" Feb 8 23:30:26.563955 env[1215]: time="2024-02-08T23:30:26.563924754Z" level=info msg="RemoveContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" returns successfully" Feb 8 23:30:26.564085 kubelet[2119]: I0208 23:30:26.564067 2119 scope.go:115] "RemoveContainer" containerID="c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed" Feb 8 23:30:26.564257 env[1215]: time="2024-02-08T23:30:26.564207291Z" level=error msg="ContainerStatus for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\": not found" Feb 8 23:30:26.564331 kubelet[2119]: E0208 23:30:26.564320 2119 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\": not found" containerID="c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed" Feb 8 23:30:26.564364 kubelet[2119]: I0208 23:30:26.564357 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed} err="failed to get container status 
\"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\": not found" Feb 8 23:30:26.603857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33-rootfs.mount: Deactivated successfully. Feb 8 23:30:26.604020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd-rootfs.mount: Deactivated successfully. Feb 8 23:30:26.604112 systemd[1]: var-lib-kubelet-pods-844930d1\x2d9ca9\x2d46f8\x2d8c6c\x2d39ad3eada113-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfs49n.mount: Deactivated successfully. Feb 8 23:30:26.604223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c-rootfs.mount: Deactivated successfully. Feb 8 23:30:26.604311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c-shm.mount: Deactivated successfully. Feb 8 23:30:26.604394 systemd[1]: var-lib-kubelet-pods-a4cff64a\x2de3ae\x2d441e\x2d8945\x2d9c14e1d55415-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9r95.mount: Deactivated successfully. Feb 8 23:30:26.604474 systemd[1]: var-lib-kubelet-pods-a4cff64a\x2de3ae\x2d441e\x2d8945\x2d9c14e1d55415-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:30:26.604553 systemd[1]: var-lib-kubelet-pods-a4cff64a\x2de3ae\x2d441e\x2d8945\x2d9c14e1d55415-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:30:27.342516 env[1215]: time="2024-02-08T23:30:27.342464553Z" level=info msg="StopContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" with timeout 1 (s)" Feb 8 23:30:27.342905 env[1215]: time="2024-02-08T23:30:27.342515890Z" level=error msg="StopContainer for \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\": not found" Feb 8 23:30:27.342905 env[1215]: time="2024-02-08T23:30:27.342527272Z" level=info msg="StopContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" with timeout 1 (s)" Feb 8 23:30:27.342905 env[1215]: time="2024-02-08T23:30:27.342582597Z" level=error msg="StopContainer for \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\": not found" Feb 8 23:30:27.343132 kubelet[2119]: E0208 23:30:27.343097 2119 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33\": not found" containerID="96129edd71d4e7ed515a1057577a18c6e7de9ba7b495261776bc1a3e7ae72a33" Feb 8 23:30:27.343375 kubelet[2119]: E0208 23:30:27.343355 2119 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed\": not found" 
containerID="c3a0e53182b4b543fb7403f8e63a321673bdcd9bdf6fb35ed644cb92b2a1a3ed" Feb 8 23:30:27.343426 env[1215]: time="2024-02-08T23:30:27.343386703Z" level=info msg="StopPodSandbox for \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\"" Feb 8 23:30:27.343501 env[1215]: time="2024-02-08T23:30:27.343461745Z" level=info msg="TearDown network for sandbox \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" successfully" Feb 8 23:30:27.343501 env[1215]: time="2024-02-08T23:30:27.343496371Z" level=info msg="StopPodSandbox for \"a843c899b40cd2dc4b1540cc8efd77963c7f8f9dbecc547ce321399692ce228c\" returns successfully" Feb 8 23:30:27.343621 env[1215]: time="2024-02-08T23:30:27.343571183Z" level=info msg="StopPodSandbox for \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\"" Feb 8 23:30:27.343621 env[1215]: time="2024-02-08T23:30:27.343608073Z" level=info msg="TearDown network for sandbox \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\" successfully" Feb 8 23:30:27.343680 env[1215]: time="2024-02-08T23:30:27.343625766Z" level=info msg="StopPodSandbox for \"06ef94859ca9c51fe9763dd27585e5394e7aac247192282df2088ca8cceb64dd\" returns successfully" Feb 8 23:30:27.344531 kubelet[2119]: I0208 23:30:27.344511 2119 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=844930d1-9ca9-46f8-8c6c-39ad3eada113 path="/var/lib/kubelet/pods/844930d1-9ca9-46f8-8c6c-39ad3eada113/volumes" Feb 8 23:30:27.344892 kubelet[2119]: I0208 23:30:27.344877 2119 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a4cff64a-e3ae-441e-8945-9c14e1d55415 path="/var/lib/kubelet/pods/a4cff64a-e3ae-441e-8945-9c14e1d55415/volumes" Feb 8 23:30:27.567389 sshd[3891]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:27.569582 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:40364.service. Feb 8 23:30:27.570077 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:40362.service: Deactivated successfully. Feb 8 23:30:27.570693 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:30:27.572284 systemd-logind[1188]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:30:27.573371 systemd-logind[1188]: Removed session 24. Feb 8 23:30:27.609059 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 40364 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:27.610238 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:27.613656 systemd-logind[1188]: New session 25 of user core. Feb 8 23:30:27.614431 systemd[1]: Started session-25.scope. Feb 8 23:30:28.036371 sshd[4054]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:28.040037 systemd[1]: Started sshd@25-10.0.0.126:22-10.0.0.1:54824.service. Feb 8 23:30:28.041123 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:40364.service: Deactivated successfully. Feb 8 23:30:28.041981 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:30:28.044469 systemd-logind[1188]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:30:28.048804 systemd-logind[1188]: Removed session 25. 
Feb 8 23:30:28.057809 kubelet[2119]: I0208 23:30:28.057764 2119 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:30:28.057932 kubelet[2119]: E0208 23:30:28.057839 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4cff64a-e3ae-441e-8945-9c14e1d55415" containerName="clean-cilium-state" Feb 8 23:30:28.057932 kubelet[2119]: E0208 23:30:28.057856 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4cff64a-e3ae-441e-8945-9c14e1d55415" containerName="cilium-agent" Feb 8 23:30:28.057932 kubelet[2119]: E0208 23:30:28.057867 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4cff64a-e3ae-441e-8945-9c14e1d55415" containerName="mount-cgroup" Feb 8 23:30:28.057932 kubelet[2119]: E0208 23:30:28.057883 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4cff64a-e3ae-441e-8945-9c14e1d55415" containerName="apply-sysctl-overwrites" Feb 8 23:30:28.057932 kubelet[2119]: E0208 23:30:28.057898 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="844930d1-9ca9-46f8-8c6c-39ad3eada113" containerName="cilium-operator" Feb 8 23:30:28.057932 kubelet[2119]: E0208 23:30:28.057908 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4cff64a-e3ae-441e-8945-9c14e1d55415" containerName="mount-bpf-fs" Feb 8 23:30:28.058091 kubelet[2119]: I0208 23:30:28.057947 2119 memory_manager.go:346] "RemoveStaleState removing state" podUID="a4cff64a-e3ae-441e-8945-9c14e1d55415" containerName="cilium-agent" Feb 8 23:30:28.058091 kubelet[2119]: I0208 23:30:28.057976 2119 memory_manager.go:346] "RemoveStaleState removing state" podUID="844930d1-9ca9-46f8-8c6c-39ad3eada113" containerName="cilium-operator" Feb 8 23:30:28.112161 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 54824 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:28.113162 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:28.116526 systemd-logind[1188]: New session 26 of user core. Feb 8 23:30:28.117252 systemd[1]: Started session-26.scope. 
Feb 8 23:30:28.145252 kubelet[2119]: I0208 23:30:28.145213 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-lib-modules\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145370 kubelet[2119]: I0208 23:30:28.145263 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-kernel\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145370 kubelet[2119]: I0208 23:30:28.145285 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-run\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145423 kubelet[2119]: I0208 23:30:28.145408 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-bpf-maps\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145456 kubelet[2119]: I0208 23:30:28.145447 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-clustermesh-secrets\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145572 kubelet[2119]: I0208 23:30:28.145542 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-ipsec-secrets\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145638 kubelet[2119]: I0208 23:30:28.145620 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnt8\" (UniqueName: \"kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-kube-api-access-psnt8\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145738 kubelet[2119]: I0208 23:30:28.145724 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cni-path\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145798 kubelet[2119]: I0208 23:30:28.145762 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hostproc\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145798 kubelet[2119]: I0208 23:30:28.145781 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-xtables-lock\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145853 kubelet[2119]: I0208 23:30:28.145819 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-config-path\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145880 kubelet[2119]: I0208 23:30:28.145860 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-cgroup\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145917 kubelet[2119]: I0208 23:30:28.145904 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-etc-cni-netd\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145949 kubelet[2119]: I0208 23:30:28.145940 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-net\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.145995 kubelet[2119]: I0208 23:30:28.145962 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hubble-tls\") pod \"cilium-cskmv\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " pod="kube-system/cilium-cskmv" Feb 8 23:30:28.229051 sshd[4067]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:28.232563 systemd[1]: Started sshd@26-10.0.0.126:22-10.0.0.1:54830.service. Feb 8 23:30:28.233024 systemd[1]: sshd@25-10.0.0.126:22-10.0.0.1:54824.service: Deactivated successfully. Feb 8 23:30:28.234701 systemd-logind[1188]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:30:28.234814 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:30:28.239625 systemd-logind[1188]: Removed session 26. Feb 8 23:30:28.276688 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:30:28.277781 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:28.281129 systemd-logind[1188]: New session 27 of user core. Feb 8 23:30:28.282054 systemd[1]: Started session-27.scope. 
Feb 8 23:30:28.342476 kubelet[2119]: E0208 23:30:28.342371 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:30:28.376282 kubelet[2119]: E0208 23:30:28.376238 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:30:28.376932 env[1215]: time="2024-02-08T23:30:28.376894910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cskmv,Uid:f4b4cee9-2919-4c24-9d3d-7fb964f8be40,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:28.392090 env[1215]: time="2024-02-08T23:30:28.392010744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:28.392090 env[1215]: time="2024-02-08T23:30:28.392051241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:28.392090 env[1215]: time="2024-02-08T23:30:28.392063204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:28.392290 env[1215]: time="2024-02-08T23:30:28.392230100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3 pid=4105 runtime=io.containerd.runc.v2 Feb 8 23:30:28.426490 env[1215]: time="2024-02-08T23:30:28.426442440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cskmv,Uid:f4b4cee9-2919-4c24-9d3d-7fb964f8be40,Namespace:kube-system,Attempt:0,} returns sandbox id \"32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3\"" Feb 8 23:30:28.428008 kubelet[2119]: E0208 23:30:28.427484 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:30:28.437071 env[1215]: time="2024-02-08T23:30:28.437023462Z" level=info msg="CreateContainer within sandbox \"32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:30:28.447857 env[1215]: time="2024-02-08T23:30:28.447820324Z" level=info msg="CreateContainer within sandbox \"32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc\"" Feb 8 23:30:28.448459 env[1215]: time="2024-02-08T23:30:28.448437864Z" level=info msg="StartContainer for \"e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc\"" Feb 8 23:30:28.486752 env[1215]: time="2024-02-08T23:30:28.486703233Z" level=info msg="StartContainer for \"e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc\" returns successfully" Feb 8 23:30:28.520070 env[1215]: time="2024-02-08T23:30:28.520010055Z" level=info msg="shim disconnected" id=e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc Feb 8 23:30:28.520287 env[1215]: time="2024-02-08T23:30:28.520109684Z" level=warning msg="cleaning up after shim disconnected" id=e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc namespace=k8s.io Feb 8 23:30:28.520287 env[1215]: time="2024-02-08T23:30:28.520130583Z" level=info 
msg="cleaning up dead shim" Feb 8 23:30:28.527208 env[1215]: time="2024-02-08T23:30:28.527156672Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4187 runtime=io.containerd.runc.v2\n" Feb 8 23:30:28.537819 env[1215]: time="2024-02-08T23:30:28.537782439Z" level=info msg="StopPodSandbox for \"32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3\"" Feb 8 23:30:28.537944 env[1215]: time="2024-02-08T23:30:28.537834918Z" level=info msg="Container to stop \"e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:30:28.560763 env[1215]: time="2024-02-08T23:30:28.560708239Z" level=info msg="shim disconnected" id=32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3 Feb 8 23:30:28.560933 env[1215]: time="2024-02-08T23:30:28.560765096Z" level=warning msg="cleaning up after shim disconnected" id=32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3 namespace=k8s.io Feb 8 23:30:28.560933 env[1215]: time="2024-02-08T23:30:28.560776798Z" level=info msg="cleaning up dead shim" Feb 8 23:30:28.566795 env[1215]: time="2024-02-08T23:30:28.566741092Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4219 runtime=io.containerd.runc.v2\n" Feb 8 23:30:28.567104 env[1215]: time="2024-02-08T23:30:28.567074515Z" level=info msg="TearDown network for sandbox \"32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3\" successfully" Feb 8 23:30:28.567138 env[1215]: time="2024-02-08T23:30:28.567105603Z" level=info msg="StopPodSandbox for \"32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3\" returns successfully" Feb 8 23:30:28.650128 kubelet[2119]: I0208 23:30:28.649479 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-run\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650128 kubelet[2119]: I0208 23:30:28.649526 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-xtables-lock\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650128 kubelet[2119]: I0208 23:30:28.649549 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-etc-cni-netd\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650128 kubelet[2119]: I0208 23:30:28.649572 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-kernel\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650128 kubelet[2119]: I0208 23:30:28.649596 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cni-path\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 
23:30:28.650128 kubelet[2119]: I0208 23:30:28.649587 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650391 kubelet[2119]: I0208 23:30:28.649592 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650391 kubelet[2119]: I0208 23:30:28.649618 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-bpf-maps\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650391 kubelet[2119]: I0208 23:30:28.649630 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650391 kubelet[2119]: I0208 23:30:28.649631 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cni-path" (OuterVolumeSpecName: "cni-path") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650391 kubelet[2119]: I0208 23:30:28.649642 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-lib-modules\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650504 kubelet[2119]: I0208 23:30:28.649648 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650504 kubelet[2119]: I0208 23:30:28.649652 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650504 kubelet[2119]: I0208 23:30:28.649665 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-cgroup\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650504 kubelet[2119]: I0208 23:30:28.649664 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650504 kubelet[2119]: I0208 23:30:28.649700 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-net\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650616 kubelet[2119]: I0208 23:30:28.649715 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650616 kubelet[2119]: I0208 23:30:28.649730 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-clustermesh-secrets\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650616 kubelet[2119]: I0208 23:30:28.649742 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.650616 kubelet[2119]: I0208 23:30:28.649757 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-ipsec-secrets\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650616 kubelet[2119]: I0208 23:30:28.649785 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnt8\" (UniqueName: \"kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-kube-api-access-psnt8\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649808 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hubble-tls\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649831 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hostproc\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649855 2119 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-config-path\") pod \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\" (UID: \"f4b4cee9-2919-4c24-9d3d-7fb964f8be40\") " Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649896 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649911 2119 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649924 2119 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650742 kubelet[2119]: I0208 23:30:28.649938 2119 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650891 kubelet[2119]: I0208 23:30:28.649954 2119 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650891 kubelet[2119]: I0208 23:30:28.650028 2119 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650891 kubelet[2119]: I0208 23:30:28.650043 2119 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650891 kubelet[2119]: I0208 23:30:28.650064 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650891 kubelet[2119]: I0208 23:30:28.650078 2119 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.650891 kubelet[2119]: W0208 23:30:28.650224 2119 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f4b4cee9-2919-4c24-9d3d-7fb964f8be40/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:30:28.650891 kubelet[2119]: I0208 23:30:28.650403 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hostproc" (OuterVolumeSpecName: "hostproc") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:28.652537 kubelet[2119]: I0208 23:30:28.652489 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:30:28.652748 kubelet[2119]: I0208 23:30:28.652713 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:28.652748 kubelet[2119]: I0208 23:30:28.652729 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-kube-api-access-psnt8" (OuterVolumeSpecName: "kube-api-access-psnt8") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "kube-api-access-psnt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:28.652873 kubelet[2119]: I0208 23:30:28.652755 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:30:28.654402 kubelet[2119]: I0208 23:30:28.654378 2119 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f4b4cee9-2919-4c24-9d3d-7fb964f8be40" (UID: "f4b4cee9-2919-4c24-9d3d-7fb964f8be40"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:30:28.750718 kubelet[2119]: I0208 23:30:28.750676 2119 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.750718 kubelet[2119]: I0208 23:30:28.750714 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.750718 kubelet[2119]: I0208 23:30:28.750726 2119 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-psnt8\" (UniqueName: \"kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-kube-api-access-psnt8\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.750718 kubelet[2119]: I0208 23:30:28.750737 2119 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.750990 kubelet[2119]: I0208 23:30:28.750745 2119 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:28.750990 kubelet[2119]: I0208 23:30:28.750755 2119 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b4cee9-2919-4c24-9d3d-7fb964f8be40-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 8 23:30:29.251339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3-rootfs.mount: Deactivated successfully. Feb 8 23:30:29.251511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32896002606c03422419cb8e9bbc5fb0893335abd4da4087e7a518edc4f243e3-shm.mount: Deactivated successfully. Feb 8 23:30:29.251594 systemd[1]: var-lib-kubelet-pods-f4b4cee9\x2d2919\x2d4c24\x2d9d3d\x2d7fb964f8be40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsnt8.mount: Deactivated successfully. Feb 8 23:30:29.251672 systemd[1]: var-lib-kubelet-pods-f4b4cee9\x2d2919\x2d4c24\x2d9d3d\x2d7fb964f8be40-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:30:29.251797 systemd[1]: var-lib-kubelet-pods-f4b4cee9\x2d2919\x2d4c24\x2d9d3d\x2d7fb964f8be40-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:30:29.251877 systemd[1]: var-lib-kubelet-pods-f4b4cee9\x2d2919\x2d4c24\x2d9d3d\x2d7fb964f8be40-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 8 23:30:29.542318 kubelet[2119]: I0208 23:30:29.541179 2119 scope.go:115] "RemoveContainer" containerID="e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc"
Feb 8 23:30:29.543835 env[1215]: time="2024-02-08T23:30:29.543006404Z" level=info msg="RemoveContainer for \"e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc\""
Feb 8 23:30:29.557941 env[1215]: time="2024-02-08T23:30:29.557888598Z" level=info msg="RemoveContainer for \"e13f9280bea0d38e5273915a26f136af2a39c96f86d57d29c6f39869b87f7fcc\" returns successfully"
Feb 8 23:30:29.584252 kubelet[2119]: I0208 23:30:29.584214 2119 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:30:29.584448 kubelet[2119]: E0208 23:30:29.584269 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4b4cee9-2919-4c24-9d3d-7fb964f8be40" containerName="mount-cgroup"
Feb 8 23:30:29.584448 kubelet[2119]: I0208 23:30:29.584295 2119 memory_manager.go:346] "RemoveStaleState removing state" podUID="f4b4cee9-2919-4c24-9d3d-7fb964f8be40" containerName="mount-cgroup"
Feb 8 23:30:29.656367 kubelet[2119]: I0208 23:30:29.656308 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-cilium-ipsec-secrets\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656367 kubelet[2119]: I0208 23:30:29.656366 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-hubble-tls\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656585 kubelet[2119]: I0208 23:30:29.656394 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-bpf-maps\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656585 kubelet[2119]: I0208 23:30:29.656418 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-cni-path\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656585 kubelet[2119]: I0208 23:30:29.656447 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-lib-modules\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656585 kubelet[2119]: I0208 23:30:29.656473 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-host-proc-sys-kernel\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656585 kubelet[2119]: I0208 23:30:29.656500 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8z9\" (UniqueName: \"kubernetes.io/projected/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-kube-api-access-qm8z9\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656585 kubelet[2119]: I0208 23:30:29.656530 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-cilium-cgroup\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656762 kubelet[2119]: I0208 23:30:29.656554 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-cilium-config-path\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656762 kubelet[2119]: I0208 23:30:29.656578 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-hostproc\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656762 kubelet[2119]: I0208 23:30:29.656603 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-clustermesh-secrets\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656762 kubelet[2119]: I0208 23:30:29.656631 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-host-proc-sys-net\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656762 kubelet[2119]: I0208 23:30:29.656655 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-cilium-run\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656762 kubelet[2119]: I0208 23:30:29.656676 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-etc-cni-netd\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:29.656902 kubelet[2119]: I0208 23:30:29.656711 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b617be4-c852-4bb6-b8bd-14dbda37f1f7-xtables-lock\") pod \"cilium-dfxsc\" (UID: \"0b617be4-c852-4bb6-b8bd-14dbda37f1f7\") " pod="kube-system/cilium-dfxsc"
Feb 8 23:30:30.187878 kubelet[2119]: E0208 23:30:30.187833 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:30.188416 env[1215]: time="2024-02-08T23:30:30.188377897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfxsc,Uid:0b617be4-c852-4bb6-b8bd-14dbda37f1f7,Namespace:kube-system,Attempt:0,}"
Feb 8 23:30:30.200356 env[1215]: time="2024-02-08T23:30:30.200300743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:30:30.200356 env[1215]: time="2024-02-08T23:30:30.200339617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:30:30.200569 env[1215]: time="2024-02-08T23:30:30.200357069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:30:30.200611 env[1215]: time="2024-02-08T23:30:30.200537963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958 pid=4247 runtime=io.containerd.runc.v2
Feb 8 23:30:30.230053 env[1215]: time="2024-02-08T23:30:30.230004560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfxsc,Uid:0b617be4-c852-4bb6-b8bd-14dbda37f1f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\""
Feb 8 23:30:30.230823 kubelet[2119]: E0208 23:30:30.230650 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:30.232506 env[1215]: time="2024-02-08T23:30:30.232474333Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:30:30.243202 env[1215]: time="2024-02-08T23:30:30.243157437Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e7632060c492391611762d386c6d19564526d21082dc906d5527ec6e69050dc\""
Feb 8 23:30:30.244042 env[1215]: time="2024-02-08T23:30:30.244004193Z" level=info msg="StartContainer for \"4e7632060c492391611762d386c6d19564526d21082dc906d5527ec6e69050dc\""
Feb 8 23:30:30.286537 env[1215]: time="2024-02-08T23:30:30.286476229Z" level=info msg="StartContainer for \"4e7632060c492391611762d386c6d19564526d21082dc906d5527ec6e69050dc\" returns successfully"
Feb 8 23:30:30.304579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e7632060c492391611762d386c6d19564526d21082dc906d5527ec6e69050dc-rootfs.mount: Deactivated successfully.
Feb 8 23:30:30.311992 env[1215]: time="2024-02-08T23:30:30.311927504Z" level=info msg="shim disconnected" id=4e7632060c492391611762d386c6d19564526d21082dc906d5527ec6e69050dc
Feb 8 23:30:30.312084 env[1215]: time="2024-02-08T23:30:30.312003077Z" level=warning msg="cleaning up after shim disconnected" id=4e7632060c492391611762d386c6d19564526d21082dc906d5527ec6e69050dc namespace=k8s.io
Feb 8 23:30:30.312084 env[1215]: time="2024-02-08T23:30:30.312015901Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:30.318273 env[1215]: time="2024-02-08T23:30:30.318246525Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4330 runtime=io.containerd.runc.v2\n"
Feb 8 23:30:30.545394 kubelet[2119]: E0208 23:30:30.545366 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:30.548691 env[1215]: time="2024-02-08T23:30:30.548641808Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 8 23:30:30.620782 env[1215]: time="2024-02-08T23:30:30.620730075Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16f00a8deb1604dbb91d84b2ce38cab2f92cfe89c45911fed25451cdd990ff5a\""
Feb 8 23:30:30.621618 env[1215]: time="2024-02-08T23:30:30.621592029Z" level=info msg="StartContainer for \"16f00a8deb1604dbb91d84b2ce38cab2f92cfe89c45911fed25451cdd990ff5a\""
Feb 8 23:30:30.660532 env[1215]: time="2024-02-08T23:30:30.660490008Z" level=info msg="StartContainer for \"16f00a8deb1604dbb91d84b2ce38cab2f92cfe89c45911fed25451cdd990ff5a\" returns successfully"
Feb 8 23:30:30.681778 env[1215]: time="2024-02-08T23:30:30.681721092Z" level=info msg="shim disconnected" id=16f00a8deb1604dbb91d84b2ce38cab2f92cfe89c45911fed25451cdd990ff5a
Feb 8 23:30:30.682003 env[1215]: time="2024-02-08T23:30:30.681783701Z" level=warning msg="cleaning up after shim disconnected" id=16f00a8deb1604dbb91d84b2ce38cab2f92cfe89c45911fed25451cdd990ff5a namespace=k8s.io
Feb 8 23:30:30.682003 env[1215]: time="2024-02-08T23:30:30.681797627Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:30.687542 env[1215]: time="2024-02-08T23:30:30.687498667Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4391 runtime=io.containerd.runc.v2\n"
Feb 8 23:30:31.341694 kubelet[2119]: E0208 23:30:31.341627 2119 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-kdgmd" podUID=6093a5ae-c2e9-44a5-8fb6-65151ed8a89b
Feb 8 23:30:31.344615 kubelet[2119]: I0208 23:30:31.344573 2119 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f4b4cee9-2919-4c24-9d3d-7fb964f8be40 path="/var/lib/kubelet/pods/f4b4cee9-2919-4c24-9d3d-7fb964f8be40/volumes"
Feb 8 23:30:31.381175 kubelet[2119]: E0208 23:30:31.381156 2119 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:30:31.547812 kubelet[2119]: E0208 23:30:31.547777 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:31.550503 env[1215]: time="2024-02-08T23:30:31.550446394Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 8 23:30:31.565796 env[1215]: time="2024-02-08T23:30:31.565748368Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b\""
Feb 8 23:30:31.566276 env[1215]: time="2024-02-08T23:30:31.566246843Z" level=info msg="StartContainer for \"ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b\""
Feb 8 23:30:31.613652 env[1215]: time="2024-02-08T23:30:31.613534416Z" level=info msg="StartContainer for \"ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b\" returns successfully"
Feb 8 23:30:31.641871 env[1215]: time="2024-02-08T23:30:31.641815543Z" level=info msg="shim disconnected" id=ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b
Feb 8 23:30:31.641871 env[1215]: time="2024-02-08T23:30:31.641870868Z" level=warning msg="cleaning up after shim disconnected" id=ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b namespace=k8s.io
Feb 8 23:30:31.641871 env[1215]: time="2024-02-08T23:30:31.641886268Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:31.648902 env[1215]: time="2024-02-08T23:30:31.648846221Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4447 runtime=io.containerd.runc.v2\n"
Feb 8 23:30:32.251478 systemd[1]: run-containerd-runc-k8s.io-ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b-runc.ozD9sV.mount: Deactivated successfully.
Feb 8 23:30:32.251619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac789dbc3acd69a6c0ab2fd88103939216aea37226cbea05ada8533f81e5cc7b-rootfs.mount: Deactivated successfully.
Feb 8 23:30:32.341458 kubelet[2119]: E0208 23:30:32.341416 2119 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-55g7g" podUID=9afa8525-1fa1-414d-9eac-3aa21f6e337b
Feb 8 23:30:32.550765 kubelet[2119]: E0208 23:30:32.550622 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:32.553428 env[1215]: time="2024-02-08T23:30:32.553388986Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:30:32.566000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902407311.mount: Deactivated successfully.
Feb 8 23:30:32.571021 env[1215]: time="2024-02-08T23:30:32.570983910Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9adb516f02dde3dca54b5bb7589daed1bbab336a1fe1b52b5ada40d284dfeb8e\""
Feb 8 23:30:32.573177 env[1215]: time="2024-02-08T23:30:32.573143993Z" level=info msg="StartContainer for \"9adb516f02dde3dca54b5bb7589daed1bbab336a1fe1b52b5ada40d284dfeb8e\""
Feb 8 23:30:32.607600 env[1215]: time="2024-02-08T23:30:32.607543501Z" level=info msg="StartContainer for \"9adb516f02dde3dca54b5bb7589daed1bbab336a1fe1b52b5ada40d284dfeb8e\" returns successfully"
Feb 8 23:30:32.623876 env[1215]: time="2024-02-08T23:30:32.623819424Z" level=info msg="shim disconnected" id=9adb516f02dde3dca54b5bb7589daed1bbab336a1fe1b52b5ada40d284dfeb8e
Feb 8 23:30:32.623876 env[1215]: time="2024-02-08T23:30:32.623874198Z" level=warning msg="cleaning up after shim disconnected" id=9adb516f02dde3dca54b5bb7589daed1bbab336a1fe1b52b5ada40d284dfeb8e namespace=k8s.io
Feb 8 23:30:32.623876 env[1215]: time="2024-02-08T23:30:32.623885500Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:32.636502 env[1215]: time="2024-02-08T23:30:32.636442807Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4502 runtime=io.containerd.runc.v2\n"
Feb 8 23:30:33.341436 kubelet[2119]: E0208 23:30:33.341398 2119 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-kdgmd" podUID=6093a5ae-c2e9-44a5-8fb6-65151ed8a89b
Feb 8 23:30:33.553495 kubelet[2119]: E0208 23:30:33.553468 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:33.555716 env[1215]: time="2024-02-08T23:30:33.555684016Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:30:33.571509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726097487.mount: Deactivated successfully.
Feb 8 23:30:33.572273 env[1215]: time="2024-02-08T23:30:33.572235685Z" level=info msg="CreateContainer within sandbox \"0b424237bd43a697773912018330f54b35245b1225643be32d961271a8c58958\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2046ec589af519c4c765516cc0a64268039bc5117e54a331b9483e32b07b2d76\""
Feb 8 23:30:33.572711 env[1215]: time="2024-02-08T23:30:33.572684716Z" level=info msg="StartContainer for \"2046ec589af519c4c765516cc0a64268039bc5117e54a331b9483e32b07b2d76\""
Feb 8 23:30:33.621283 env[1215]: time="2024-02-08T23:30:33.617403681Z" level=info msg="StartContainer for \"2046ec589af519c4c765516cc0a64268039bc5117e54a331b9483e32b07b2d76\" returns successfully"
Feb 8 23:30:33.848996 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 8 23:30:34.065345 kubelet[2119]: I0208 23:30:34.065310 2119 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:30:34.065246602 +0000 UTC m=+102.857358583 LastTransitionTime:2024-02-08 23:30:34.065246602 +0000 UTC m=+102.857358583 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 8 23:30:34.341788 kubelet[2119]: E0208 23:30:34.341659 2119 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-55g7g" podUID=9afa8525-1fa1-414d-9eac-3aa21f6e337b
Feb 8 23:30:34.557314 kubelet[2119]: E0208 23:30:34.557288 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:34.567148 kubelet[2119]: I0208 23:30:34.567108 2119 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dfxsc" podStartSLOduration=5.5670828 pod.CreationTimestamp="2024-02-08 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:34.566743829 +0000 UTC m=+103.358855809" watchObservedRunningTime="2024-02-08 23:30:34.5670828 +0000 UTC m=+103.359194780"
Feb 8 23:30:35.341475 kubelet[2119]: E0208 23:30:35.341430 2119 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-kdgmd" podUID=6093a5ae-c2e9-44a5-8fb6-65151ed8a89b
Feb 8 23:30:35.559386 kubelet[2119]: E0208 23:30:35.559356 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:36.247056 systemd-networkd[1077]: lxc_health: Link UP
Feb 8 23:30:36.252556 systemd-networkd[1077]: lxc_health: Gained carrier
Feb 8 23:30:36.252991 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:30:36.342093 kubelet[2119]: E0208 23:30:36.342058 2119 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-55g7g" podUID=9afa8525-1fa1-414d-9eac-3aa21f6e337b
Feb 8 23:30:36.563051 kubelet[2119]: E0208 23:30:36.561504 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:37.341864 kubelet[2119]: E0208 23:30:37.341824 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:37.694167 systemd-networkd[1077]: lxc_health: Gained IPv6LL
Feb 8 23:30:38.189713 kubelet[2119]: E0208 23:30:38.189680 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:38.342425 kubelet[2119]: E0208 23:30:38.342389 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:38.564288 kubelet[2119]: E0208 23:30:38.564263 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:39.565795 kubelet[2119]: E0208 23:30:39.565761 2119 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:30:42.864449 sshd[4081]: pam_unix(sshd:session): session closed for user core
Feb 8 23:30:42.866686 systemd[1]: sshd@26-10.0.0.126:22-10.0.0.1:54830.service: Deactivated successfully.
Feb 8 23:30:42.867570 systemd[1]: session-27.scope: Deactivated successfully.
Feb 8 23:30:42.867576 systemd-logind[1188]: Session 27 logged out. Waiting for processes to exit.
Feb 8 23:30:42.868349 systemd-logind[1188]: Removed session 27.