Feb 8 23:26:22.798606 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:26:22.798632 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:26:22.798643 kernel: BIOS-provided physical RAM map:
Feb 8 23:26:22.798651 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 8 23:26:22.798658 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 8 23:26:22.798666 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 8 23:26:22.798675 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 8 23:26:22.798683 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 8 23:26:22.798692 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 8 23:26:22.798700 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 8 23:26:22.798708 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 8 23:26:22.798715 kernel: NX (Execute Disable) protection: active
Feb 8 23:26:22.798723 kernel: SMBIOS 2.8 present.
Feb 8 23:26:22.798731 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 8 23:26:22.798743 kernel: Hypervisor detected: KVM
Feb 8 23:26:22.798751 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 8 23:26:22.798759 kernel: kvm-clock: cpu 0, msr 8ffaa001, primary cpu clock
Feb 8 23:26:22.798768 kernel: kvm-clock: using sched offset of 2235347677 cycles
Feb 8 23:26:22.798777 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 8 23:26:22.798785 kernel: tsc: Detected 2794.750 MHz processor
Feb 8 23:26:22.798794 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:26:22.798803 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:26:22.798812 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 8 23:26:22.798822 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:26:22.798831 kernel: Using GB pages for direct mapping
Feb 8 23:26:22.798839 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:26:22.798847 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 8 23:26:22.798856 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:26:22.798865 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:26:22.798873 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:26:22.798882 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 8 23:26:22.798891 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:26:22.798901 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:26:22.798909 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:26:22.798918 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 8 23:26:22.798927 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 8 23:26:22.798935 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 8 23:26:22.798944 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 8 23:26:22.798952 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 8 23:26:22.798961 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 8 23:26:22.798994 kernel: No NUMA configuration found
Feb 8 23:26:22.799004 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 8 23:26:22.799013 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 8 23:26:22.799023 kernel: Zone ranges:
Feb 8 23:26:22.799032 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:26:22.799041 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 8 23:26:22.799058 kernel: Normal empty
Feb 8 23:26:22.799067 kernel: Movable zone start for each node
Feb 8 23:26:22.799076 kernel: Early memory node ranges
Feb 8 23:26:22.799086 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 8 23:26:22.799095 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 8 23:26:22.799104 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 8 23:26:22.799113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:26:22.799122 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 8 23:26:22.799131 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 8 23:26:22.799142 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 8 23:26:22.799151 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 8 23:26:22.799160 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:26:22.799170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 8 23:26:22.799179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 8 23:26:22.799188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:26:22.799197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 8 23:26:22.799206 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 8 23:26:22.799215 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:26:22.799226 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 8 23:26:22.799235 kernel: TSC deadline timer available
Feb 8 23:26:22.799244 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 8 23:26:22.799253 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 8 23:26:22.799262 kernel: kvm-guest: setup PV sched yield
Feb 8 23:26:22.799272 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 8 23:26:22.799281 kernel: Booting paravirtualized kernel on KVM
Feb 8 23:26:22.799290 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:26:22.799299 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 8 23:26:22.799310 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 8 23:26:22.799319 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 8 23:26:22.799328 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 8 23:26:22.799337 kernel: kvm-guest: setup async PF for cpu 0
Feb 8 23:26:22.799346 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 8 23:26:22.799355 kernel: kvm-guest: PV spinlocks enabled
Feb 8 23:26:22.799364 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:26:22.799373 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 8 23:26:22.799382 kernel: Policy zone: DMA32
Feb 8 23:26:22.799393 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:26:22.799404 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:26:22.799413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:26:22.799423 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 8 23:26:22.799432 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:26:22.799441 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 8 23:26:22.799451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 8 23:26:22.799460 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:26:22.799469 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:26:22.799480 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:26:22.799490 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:26:22.799499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 8 23:26:22.799508 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:26:22.799517 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:26:22.799526 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:26:22.799536 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 8 23:26:22.799545 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 8 23:26:22.799554 kernel: random: crng init done
Feb 8 23:26:22.799564 kernel: Console: colour VGA+ 80x25
Feb 8 23:26:22.799573 kernel: printk: console [ttyS0] enabled
Feb 8 23:26:22.799582 kernel: ACPI: Core revision 20210730
Feb 8 23:26:22.799592 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 8 23:26:22.799601 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:26:22.799610 kernel: x2apic enabled
Feb 8 23:26:22.799619 kernel: Switched APIC routing to physical x2apic.
Feb 8 23:26:22.799628 kernel: kvm-guest: setup PV IPIs
Feb 8 23:26:22.799637 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 8 23:26:22.799648 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 8 23:26:22.799657 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 8 23:26:22.799667 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 8 23:26:22.799676 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 8 23:26:22.799685 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 8 23:26:22.799694 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:26:22.799703 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:26:22.799712 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:26:22.799721 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:26:22.799738 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 8 23:26:22.799747 kernel: RETBleed: Mitigation: untrained return thunk
Feb 8 23:26:22.799757 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 8 23:26:22.799768 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 8 23:26:22.799778 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:26:22.799788 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:26:22.799797 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:26:22.799807 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:26:22.799817 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 8 23:26:22.799828 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:26:22.799838 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:26:22.799847 kernel: LSM: Security Framework initializing
Feb 8 23:26:22.799857 kernel: SELinux: Initializing.
Feb 8 23:26:22.799866 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 8 23:26:22.799876 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 8 23:26:22.799886 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 8 23:26:22.799898 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 8 23:26:22.799907 kernel: ... version: 0
Feb 8 23:26:22.799917 kernel: ... bit width: 48
Feb 8 23:26:22.799926 kernel: ... generic registers: 6
Feb 8 23:26:22.799936 kernel: ... value mask: 0000ffffffffffff
Feb 8 23:26:22.799945 kernel: ... max period: 00007fffffffffff
Feb 8 23:26:22.799955 kernel: ... fixed-purpose events: 0
Feb 8 23:26:22.799965 kernel: ... event mask: 000000000000003f
Feb 8 23:26:22.799994 kernel: signal: max sigframe size: 1776
Feb 8 23:26:22.800016 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:26:22.800025 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:26:22.800035 kernel: x86: Booting SMP configuration:
Feb 8 23:26:22.800051 kernel: .... node #0, CPUs: #1
Feb 8 23:26:22.800061 kernel: kvm-clock: cpu 1, msr 8ffaa041, secondary cpu clock
Feb 8 23:26:22.800071 kernel: kvm-guest: setup async PF for cpu 1
Feb 8 23:26:22.800080 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 8 23:26:22.800089 kernel: #2
Feb 8 23:26:22.800099 kernel: kvm-clock: cpu 2, msr 8ffaa081, secondary cpu clock
Feb 8 23:26:22.800109 kernel: kvm-guest: setup async PF for cpu 2
Feb 8 23:26:22.800120 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 8 23:26:22.800129 kernel: #3
Feb 8 23:26:22.800139 kernel: kvm-clock: cpu 3, msr 8ffaa0c1, secondary cpu clock
Feb 8 23:26:22.800148 kernel: kvm-guest: setup async PF for cpu 3
Feb 8 23:26:22.800157 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 8 23:26:22.800167 kernel: smp: Brought up 1 node, 4 CPUs
Feb 8 23:26:22.800177 kernel: smpboot: Max logical packages: 1
Feb 8 23:26:22.800186 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 8 23:26:22.800196 kernel: devtmpfs: initialized
Feb 8 23:26:22.800207 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:26:22.800217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:26:22.800227 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 8 23:26:22.800236 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:26:22.800246 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:26:22.800255 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:26:22.800265 kernel: audit: type=2000 audit(1707434782.320:1): state=initialized audit_enabled=0 res=1
Feb 8 23:26:22.800275 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:26:22.800284 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:26:22.800296 kernel: cpuidle: using governor menu
Feb 8 23:26:22.800305 kernel: ACPI: bus type PCI registered
Feb 8 23:26:22.800315 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:26:22.800324 kernel: dca service started, version 1.12.1
Feb 8 23:26:22.800334 kernel: PCI: Using configuration type 1 for base access
Feb 8 23:26:22.800343 kernel: PCI: Using configuration type 1 for extended access
Feb 8 23:26:22.800353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:26:22.800363 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:26:22.800372 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:26:22.800384 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:26:22.800393 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:26:22.800402 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:26:22.800412 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:26:22.800422 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:26:22.800431 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:26:22.800442 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:26:22.800454 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:26:22.800466 kernel: ACPI: Interpreter enabled
Feb 8 23:26:22.800480 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 8 23:26:22.800491 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:26:22.800504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:26:22.800516 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 8 23:26:22.800528 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 8 23:26:22.800709 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 8 23:26:22.800726 kernel: acpiphp: Slot [3] registered
Feb 8 23:26:22.800736 kernel: acpiphp: Slot [4] registered
Feb 8 23:26:22.800747 kernel: acpiphp: Slot [5] registered
Feb 8 23:26:22.800757 kernel: acpiphp: Slot [6] registered
Feb 8 23:26:22.800766 kernel: acpiphp: Slot [7] registered
Feb 8 23:26:22.800776 kernel: acpiphp: Slot [8] registered
Feb 8 23:26:22.800785 kernel: acpiphp: Slot [9] registered
Feb 8 23:26:22.800795 kernel: acpiphp: Slot [10] registered
Feb 8 23:26:22.800804 kernel: acpiphp: Slot [11] registered
Feb 8 23:26:22.800814 kernel: acpiphp: Slot [12] registered
Feb 8 23:26:22.800824 kernel: acpiphp: Slot [13] registered
Feb 8 23:26:22.800833 kernel: acpiphp: Slot [14] registered
Feb 8 23:26:22.800844 kernel: acpiphp: Slot [15] registered
Feb 8 23:26:22.800854 kernel: acpiphp: Slot [16] registered
Feb 8 23:26:22.800863 kernel: acpiphp: Slot [17] registered
Feb 8 23:26:22.800873 kernel: acpiphp: Slot [18] registered
Feb 8 23:26:22.800882 kernel: acpiphp: Slot [19] registered
Feb 8 23:26:22.800891 kernel: acpiphp: Slot [20] registered
Feb 8 23:26:22.800901 kernel: acpiphp: Slot [21] registered
Feb 8 23:26:22.800910 kernel: acpiphp: Slot [22] registered
Feb 8 23:26:22.800920 kernel: acpiphp: Slot [23] registered
Feb 8 23:26:22.800931 kernel: acpiphp: Slot [24] registered
Feb 8 23:26:22.800940 kernel: acpiphp: Slot [25] registered
Feb 8 23:26:22.800949 kernel: acpiphp: Slot [26] registered
Feb 8 23:26:22.800959 kernel: acpiphp: Slot [27] registered
Feb 8 23:26:22.800978 kernel: acpiphp: Slot [28] registered
Feb 8 23:26:22.800989 kernel: acpiphp: Slot [29] registered
Feb 8 23:26:22.800998 kernel: acpiphp: Slot [30] registered
Feb 8 23:26:22.801007 kernel: acpiphp: Slot [31] registered
Feb 8 23:26:22.801017 kernel: PCI host bridge to bus 0000:00
Feb 8 23:26:22.801129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 8 23:26:22.801224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 8 23:26:22.801312 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 8 23:26:22.801403 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 8 23:26:22.801510 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 8 23:26:22.801595 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 8 23:26:22.801714 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 8 23:26:22.801826 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 8 23:26:22.801948 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 8 23:26:22.802069 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 8 23:26:22.802168 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 8 23:26:22.802265 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 8 23:26:22.802362 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 8 23:26:22.802461 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 8 23:26:22.802574 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 8 23:26:22.802672 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 8 23:26:22.802768 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 8 23:26:22.802872 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 8 23:26:22.802987 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 8 23:26:22.803097 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 8 23:26:22.803198 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 8 23:26:22.803294 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 8 23:26:22.803398 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 8 23:26:22.803502 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 8 23:26:22.803603 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 8 23:26:22.803703 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 8 23:26:22.803814 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 8 23:26:22.803917 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 8 23:26:22.804028 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 8 23:26:22.804134 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 8 23:26:22.804240 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 8 23:26:22.804340 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 8 23:26:22.804437 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 8 23:26:22.804535 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 8 23:26:22.804636 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 8 23:26:22.804650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 8 23:26:22.804660 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 8 23:26:22.804669 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 8 23:26:22.804679 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 8 23:26:22.804689 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 8 23:26:22.804698 kernel: iommu: Default domain type: Translated
Feb 8 23:26:22.804708 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:26:22.804802 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 8 23:26:22.804903 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 8 23:26:22.805013 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 8 23:26:22.805027 kernel: vgaarb: loaded
Feb 8 23:26:22.805037 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:26:22.805053 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:26:22.805063 kernel: PTP clock support registered
Feb 8 23:26:22.805072 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:26:22.805082 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 8 23:26:22.805095 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 8 23:26:22.805104 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 8 23:26:22.805114 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 8 23:26:22.805123 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 8 23:26:22.805133 kernel: clocksource: Switched to clocksource kvm-clock
Feb 8 23:26:22.805143 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:26:22.805153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:26:22.805162 kernel: pnp: PnP ACPI init
Feb 8 23:26:22.805282 kernel: pnp 00:02: [dma 2]
Feb 8 23:26:22.805300 kernel: pnp: PnP ACPI: found 6 devices
Feb 8 23:26:22.805310 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:26:22.805319 kernel: NET: Registered PF_INET protocol family
Feb 8 23:26:22.805329 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:26:22.805339 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 8 23:26:22.805349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:26:22.805358 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 8 23:26:22.805368 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 8 23:26:22.805380 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 8 23:26:22.805390 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 8 23:26:22.805400 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 8 23:26:22.805410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:26:22.805420 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:26:22.805513 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 8 23:26:22.805599 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 8 23:26:22.805685 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 8 23:26:22.805770 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 8 23:26:22.805857 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 8 23:26:22.805954 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 8 23:26:22.806071 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 8 23:26:22.806169 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 8 23:26:22.806182 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:26:22.806192 kernel: Initialise system trusted keyrings
Feb 8 23:26:22.806201 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 8 23:26:22.806211 kernel: Key type asymmetric registered
Feb 8 23:26:22.806223 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:26:22.806233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:26:22.806243 kernel: io scheduler mq-deadline registered
Feb 8 23:26:22.806252 kernel: io scheduler kyber registered
Feb 8 23:26:22.806262 kernel: io scheduler bfq registered
Feb 8 23:26:22.806272 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:26:22.806282 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 8 23:26:22.806291 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 8 23:26:22.806301 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 8 23:26:22.806312 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:26:22.806322 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:26:22.806332 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 8 23:26:22.806341 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 8 23:26:22.806351 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 8 23:26:22.806448 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 8 23:26:22.806463 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 8 23:26:22.806564 kernel: rtc_cmos 00:05: registered as rtc0
Feb 8 23:26:22.806678 kernel: rtc_cmos 00:05: setting system clock to 2024-02-08T23:26:22 UTC (1707434782)
Feb 8 23:26:22.806773 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 8 23:26:22.806787 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:26:22.806796 kernel: Segment Routing with IPv6
Feb 8 23:26:22.806806 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:26:22.806816 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:26:22.806826 kernel: Key type dns_resolver registered
Feb 8 23:26:22.806835 kernel: IPI shorthand broadcast: enabled
Feb 8 23:26:22.806845 kernel: sched_clock: Marking stable (365133200, 75746485)->(472324278, -31444593)
Feb 8 23:26:22.806857 kernel: registered taskstats version 1
Feb 8 23:26:22.806867 kernel: Loading compiled-in X.509 certificates
Feb 8 23:26:22.806877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:26:22.806886 kernel: Key type .fscrypt registered
Feb 8 23:26:22.806895 kernel: Key type fscrypt-provisioning registered
Feb 8 23:26:22.806905 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:26:22.806915 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:26:22.806924 kernel: ima: No architecture policies found
Feb 8 23:26:22.806936 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:26:22.806945 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:26:22.806955 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:26:22.806965 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:26:22.806987 kernel: Run /init as init process
Feb 8 23:26:22.806996 kernel: with arguments:
Feb 8 23:26:22.807005 kernel: /init
Feb 8 23:26:22.807015 kernel: with environment:
Feb 8 23:26:22.807036 kernel: HOME=/
Feb 8 23:26:22.807053 kernel: TERM=linux
Feb 8 23:26:22.807065 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:26:22.807077 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:26:22.807090 systemd[1]: Detected virtualization kvm.
Feb 8 23:26:22.807101 systemd[1]: Detected architecture x86-64.
Feb 8 23:26:22.807112 systemd[1]: Running in initrd.
Feb 8 23:26:22.807122 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:26:22.807132 systemd[1]: Hostname set to .
Feb 8 23:26:22.807145 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:26:22.807156 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:26:22.807166 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:26:22.807176 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:26:22.807187 systemd[1]: Reached target paths.target.
Feb 8 23:26:22.807197 systemd[1]: Reached target slices.target.
Feb 8 23:26:22.807207 systemd[1]: Reached target swap.target.
Feb 8 23:26:22.807218 systemd[1]: Reached target timers.target.
Feb 8 23:26:22.807231 systemd[1]: Listening on iscsid.socket.
Feb 8 23:26:22.807242 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:26:22.807252 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:26:22.807263 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:26:22.807273 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:26:22.807284 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:26:22.807294 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:26:22.807306 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:26:22.807317 systemd[1]: Reached target sockets.target.
Feb 8 23:26:22.807328 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:26:22.807338 systemd[1]: Finished network-cleanup.service.
Feb 8 23:26:22.807349 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:26:22.807359 systemd[1]: Starting systemd-journald.service...
Feb 8 23:26:22.807370 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:26:22.807384 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:26:22.807397 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:26:22.807410 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:26:22.807424 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:26:22.807437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:26:22.807450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:26:22.807464 kernel: audit: type=1130 audit(1707434782.797:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.807480 systemd-journald[197]: Journal started
Feb 8 23:26:22.807541 systemd-journald[197]: Runtime Journal (/run/log/journal/89d000c8f51440778c95bc0e35e78446) is 6.0M, max 48.5M, 42.5M free.
Feb 8 23:26:22.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.792456 systemd-modules-load[198]: Inserted module 'overlay'
Feb 8 23:26:22.837891 systemd[1]: Started systemd-journald.service.
Feb 8 23:26:22.837908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:26:22.837921 kernel: Bridge firewalling registered
Feb 8 23:26:22.837930 kernel: audit: type=1130 audit(1707434782.827:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.837940 kernel: audit: type=1130 audit(1707434782.830:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.837949 kernel: audit: type=1130 audit(1707434782.832:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.817185 systemd-modules-load[198]: Inserted module 'br_netfilter'
Feb 8 23:26:22.839116 kernel: SCSI subsystem initialized
Feb 8 23:26:22.823955 systemd-resolved[199]: Positive Trust Anchors:
Feb 8 23:26:22.823964 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:26:22.824003 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:26:22.826100 systemd-resolved[199]: Defaulting to hostname 'linux'.
Feb 8 23:26:22.828176 systemd[1]: Started systemd-resolved.service.
Feb 8 23:26:22.830909 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:26:22.850725 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:26:22.850746 kernel: device-mapper: uevent: version 1.0.3
Feb 8 23:26:22.850759 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 8 23:26:22.833434 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:26:22.837294 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:26:22.852248 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:26:22.856765 kernel: audit: type=1130 audit(1707434782.851:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.853640 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:26:22.855874 systemd-modules-load[198]: Inserted module 'dm_multipath'
Feb 8 23:26:22.858656 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:26:22.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.860832 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:26:22.863411 kernel: audit: type=1130 audit(1707434782.859:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.863436 dracut-cmdline[216]: dracut-dracut-053
Feb 8 23:26:22.864637 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:26:22.869499 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:26:22.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:22.873002 kernel: audit: type=1130 audit(1707434782.869:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 8 23:26:22.910995 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:26:22.922003 kernel: iscsi: registered transport (tcp) Feb 8 23:26:22.944130 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:26:22.944192 kernel: QLogic iSCSI HBA Driver Feb 8 23:26:22.965218 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:26:22.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:22.967078 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:26:22.969725 kernel: audit: type=1130 audit(1707434782.965:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:23.010991 kernel: raid6: avx2x4 gen() 29682 MB/s Feb 8 23:26:23.027991 kernel: raid6: avx2x4 xor() 7196 MB/s Feb 8 23:26:23.044985 kernel: raid6: avx2x2 gen() 31281 MB/s Feb 8 23:26:23.061996 kernel: raid6: avx2x2 xor() 18169 MB/s Feb 8 23:26:23.078991 kernel: raid6: avx2x1 gen() 26116 MB/s Feb 8 23:26:23.095990 kernel: raid6: avx2x1 xor() 14956 MB/s Feb 8 23:26:23.113001 kernel: raid6: sse2x4 gen() 13763 MB/s Feb 8 23:26:23.129995 kernel: raid6: sse2x4 xor() 6952 MB/s Feb 8 23:26:23.147015 kernel: raid6: sse2x2 gen() 14983 MB/s Feb 8 23:26:23.164001 kernel: raid6: sse2x2 xor() 9324 MB/s Feb 8 23:26:23.180999 kernel: raid6: sse2x1 gen() 11812 MB/s Feb 8 23:26:23.198449 kernel: raid6: sse2x1 xor() 7658 MB/s Feb 8 23:26:23.198478 kernel: raid6: using algorithm avx2x2 gen() 31281 MB/s Feb 8 23:26:23.198488 kernel: raid6: .... 
xor() 18169 MB/s, rmw enabled Feb 8 23:26:23.198507 kernel: raid6: using avx2x2 recovery algorithm Feb 8 23:26:23.209985 kernel: xor: automatically using best checksumming function avx Feb 8 23:26:23.298014 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:26:23.304966 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:26:23.308233 kernel: audit: type=1130 audit(1707434783.304:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:23.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:23.307000 audit: BPF prog-id=7 op=LOAD Feb 8 23:26:23.307000 audit: BPF prog-id=8 op=LOAD Feb 8 23:26:23.308546 systemd[1]: Starting systemd-udevd.service... Feb 8 23:26:23.320145 systemd-udevd[401]: Using default interface naming scheme 'v252'. Feb 8 23:26:23.324670 systemd[1]: Started systemd-udevd.service. Feb 8 23:26:23.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:23.325869 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:26:23.334649 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Feb 8 23:26:23.356318 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:26:23.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:23.362784 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:26:23.395927 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 8 23:26:23.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:23.424992 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 8 23:26:23.427574 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 8 23:26:23.427599 kernel: GPT:9289727 != 19775487 Feb 8 23:26:23.427611 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 8 23:26:23.427623 kernel: GPT:9289727 != 19775487 Feb 8 23:26:23.429332 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:26:23.429373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:23.433323 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:26:23.438998 kernel: libata version 3.00 loaded. Feb 8 23:26:23.442267 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:26:23.444986 kernel: scsi host0: ata_piix Feb 8 23:26:23.445809 kernel: AVX2 version of gcm_enc/dec engaged. Feb 8 23:26:23.447094 kernel: AES CTR mode by8 optimization enabled Feb 8 23:26:23.451324 kernel: scsi host1: ata_piix Feb 8 23:26:23.451462 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 8 23:26:23.451472 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 8 23:26:23.607993 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 8 23:26:23.608996 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 8 23:26:23.623587 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:26:23.625830 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Feb 8 23:26:23.629401 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:26:23.629765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 8 23:26:23.635983 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 8 23:26:23.636131 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:26:23.637557 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:26:23.640696 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:26:23.642012 systemd[1]: Starting disk-uuid.service... Feb 8 23:26:23.654001 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 8 23:26:23.773953 disk-uuid[531]: Primary Header is updated. Feb 8 23:26:23.773953 disk-uuid[531]: Secondary Entries is updated. Feb 8 23:26:23.773953 disk-uuid[531]: Secondary Header is updated. Feb 8 23:26:23.776994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:23.779988 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:24.803873 disk-uuid[534]: The operation has completed successfully. Feb 8 23:26:24.805209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:24.823959 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:26:24.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:24.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:24.824059 systemd[1]: Finished disk-uuid.service. Feb 8 23:26:24.831504 systemd[1]: Starting verity-setup.service... Feb 8 23:26:24.843011 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 8 23:26:24.860714 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:26:24.862529 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:26:24.864040 systemd[1]: Finished verity-setup.service. 
Feb 8 23:26:24.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:24.919991 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:26:24.920151 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:26:24.920418 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:26:24.920940 systemd[1]: Starting ignition-setup.service... Feb 8 23:26:24.922656 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:26:24.928570 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:26:24.928593 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:26:24.928602 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:26:24.935729 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:26:24.981154 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:26:24.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:24.982000 audit: BPF prog-id=9 op=LOAD Feb 8 23:26:24.982791 systemd[1]: Starting systemd-networkd.service... Feb 8 23:26:25.001646 systemd-networkd[702]: lo: Link UP Feb 8 23:26:25.001655 systemd-networkd[702]: lo: Gained carrier Feb 8 23:26:25.002093 systemd-networkd[702]: Enumeration completed Feb 8 23:26:25.002155 systemd[1]: Started systemd-networkd.service. Feb 8 23:26:25.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.002260 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:26:25.007400 systemd-networkd[702]: eth0: Link UP Feb 8 23:26:25.007403 systemd-networkd[702]: eth0: Gained carrier Feb 8 23:26:25.008782 systemd[1]: Reached target network.target. Feb 8 23:26:25.010058 systemd[1]: Starting iscsiuio.service... Feb 8 23:26:25.014222 systemd[1]: Started iscsiuio.service. Feb 8 23:26:25.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.014965 systemd[1]: Starting iscsid.service... Feb 8 23:26:25.018081 iscsid[707]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:26:25.018081 iscsid[707]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:26:25.018081 iscsid[707]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:26:25.018081 iscsid[707]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:26:25.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.049234 iscsid[707]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:26:25.049234 iscsid[707]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:26:25.019378 systemd[1]: Started iscsid.service. Feb 8 23:26:25.020242 systemd[1]: Starting dracut-initqueue.service...
Feb 8 23:26:25.050035 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:26:25.057315 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:26:25.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.057848 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:26:25.058770 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:26:25.059017 systemd[1]: Reached target remote-fs.target. Feb 8 23:26:25.061674 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:26:25.067691 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:26:25.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.141505 systemd[1]: Finished ignition-setup.service. Feb 8 23:26:25.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.143805 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 8 23:26:25.178689 ignition[722]: Ignition 2.14.0 Feb 8 23:26:25.178698 ignition[722]: Stage: fetch-offline Feb 8 23:26:25.178747 ignition[722]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:26:25.178755 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:26:25.178865 ignition[722]: parsed url from cmdline: "" Feb 8 23:26:25.178869 ignition[722]: no config URL provided Feb 8 23:26:25.178875 ignition[722]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:26:25.178883 ignition[722]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:26:25.178910 ignition[722]: op(1): [started] loading QEMU firmware config module Feb 8 23:26:25.178915 ignition[722]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 8 23:26:25.185889 ignition[722]: op(1): [finished] loading QEMU firmware config module Feb 8 23:26:25.247040 ignition[722]: parsing config with SHA512: 876067d4a88bdda908d455c95d4e7dc319f5d397bb03879b7ff583aef4340e5e4186f659cca9b0383aface05763d24988f465b07f54959d819eb8bf2eb62ec6e Feb 8 23:26:25.279378 unknown[722]: fetched base config from "system" Feb 8 23:26:25.279393 unknown[722]: fetched user config from "qemu" Feb 8 23:26:25.280837 ignition[722]: fetch-offline: fetch-offline passed Feb 8 23:26:25.280952 ignition[722]: Ignition finished successfully Feb 8 23:26:25.283075 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:26:25.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.284463 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 8 23:26:25.285371 systemd[1]: Starting ignition-kargs.service... 
Feb 8 23:26:25.293618 ignition[730]: Ignition 2.14.0 Feb 8 23:26:25.293629 ignition[730]: Stage: kargs Feb 8 23:26:25.293736 ignition[730]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:26:25.293747 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:26:25.295563 ignition[730]: kargs: kargs passed Feb 8 23:26:25.295612 ignition[730]: Ignition finished successfully Feb 8 23:26:25.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.297013 systemd[1]: Finished ignition-kargs.service. Feb 8 23:26:25.298655 systemd[1]: Starting ignition-disks.service... Feb 8 23:26:25.304737 ignition[737]: Ignition 2.14.0 Feb 8 23:26:25.304748 ignition[737]: Stage: disks Feb 8 23:26:25.304858 ignition[737]: no configs at "/usr/lib/ignition/base.d" Feb 8 23:26:25.304870 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:26:25.307039 systemd[1]: Finished ignition-disks.service. Feb 8 23:26:25.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.306396 ignition[737]: disks: disks passed Feb 8 23:26:25.308144 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:26:25.306442 ignition[737]: Ignition finished successfully Feb 8 23:26:25.309363 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:26:25.310065 systemd[1]: Reached target local-fs.target. Feb 8 23:26:25.310394 systemd[1]: Reached target sysinit.target. Feb 8 23:26:25.310518 systemd[1]: Reached target basic.target. Feb 8 23:26:25.311569 systemd[1]: Starting systemd-fsck-root.service... 
Feb 8 23:26:25.329829 systemd-fsck[745]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 8 23:26:25.550481 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:26:25.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.557129 systemd[1]: Mounting sysroot.mount... Feb 8 23:26:25.580015 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:26:25.580564 systemd[1]: Mounted sysroot.mount. Feb 8 23:26:25.580950 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:26:25.582119 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:26:25.582692 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:26:25.582725 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:26:25.582744 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:26:25.589036 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:26:25.590037 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:26:25.594522 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:26:25.597907 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:26:25.601424 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:26:25.604868 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:26:25.628501 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:26:25.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.629669 systemd[1]: Starting ignition-mount.service... 
Feb 8 23:26:25.630846 systemd[1]: Starting sysroot-boot.service... Feb 8 23:26:25.636395 bash[797]: umount: /sysroot/usr/share/oem: not mounted. Feb 8 23:26:25.643661 ignition[798]: INFO : Ignition 2.14.0 Feb 8 23:26:25.644545 ignition[798]: INFO : Stage: mount Feb 8 23:26:25.645055 ignition[798]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:26:25.645055 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:26:25.647241 ignition[798]: INFO : mount: mount passed Feb 8 23:26:25.647789 ignition[798]: INFO : Ignition finished successfully Feb 8 23:26:25.648310 systemd[1]: Finished ignition-mount.service. Feb 8 23:26:25.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.650349 systemd[1]: Finished sysroot-boot.service. Feb 8 23:26:25.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:25.870545 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:26:25.876437 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Feb 8 23:26:25.876472 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:26:25.876482 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:26:25.877992 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:26:25.880427 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:26:25.882407 systemd[1]: Starting ignition-files.service... 
Feb 8 23:26:25.896083 ignition[826]: INFO : Ignition 2.14.0 Feb 8 23:26:25.896083 ignition[826]: INFO : Stage: files Feb 8 23:26:25.897628 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:26:25.897628 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:26:25.897628 ignition[826]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:26:25.900531 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:26:25.900531 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:26:25.900531 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:26:25.900531 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:26:25.900531 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:26:25.900144 unknown[826]: wrote ssh authorized keys file for user: core Feb 8 23:26:25.907140 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:26:25.907140 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:26:25.931121 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:26:25.996738 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:26:25.996738 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:26:25.999335 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:26:26.332257 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:26:26.415668 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:26:26.417686 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:26:26.417686 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:26:26.417686 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:26:26.430076 systemd-networkd[702]: eth0: Gained IPv6LL Feb 8 23:26:26.690192 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:26:26.801228 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:26:26.803385 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:26:26.803385 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:26:26.805908 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:26:26.807099 ignition[826]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:26:26.808220 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:26:26.875384 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:26:27.061560 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 8 23:26:27.063811 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:26:27.063811 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:26:27.063811 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:26:27.112077 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:26:27.707404 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:26:27.707404 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:26:27.711047 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:26:27.711047 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:26:27.755727 ignition[826]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): GET result: OK Feb 8 23:26:27.926989 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:26:27.929052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:26:27.929052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:26:27.929052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 8 23:26:28.348364 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 8 23:26:28.423308 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:26:28.424608 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:26:28.425717 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:26:28.426838 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:26:28.427993 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:26:28.429252 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:26:28.430731 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:26:28.432208 ignition[826]: INFO 
: files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:26:28.433592 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:26:28.434854 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:26:28.436798 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:26:28.436798 ignition[826]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:26:28.439009 ignition[826]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:26:28.440515 ignition[826]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:26:28.440515 ignition[826]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:26:28.443421 ignition[826]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 8 23:26:28.443421 ignition[826]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:26:28.445623 ignition[826]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:26:28.445623 ignition[826]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 8 23:26:28.445623 ignition[826]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Feb 8 23:26:28.445623 ignition[826]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Feb 8 23:26:28.450544 ignition[826]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Feb 8 23:26:28.460541 ignition[826]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 8 23:26:28.480827 ignition[826]: INFO : files: op(1b): op(1c): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Feb 8 23:26:28.482310 ignition[826]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Feb 8 23:26:28.483570 ignition[826]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:26:28.484986 ignition[826]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:26:28.486218 ignition[826]: INFO : files: files passed Feb 8 23:26:28.486218 ignition[826]: INFO : Ignition finished successfully Feb 8 23:26:28.488267 systemd[1]: Finished ignition-files.service. Feb 8 23:26:28.492014 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 8 23:26:28.492036 kernel: audit: type=1130 audit(1707434788.487:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.489734 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 8 23:26:28.492228 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:26:28.493015 systemd[1]: Starting ignition-quench.service... Feb 8 23:26:28.495469 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:26:28.500679 kernel: audit: type=1130 audit(1707434788.494:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:28.500696 kernel: audit: type=1131 audit(1707434788.494:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.495550 systemd[1]: Finished ignition-quench.service. Feb 8 23:26:28.502505 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 8 23:26:28.504766 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:26:28.506300 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:26:28.510549 kernel: audit: type=1130 audit(1707434788.506:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.507156 systemd[1]: Reached target ignition-complete.target. Feb 8 23:26:28.511466 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:26:28.524543 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:26:28.524614 systemd[1]: Finished initrd-parse-etc.service. 
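The Ignition ops logged above (writing `/opt/bin/kubeadm`, `cilium.tar.gz`, the `nginx.yaml`/`nfs-*.yaml` manifests, and the `prepare-*` systemd units with their presets) are the kind of output produced by a Butane/Ignition config. The following is a hypothetical sketch assembled from the paths and unit names in this log, not the actual config used on this machine; the URL and hash are placeholders:

```yaml
# Hypothetical Butane sketch; paths/unit names taken from the log above,
# source URL and hash are illustrative placeholders.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/bin/kubeadm
      mode: 0755
      contents:
        source: https://example.com/kubeadm      # placeholder URL
        verification:
          hash: sha512-...                       # placeholder hash
    - path: /home/core/install.sh
      mode: 0755
      contents:
        local: install.sh
systemd:
  units:
    - name: prepare-cni-plugins.service
      enabled: true                              # shows up as "setting preset to enabled"
      contents: |
        [Unit]
        Description=Unpack CNI plugins
        [Install]
        WantedBy=multi-user.target
    - name: coreos-metadata.service
      enabled: false                             # "setting preset to disabled" + symlink removal
```

Each `files:` entry maps to one `createFiles: op(N)` pair of `[started]`/`[finished]` messages, and each unit to an `op(N): processing unit` block, matching the sequence in the log.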
Feb 8 23:26:28.530835 kernel: audit: type=1130 audit(1707434788.524:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.530850 kernel: audit: type=1131 audit(1707434788.524:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.525859 systemd[1]: Reached target initrd-fs.target. Feb 8 23:26:28.530840 systemd[1]: Reached target initrd.target. Feb 8 23:26:28.531431 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:26:28.532087 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:26:28.541184 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:26:28.544831 kernel: audit: type=1130 audit(1707434788.540:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.542335 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:26:28.550178 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:26:28.551036 systemd[1]: Stopped target remote-cryptsetup.target. 
Feb 8 23:26:28.552415 systemd[1]: Stopped target timers.target. Feb 8 23:26:28.553757 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:26:28.559016 kernel: audit: type=1131 audit(1707434788.554:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.553883 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:26:28.555146 systemd[1]: Stopped target initrd.target. Feb 8 23:26:28.559074 systemd[1]: Stopped target basic.target. Feb 8 23:26:28.559743 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:26:28.561144 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:26:28.562458 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:26:28.563810 systemd[1]: Stopped target remote-fs.target. Feb 8 23:26:28.565215 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:26:28.566585 systemd[1]: Stopped target sysinit.target. Feb 8 23:26:28.567782 systemd[1]: Stopped target local-fs.target. Feb 8 23:26:28.568790 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:26:28.575086 kernel: audit: type=1131 audit(1707434788.570:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.569835 systemd[1]: Stopped target swap.target. Feb 8 23:26:28.570758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 8 23:26:28.579227 kernel: audit: type=1131 audit(1707434788.576:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.570854 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:26:28.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.571840 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:26:28.575116 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:26:28.575198 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:26:28.576194 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:26:28.576275 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:26:28.579333 systemd[1]: Stopped target paths.target. Feb 8 23:26:28.580265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:26:28.582026 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:26:28.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.583279 systemd[1]: Stopped target slices.target. Feb 8 23:26:28.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.584284 systemd[1]: Stopped target sockets.target. 
Feb 8 23:26:28.585256 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:26:28.590246 iscsid[707]: iscsid shutting down. Feb 8 23:26:28.585344 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:26:28.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.586593 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:26:28.598075 ignition[867]: INFO : Ignition 2.14.0 Feb 8 23:26:28.598075 ignition[867]: INFO : Stage: umount Feb 8 23:26:28.598075 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 8 23:26:28.598075 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 8 23:26:28.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:28.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.586671 systemd[1]: Stopped ignition-files.service. Feb 8 23:26:28.606148 ignition[867]: INFO : umount: umount passed Feb 8 23:26:28.606148 ignition[867]: INFO : Ignition finished successfully Feb 8 23:26:28.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.588414 systemd[1]: Stopping ignition-mount.service... Feb 8 23:26:28.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.589238 systemd[1]: Stopping iscsid.service... Feb 8 23:26:28.590833 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:26:28.591936 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:26:28.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:28.592062 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:26:28.592728 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:26:28.592803 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:26:28.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.594588 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:26:28.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.594838 systemd[1]: Stopped iscsid.service. Feb 8 23:26:28.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.595684 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:26:28.595745 systemd[1]: Closed iscsid.socket. Feb 8 23:26:28.596319 systemd[1]: Stopping iscsiuio.service... Feb 8 23:26:28.597713 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:26:28.597781 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:26:28.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.599558 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:26:28.599618 systemd[1]: Stopped iscsiuio.service. Feb 8 23:26:28.602118 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:26:28.602180 systemd[1]: Stopped ignition-mount.service. Feb 8 23:26:28.603188 systemd[1]: Stopped target network.target. 
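The `audit[1]: SERVICE_START`/`SERVICE_STOP` records interleaved through this shutdown sequence carry the unit name and outcome inside the quoted `msg=` field. A small parser (a sketch, assuming exactly the message format shown in this log) can pull out which units stopped and whether they succeeded:

```python
import re

# Matches audit records of the shape seen in this log, e.g.
# audit[1]: SERVICE_STOP pid=1 ... msg='unit=iscsid comm="systemd" ... res=success'
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (?P<event>SERVICE_START|SERVICE_STOP)"
    r".*?msg='unit=(?P<unit>\S+) .*?res=(?P<res>\w+)'"
)

def parse_audit_events(lines):
    """Yield (event, unit, result) tuples for service audit records."""
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            yield m.group("event"), m.group("unit"), m.group("res")

sample = (
    "Feb 8 23:26:28.594000 audit[1]: SERVICE_STOP pid=1 uid=0 "
    "subj=kernel msg='unit=iscsid comm=\"systemd\" "
    "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'"
)
print(list(parse_audit_events([sample])))
# -> [('SERVICE_STOP', 'iscsid', 'success')]
```

This is handy for counting how many of the stops in a boot log like this one ended with `res=success`.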
Feb 8 23:26:28.604250 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:26:28.604284 systemd[1]: Closed iscsiuio.socket. Feb 8 23:26:28.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.604842 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:26:28.627000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:26:28.604878 systemd[1]: Stopped ignition-disks.service. Feb 8 23:26:28.606112 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:26:28.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.606141 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:26:28.606343 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:26:28.606371 systemd[1]: Stopped ignition-setup.service. Feb 8 23:26:28.606659 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:26:28.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.607010 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:26:28.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.608013 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 8 23:26:28.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.608960 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:26:28.609062 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:26:28.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.609890 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:26:28.609929 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:26:28.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:28.611042 systemd-networkd[702]: eth0: DHCPv6 lease lost Feb 8 23:26:28.641000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:26:28.611834 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:26:28.611915 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:26:28.613887 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:26:28.613922 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:26:28.615472 systemd[1]: Stopping network-cleanup.service... 
Feb 8 23:26:28.616044 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:26:28.616080 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:26:28.617274 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:26:28.617308 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:26:28.618416 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:26:28.618454 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:26:28.619157 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:26:28.621524 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 8 23:26:28.621873 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:26:28.621952 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:26:28.626572 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:26:28.626655 systemd[1]: Stopped network-cleanup.service. Feb 8 23:26:28.628835 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:26:28.628942 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:26:28.630674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:26:28.630705 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:26:28.631690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:26:28.631714 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:26:28.632944 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:26:28.633005 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:26:28.634031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:26:28.634060 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:26:28.635089 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:26:28.635119 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:26:28.635809 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Feb 8 23:26:28.636729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 8 23:26:28.636768 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 8 23:26:28.637452 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:26:28.637484 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:26:28.638592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:26:28.638644 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:26:28.640616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 8 23:26:28.640983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:26:28.641045 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:26:28.641840 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:26:28.643386 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:26:28.656476 systemd[1]: Switching root. Feb 8 23:26:28.674614 systemd-journald[197]: Journal stopped Feb 8 23:26:31.478900 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Feb 8 23:26:31.478962 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:26:31.478987 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 8 23:26:31.478999 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:26:31.479010 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:26:31.479021 kernel: SELinux: policy capability open_perms=1 Feb 8 23:26:31.479032 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:26:31.479042 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:26:31.479056 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:26:31.479067 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:26:31.479081 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:26:31.479092 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:26:31.479103 systemd[1]: Successfully loaded SELinux policy in 35.599ms. Feb 8 23:26:31.479127 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.271ms. Feb 8 23:26:31.479140 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:26:31.479154 systemd[1]: Detected virtualization kvm. Feb 8 23:26:31.479166 systemd[1]: Detected architecture x86-64. Feb 8 23:26:31.479179 systemd[1]: Detected first boot. Feb 8 23:26:31.479191 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:26:31.479202 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:26:31.479213 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:26:31.479225 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
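The `systemd 252 running in system mode (+PAM +AUDIT ... -APPARMOR ...)` banner above encodes compile-time features as `+FLAG`/`-FLAG` tokens plus `key=value` options. A sketch of splitting that banner into enabled/disabled sets (the banner string below is abbreviated from the one in the log):

```python
def parse_features(banner: str) -> dict:
    """Split a systemd version banner's +FOO/-BAR flags into enabled/disabled
    sets; key=value tokens (e.g. default-hierarchy=unified) go into options."""
    inner = banner[banner.index("(") + 1 : banner.rindex(")")]
    enabled, disabled, options = set(), set(), {}
    for tok in inner.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
        elif "=" in tok:
            key, _, val = tok.partition("=")
            options[key] = val
    return {"enabled": enabled, "disabled": disabled, "options": options}

banner = ("systemd 252 running in system mode "
          "(+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -TPM2 "
          "default-hierarchy=unified)")
info = parse_features(banner)
```

For this boot, the full banner shows SELinux support compiled in (matching the policy-load messages that follow) and AppArmor compiled out.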
Feb 8 23:26:31.479241 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:26:31.479254 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:26:31.479268 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:26:31.479279 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:26:31.479291 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:26:31.479303 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:26:31.479317 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:26:31.479328 systemd[1]: Created slice system-getty.slice. Feb 8 23:26:31.479340 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:26:31.479352 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:26:31.479366 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:26:31.479383 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:26:31.479398 systemd[1]: Created slice user.slice. Feb 8 23:26:31.479410 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:26:31.479421 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:26:31.479431 systemd[1]: Set up automount boot.automount. Feb 8 23:26:31.479441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:26:31.479451 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:26:31.479461 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:26:31.479473 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:26:31.479483 systemd[1]: Reached target integritysetup.target. Feb 8 23:26:31.479493 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:26:31.479503 systemd[1]: Reached target remote-fs.target. 
Feb 8 23:26:31.479513 systemd[1]: Reached target slices.target. Feb 8 23:26:31.479523 systemd[1]: Reached target swap.target. Feb 8 23:26:31.479533 systemd[1]: Reached target torcx.target. Feb 8 23:26:31.479543 systemd[1]: Reached target veritysetup.target. Feb 8 23:26:31.479555 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:26:31.479566 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:26:31.479576 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:26:31.480275 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:26:31.480290 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:26:31.480300 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:26:31.480311 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:26:31.480321 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:26:31.480332 systemd[1]: Mounting media.mount... Feb 8 23:26:31.480343 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:26:31.480355 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:26:31.480365 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:26:31.480375 systemd[1]: Mounting tmp.mount... Feb 8 23:26:31.480385 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:26:31.480396 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:26:31.480406 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:26:31.480416 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:26:31.480426 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:26:31.480436 systemd[1]: Starting modprobe@drm.service... Feb 8 23:26:31.480447 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:26:31.480457 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:26:31.480467 systemd[1]: Starting modprobe@loop.service... 
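`modprobe@configfs.service`, `modprobe@dm_mod.service`, `modprobe@drm.service`, and the other `modprobe@*.service` starts above are instances of systemd's `modprobe@.service` template, where the instance name after `@` becomes the module to load via the `%i` specifier. Its upstream definition is roughly the following (a from-memory sketch; check `/usr/lib/systemd/system/modprobe@.service` on the target for the authoritative version):

```ini
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
ExecStart=-/sbin/modprobe -abq %i
```

The leading `-` on `ExecStart` means a missing module is not treated as a unit failure, which is why optional modules like `efi_pstore` can be attempted unconditionally.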
Feb 8 23:26:31.480477 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:26:31.480487 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:26:31.480497 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:26:31.480507 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:26:31.480517 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:26:31.480527 systemd[1]: Stopped systemd-journald.service. Feb 8 23:26:31.480538 kernel: fuse: init (API version 7.34) Feb 8 23:26:31.480549 systemd[1]: Starting systemd-journald.service... Feb 8 23:26:31.480560 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:26:31.480570 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:26:31.480580 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:26:31.480590 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:26:31.480599 kernel: loop: module loaded Feb 8 23:26:31.480609 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:26:31.480619 systemd[1]: Stopped verity-setup.service. Feb 8 23:26:31.480631 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:26:31.480641 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:26:31.480651 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:26:31.480666 systemd-journald[975]: Journal started Feb 8 23:26:31.480708 systemd-journald[975]: Runtime Journal (/run/log/journal/89d000c8f51440778c95bc0e35e78446) is 6.0M, max 48.5M, 42.5M free. 
Feb 8 23:26:28.728000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:26:29.291000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:26:29.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:26:29.291000 audit: BPF prog-id=10 op=LOAD Feb 8 23:26:29.291000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:26:29.291000 audit: BPF prog-id=11 op=LOAD Feb 8 23:26:29.291000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:26:29.322000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:26:29.322000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:29.322000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:26:29.323000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:26:29.323000 audit[900]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:29.323000 audit: CWD cwd="/" Feb 8 23:26:29.323000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:29.323000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:29.323000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:26:31.373000 audit: BPF prog-id=12 op=LOAD Feb 8 23:26:31.373000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:26:31.373000 audit: BPF prog-id=13 op=LOAD Feb 8 23:26:31.373000 audit: BPF prog-id=14 op=LOAD Feb 8 23:26:31.373000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:26:31.373000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:26:31.374000 audit: BPF prog-id=15 op=LOAD Feb 8 23:26:31.374000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:26:31.374000 audit: BPF prog-id=16 op=LOAD Feb 8 23:26:31.374000 audit: BPF prog-id=17 op=LOAD Feb 8 23:26:31.374000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:26:31.374000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:26:31.374000 audit: BPF prog-id=18 op=LOAD Feb 8 23:26:31.374000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:26:31.374000 audit: BPF prog-id=19 op=LOAD Feb 8 23:26:31.374000 audit: BPF prog-id=20 op=LOAD Feb 8 23:26:31.374000 audit: BPF prog-id=16 op=UNLOAD Feb 
8 23:26:31.374000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:26:31.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.388000 audit: BPF prog-id=18 op=UNLOAD Feb 8 23:26:31.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:31.458000 audit: BPF prog-id=21 op=LOAD Feb 8 23:26:31.458000 audit: BPF prog-id=22 op=LOAD Feb 8 23:26:31.458000 audit: BPF prog-id=23 op=LOAD Feb 8 23:26:31.458000 audit: BPF prog-id=19 op=UNLOAD Feb 8 23:26:31.459000 audit: BPF prog-id=20 op=UNLOAD Feb 8 23:26:31.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.476000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:26:31.476000 audit[975]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffee6165ac0 a2=4000 a3=7ffee6165b5c items=0 ppid=1 pid=975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:31.476000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:26:31.372815 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:26:31.483043 systemd[1]: Started systemd-journald.service. Feb 8 23:26:31.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:29.322484 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:26:31.372826 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Feb 8 23:26:29.322652 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:26:31.376191 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:26:29.322667 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:26:31.483138 systemd[1]: Mounted media.mount. Feb 8 23:26:29.322693 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:26:31.483749 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:26:29.322701 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:26:31.484467 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:26:29.322725 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:26:31.485237 systemd[1]: Mounted tmp.mount. Feb 8 23:26:29.322738 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:26:29.322937 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:26:29.322967 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:26:31.486146 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 8 23:26:31.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:29.322988 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:26:29.323245 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:26:29.323273 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:26:29.323287 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:26:29.323299 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:26:31.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.487209 systemd[1]: Finished kmod-static-nodes.service. 
Feb 8 23:26:29.323312 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:26:29.323326 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:26:31.120252 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:31.120490 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:31.488254 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:26:31.120566 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:31.120705 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:31.488414 systemd[1]: Finished modprobe@configfs.service. 
Feb 8 23:26:31.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.120748 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:26:31.120798 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-02-08T23:26:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:26:31.489420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:26:31.489592 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:26:31.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.490465 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:26:31.490629 systemd[1]: Finished modprobe@drm.service. 
Feb 8 23:26:31.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.491417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:26:31.491549 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:26:31.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.492321 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:26:31.492438 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:26:31.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.493159 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:26:31.493262 systemd[1]: Finished modprobe@loop.service. 
Feb 8 23:26:31.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.494034 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:26:31.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.494811 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:26:31.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.495670 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:26:31.496832 systemd[1]: Reached target network-pre.target. Feb 8 23:26:31.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.498625 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:26:31.500043 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:26:31.500606 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:26:31.504626 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:26:31.509087 systemd[1]: Starting systemd-journal-flush.service... 
Feb 8 23:26:31.509923 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:26:31.510621 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:26:31.511349 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:26:31.512067 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:26:31.513558 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:26:31.518628 systemd-journald[975]: Time spent on flushing to /var/log/journal/89d000c8f51440778c95bc0e35e78446 is 13.843ms for 1139 entries. Feb 8 23:26:31.518628 systemd-journald[975]: System Journal (/var/log/journal/89d000c8f51440778c95bc0e35e78446) is 8.0M, max 195.6M, 187.6M free. Feb 8 23:26:31.655142 systemd-journald[975]: Received client request to flush runtime journal. Feb 8 23:26:31.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:31.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.515959 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:26:31.517495 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:26:31.655872 udevadm[1004]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:26:31.518859 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:26:31.520238 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:26:31.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.523667 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:26:31.527774 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:26:31.533961 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:26:31.546349 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:26:31.547852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:26:31.563233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:26:31.656144 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:26:31.957764 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:26:31.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:31.958000 audit: BPF prog-id=24 op=LOAD Feb 8 23:26:31.958000 audit: BPF prog-id=25 op=LOAD Feb 8 23:26:31.958000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:26:31.958000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:26:31.959751 systemd[1]: Starting systemd-udevd.service... Feb 8 23:26:31.973640 systemd-udevd[1008]: Using default interface naming scheme 'v252'. Feb 8 23:26:31.985642 systemd[1]: Started systemd-udevd.service. Feb 8 23:26:31.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:31.986000 audit: BPF prog-id=26 op=LOAD Feb 8 23:26:31.988343 systemd[1]: Starting systemd-networkd.service... Feb 8 23:26:31.992000 audit: BPF prog-id=27 op=LOAD Feb 8 23:26:31.992000 audit: BPF prog-id=28 op=LOAD Feb 8 23:26:31.992000 audit: BPF prog-id=29 op=LOAD Feb 8 23:26:31.993880 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:26:32.006795 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:26:32.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.023785 systemd[1]: Started systemd-userdbd.service. Feb 8 23:26:32.048996 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:26:32.057000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:26:32.062931 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 8 23:26:32.057000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556645166ca0 a1=32194 a2=7fb0ba7eebc5 a3=5 items=108 ppid=1008 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:32.057000 audit: CWD cwd="/" Feb 8 23:26:32.057000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=1 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=2 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=3 name=(null) inode=12787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=4 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=5 name=(null) inode=12788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=6 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=7 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=8 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=9 name=(null) inode=12790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=10 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=11 name=(null) inode=12791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=12 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=13 name=(null) inode=12792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=14 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=15 name=(null) inode=12793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=16 name=(null) inode=12789 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=17 name=(null) inode=12794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=18 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=19 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=20 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=21 name=(null) inode=12796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=22 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=23 name=(null) inode=12797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=24 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=25 name=(null) inode=12798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=26 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=27 name=(null) inode=12799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=28 name=(null) inode=12795 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=29 name=(null) inode=12800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=30 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=31 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=32 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=33 name=(null) inode=12802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=34 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 
audit: PATH item=35 name=(null) inode=12803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=36 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=37 name=(null) inode=12804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=38 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=39 name=(null) inode=12805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=40 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=41 name=(null) inode=12806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=42 name=(null) inode=12786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=43 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=44 name=(null) inode=12807 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=45 name=(null) inode=12808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=46 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=47 name=(null) inode=12809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=48 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=49 name=(null) inode=12810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=50 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=51 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=52 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=53 name=(null) inode=12812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=55 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=56 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=57 name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=58 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=59 name=(null) inode=12815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=60 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=61 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=62 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=63 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=64 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=65 name=(null) inode=12818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=66 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=67 name=(null) inode=12819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=68 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=69 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=70 name=(null) inode=12816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=71 name=(null) inode=12821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=72 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=73 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=74 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=75 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=76 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=77 name=(null) inode=12824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=78 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=79 name=(null) inode=12825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=80 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH 
item=81 name=(null) inode=12826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=82 name=(null) inode=12822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=83 name=(null) inode=12827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=84 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=85 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=86 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=87 name=(null) inode=12829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=88 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=89 name=(null) inode=12830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=90 name=(null) inode=12828 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=91 name=(null) inode=12831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=92 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=93 name=(null) inode=12832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=94 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=95 name=(null) inode=12833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=96 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=97 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=98 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=99 name=(null) inode=12835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=100 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=101 name=(null) inode=12836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=102 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=103 name=(null) inode=12837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=104 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=105 name=(null) inode=12838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=106 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PATH item=107 name=(null) inode=12839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:32.057000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:26:32.069000 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, 
revision 0 Feb 8 23:26:32.069293 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:26:32.076769 systemd-networkd[1019]: lo: Link UP Feb 8 23:26:32.076783 systemd-networkd[1019]: lo: Gained carrier Feb 8 23:26:32.077250 systemd-networkd[1019]: Enumeration completed Feb 8 23:26:32.077404 systemd[1]: Started systemd-networkd.service. Feb 8 23:26:32.077422 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:26:32.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.078646 systemd-networkd[1019]: eth0: Link UP Feb 8 23:26:32.078654 systemd-networkd[1019]: eth0: Gained carrier Feb 8 23:26:32.081999 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:26:32.091107 systemd-networkd[1019]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 8 23:26:32.100995 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:26:32.159260 kernel: kvm: Nested Virtualization enabled Feb 8 23:26:32.159347 kernel: SVM: kvm: Nested Paging enabled Feb 8 23:26:32.159379 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 8 23:26:32.160118 kernel: SVM: Virtual GIF supported Feb 8 23:26:32.173997 kernel: EDAC MC: Ver: 3.0.0 Feb 8 23:26:32.194289 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:26:32.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.195983 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:26:32.202934 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:26:32.227648 systemd[1]: Finished lvm2-activation-early.service. 
Feb 8 23:26:32.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.228444 systemd[1]: Reached target cryptsetup.target. Feb 8 23:26:32.230085 systemd[1]: Starting lvm2-activation.service... Feb 8 23:26:32.233195 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:26:32.259618 systemd[1]: Finished lvm2-activation.service. Feb 8 23:26:32.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.260319 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:26:32.260912 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:26:32.260931 systemd[1]: Reached target local-fs.target. Feb 8 23:26:32.261488 systemd[1]: Reached target machines.target. Feb 8 23:26:32.262919 systemd[1]: Starting ldconfig.service... Feb 8 23:26:32.263759 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:26:32.263844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:26:32.265013 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:26:32.266498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:26:32.267998 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:26:32.268707 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 8 23:26:32.268744 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:26:32.269537 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:26:32.270446 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Feb 8 23:26:32.271239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:26:32.274280 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:26:32.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.286225 systemd-tmpfiles[1049]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:26:32.287942 systemd-tmpfiles[1049]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:26:32.290748 systemd-tmpfiles[1049]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:26:32.310585 systemd-fsck[1054]: fsck.fat 4.2 (2021-01-31) Feb 8 23:26:32.310585 systemd-fsck[1054]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:26:32.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.313346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:26:32.479944 systemd[1]: Mounting boot.mount... Feb 8 23:26:32.582235 systemd[1]: Mounted boot.mount. Feb 8 23:26:32.594584 systemd[1]: Finished systemd-boot-update.service. 
Feb 8 23:26:32.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.619547 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:26:32.650270 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:26:32.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.652341 systemd[1]: Starting audit-rules.service... Feb 8 23:26:32.654554 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:26:32.656458 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:26:32.657000 audit: BPF prog-id=30 op=LOAD Feb 8 23:26:32.659057 systemd[1]: Starting systemd-resolved.service... Feb 8 23:26:32.659000 audit: BPF prog-id=31 op=LOAD Feb 8 23:26:32.661823 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:26:32.663846 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:26:32.665023 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:26:32.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.666107 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:26:32.670000 audit[1069]: SYSTEM_BOOT pid=1069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:32.673483 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:26:32.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.676126 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:26:32.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:32.679000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:26:32.679000 audit[1078]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff3b4f2fa0 a2=420 a3=0 items=0 ppid=1058 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:32.679000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:26:32.680264 augenrules[1078]: No rules Feb 8 23:26:32.680630 systemd[1]: Finished audit-rules.service. Feb 8 23:26:32.705302 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:26:32.706257 systemd[1]: Reached target time-set.target. Feb 8 23:26:32.706483 systemd-timesyncd[1068]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 8 23:26:32.706519 systemd-timesyncd[1068]: Initial clock synchronization to Thu 2024-02-08 23:26:32.914508 UTC. Feb 8 23:26:32.730567 systemd-resolved[1064]: Positive Trust Anchors: Feb 8 23:26:32.730579 systemd-resolved[1064]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:26:32.730616 systemd-resolved[1064]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:26:32.730906 systemd[1]: Finished ldconfig.service. Feb 8 23:26:32.732748 systemd[1]: Starting systemd-update-done.service... Feb 8 23:26:32.738492 systemd[1]: Finished systemd-update-done.service. Feb 8 23:26:32.742077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:26:32.742602 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:26:32.743589 systemd-resolved[1064]: Defaulting to hostname 'linux'. Feb 8 23:26:32.745136 systemd[1]: Started systemd-resolved.service. Feb 8 23:26:32.745772 systemd[1]: Reached target network.target. Feb 8 23:26:32.746440 systemd[1]: Reached target nss-lookup.target. Feb 8 23:26:32.747090 systemd[1]: Reached target sysinit.target. Feb 8 23:26:32.747705 systemd[1]: Started motdgen.path. Feb 8 23:26:32.748258 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:26:32.749168 systemd[1]: Started logrotate.timer. Feb 8 23:26:32.749742 systemd[1]: Started mdadm.timer. Feb 8 23:26:32.750297 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:26:32.750890 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:26:32.750917 systemd[1]: Reached target paths.target. Feb 8 23:26:32.751451 systemd[1]: Reached target timers.target. Feb 8 23:26:32.752317 systemd[1]: Listening on dbus.socket. 
Feb 8 23:26:32.753846 systemd[1]: Starting docker.socket... Feb 8 23:26:32.756369 systemd[1]: Listening on sshd.socket. Feb 8 23:26:32.757071 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:26:32.757421 systemd[1]: Listening on docker.socket. Feb 8 23:26:32.758025 systemd[1]: Reached target sockets.target. Feb 8 23:26:32.758569 systemd[1]: Reached target basic.target. Feb 8 23:26:32.759152 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:26:32.759172 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:26:32.760097 systemd[1]: Starting containerd.service... Feb 8 23:26:32.761505 systemd[1]: Starting dbus.service... Feb 8 23:26:32.762724 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:26:32.764214 systemd[1]: Starting extend-filesystems.service... Feb 8 23:26:32.764897 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:26:32.765715 systemd[1]: Starting motdgen.service... Feb 8 23:26:32.765954 jq[1089]: false Feb 8 23:26:32.767683 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:26:32.769376 systemd[1]: Starting prepare-critools.service... Feb 8 23:26:32.771849 systemd[1]: Starting prepare-helm.service... Feb 8 23:26:32.773327 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:26:32.775393 systemd[1]: Starting sshd-keygen.service... Feb 8 23:26:32.778495 systemd[1]: Starting systemd-logind.service... 
Feb 8 23:26:32.783757 extend-filesystems[1090]: Found sr0 Feb 8 23:26:32.779410 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:26:32.779508 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:26:32.780189 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:26:32.781235 systemd[1]: Starting update-engine.service... Feb 8 23:26:32.783018 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:26:32.785659 jq[1110]: true Feb 8 23:26:32.786103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:26:32.786005 dbus-daemon[1088]: [system] SELinux support is enabled Feb 8 23:26:32.786263 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:26:32.786391 systemd[1]: Started dbus.service. Feb 8 23:26:32.789460 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda1 Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda2 Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda3 Feb 8 23:26:32.813111 extend-filesystems[1090]: Found usr Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda4 Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda6 Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda7 Feb 8 23:26:32.813111 extend-filesystems[1090]: Found vda9 Feb 8 23:26:32.813111 extend-filesystems[1090]: Checking size of /dev/vda9 Feb 8 23:26:32.789607 systemd[1]: Finished motdgen.service. 
Feb 8 23:26:32.823399 update_engine[1108]: I0208 23:26:32.811869 1108 main.cc:92] Flatcar Update Engine starting Feb 8 23:26:32.823399 update_engine[1108]: I0208 23:26:32.814074 1108 update_check_scheduler.cc:74] Next update check in 8m19s Feb 8 23:26:32.823601 tar[1113]: crictl Feb 8 23:26:32.792246 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:26:32.823864 tar[1115]: linux-amd64/helm Feb 8 23:26:32.792443 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:26:32.824102 jq[1116]: true Feb 8 23:26:32.795157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:26:32.795181 systemd[1]: Reached target system-config.target. Feb 8 23:26:32.824407 tar[1112]: ./ Feb 8 23:26:32.824407 tar[1112]: ./macvlan Feb 8 23:26:32.796034 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:26:32.796050 systemd[1]: Reached target user-config.target. Feb 8 23:26:32.814088 systemd[1]: Started update-engine.service. Feb 8 23:26:32.816650 systemd[1]: Started locksmithd.service. Feb 8 23:26:32.826757 extend-filesystems[1090]: Resized partition /dev/vda9 Feb 8 23:26:32.829301 extend-filesystems[1146]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:26:32.830197 bash[1141]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:26:32.829921 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:26:32.831989 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 8 23:26:32.850154 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 8 23:26:32.866236 env[1117]: time="2024-02-08T23:26:32.850462986Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:26:32.861470 systemd[1]: Created slice system-sshd.slice. 
Feb 8 23:26:32.866449 systemd-logind[1104]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:26:32.866465 systemd-logind[1104]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:26:32.866896 systemd-logind[1104]: New seat seat0. Feb 8 23:26:32.868887 extend-filesystems[1146]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:26:32.868887 extend-filesystems[1146]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 8 23:26:32.868887 extend-filesystems[1146]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 8 23:26:32.872738 extend-filesystems[1090]: Resized filesystem in /dev/vda9 Feb 8 23:26:32.873669 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:26:32.873836 systemd[1]: Finished extend-filesystems.service. Feb 8 23:26:32.874863 systemd[1]: Started systemd-logind.service. Feb 8 23:26:32.878712 tar[1112]: ./static Feb 8 23:26:32.884893 env[1117]: time="2024-02-08T23:26:32.884618948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:26:32.884893 env[1117]: time="2024-02-08T23:26:32.884769851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:26:32.886607 env[1117]: time="2024-02-08T23:26:32.886449771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:26:32.886607 env[1117]: time="2024-02-08T23:26:32.886492421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:26:32.886935 env[1117]: time="2024-02-08T23:26:32.886915043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:26:32.887045 env[1117]: time="2024-02-08T23:26:32.887026713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:26:32.887120 env[1117]: time="2024-02-08T23:26:32.887101122Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:26:32.887189 env[1117]: time="2024-02-08T23:26:32.887171194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:26:32.887321 env[1117]: time="2024-02-08T23:26:32.887304223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:26:32.887594 env[1117]: time="2024-02-08T23:26:32.887577505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:26:32.887774 env[1117]: time="2024-02-08T23:26:32.887755589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:26:32.887858 env[1117]: time="2024-02-08T23:26:32.887840278Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 8 23:26:32.887988 env[1117]: time="2024-02-08T23:26:32.887948521Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:26:32.888058 env[1117]: time="2024-02-08T23:26:32.888041225Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898614340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898668792Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898680794Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898714367Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898775772Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898789117Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898821768Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898835755Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898851113Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898863186Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898873976Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.898896579Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.899055807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:26:32.900644 env[1117]: time="2024-02-08T23:26:32.899135998Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899411685Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899433175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899448433Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899511391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899525318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899592413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899603654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899625375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899636386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899646184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899656243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899667664Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899794051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899816093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901394 env[1117]: time="2024-02-08T23:26:32.899828065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901667 env[1117]: time="2024-02-08T23:26:32.899851279Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:26:32.901667 env[1117]: time="2024-02-08T23:26:32.899865285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:26:32.901667 env[1117]: time="2024-02-08T23:26:32.899876055Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:26:32.901667 env[1117]: time="2024-02-08T23:26:32.899894650Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:26:32.901667 env[1117]: time="2024-02-08T23:26:32.899940175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:26:32.901761 env[1117]: time="2024-02-08T23:26:32.900159266Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:26:32.901761 env[1117]: time="2024-02-08T23:26:32.900215912Z" level=info msg="Connect containerd service" Feb 8 23:26:32.901761 env[1117]: time="2024-02-08T23:26:32.900245067Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:26:32.901761 env[1117]: time="2024-02-08T23:26:32.901221368Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:26:32.902786 env[1117]: time="2024-02-08T23:26:32.901945646Z" level=info msg="Start subscribing containerd event" Feb 8 23:26:32.902786 env[1117]: time="2024-02-08T23:26:32.901999306Z" level=info msg="Start recovering state" Feb 8 23:26:32.902786 env[1117]: time="2024-02-08T23:26:32.902046715Z" level=info msg="Start event monitor" Feb 8 23:26:32.902786 env[1117]: time="2024-02-08T23:26:32.902055001Z" level=info msg="Start snapshots syncer" Feb 8 23:26:32.902786 env[1117]: time="2024-02-08T23:26:32.902062665Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:26:32.902786 env[1117]: 
time="2024-02-08T23:26:32.902070510Z" level=info msg="Start streaming server" Feb 8 23:26:32.902907 tar[1112]: ./vlan Feb 8 23:26:32.903093 env[1117]: time="2024-02-08T23:26:32.903078530Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:26:32.903223 env[1117]: time="2024-02-08T23:26:32.903207311Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:26:32.903441 systemd[1]: Started containerd.service. Feb 8 23:26:32.932537 env[1117]: time="2024-02-08T23:26:32.932485829Z" level=info msg="containerd successfully booted in 0.071969s" Feb 8 23:26:32.935141 tar[1112]: ./portmap Feb 8 23:26:32.965069 tar[1112]: ./host-local Feb 8 23:26:32.991348 tar[1112]: ./vrf Feb 8 23:26:33.019398 tar[1112]: ./bridge Feb 8 23:26:33.058521 tar[1112]: ./tuning Feb 8 23:26:33.088107 tar[1112]: ./firewall Feb 8 23:26:33.128650 tar[1112]: ./host-device Feb 8 23:26:33.159444 tar[1112]: ./sbr Feb 8 23:26:33.188071 tar[1112]: ./loopback Feb 8 23:26:33.215209 tar[1112]: ./dhcp Feb 8 23:26:33.268554 systemd[1]: Finished prepare-critools.service. Feb 8 23:26:33.280812 tar[1115]: linux-amd64/LICENSE Feb 8 23:26:33.280953 tar[1115]: linux-amd64/README.md Feb 8 23:26:33.281697 locksmithd[1142]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:26:33.286286 systemd[1]: Finished prepare-helm.service. Feb 8 23:26:33.293686 tar[1112]: ./ptp Feb 8 23:26:33.326658 tar[1112]: ./ipvlan Feb 8 23:26:33.356030 tar[1112]: ./bandwidth Feb 8 23:26:33.393861 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:26:33.781884 sshd_keygen[1109]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:26:33.799703 systemd[1]: Finished sshd-keygen.service. Feb 8 23:26:33.801659 systemd[1]: Starting issuegen.service... Feb 8 23:26:33.802962 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:60360.service. Feb 8 23:26:33.806464 systemd[1]: issuegen.service: Deactivated successfully. 
Feb 8 23:26:33.806580 systemd[1]: Finished issuegen.service. Feb 8 23:26:33.808109 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:26:33.813657 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:26:33.815378 systemd[1]: Started getty@tty1.service. Feb 8 23:26:33.816892 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:26:33.817724 systemd[1]: Reached target getty.target. Feb 8 23:26:33.818405 systemd[1]: Reached target multi-user.target. Feb 8 23:26:33.819959 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:26:33.827212 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:26:33.827339 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:26:33.828289 systemd[1]: Startup finished in 530ms (kernel) + 6.021s (initrd) + 5.136s (userspace) = 11.688s. Feb 8 23:26:33.843820 sshd[1173]: Accepted publickey for core from 10.0.0.1 port 60360 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:26:33.845068 sshd[1173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:26:33.852184 systemd-logind[1104]: New session 1 of user core. Feb 8 23:26:33.853018 systemd[1]: Created slice user-500.slice. Feb 8 23:26:33.853928 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:26:33.860289 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:26:33.861251 systemd[1]: Starting user@500.service... Feb 8 23:26:33.863266 (systemd)[1183]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:26:33.925541 systemd[1183]: Queued start job for default target default.target. Feb 8 23:26:33.925915 systemd[1183]: Reached target paths.target. Feb 8 23:26:33.925930 systemd[1183]: Reached target sockets.target. Feb 8 23:26:33.925941 systemd[1183]: Reached target timers.target. Feb 8 23:26:33.925952 systemd[1183]: Reached target basic.target. 
Feb 8 23:26:33.925982 systemd[1183]: Reached target default.target. Feb 8 23:26:33.926031 systemd[1183]: Startup finished in 58ms. Feb 8 23:26:33.926062 systemd[1]: Started user@500.service. Feb 8 23:26:33.926859 systemd[1]: Started session-1.scope. Feb 8 23:26:33.977725 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:60364.service. Feb 8 23:26:33.982164 systemd-networkd[1019]: eth0: Gained IPv6LL Feb 8 23:26:34.015320 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 60364 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:26:34.016585 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:26:34.020045 systemd-logind[1104]: New session 2 of user core. Feb 8 23:26:34.020877 systemd[1]: Started session-2.scope. Feb 8 23:26:34.074941 sshd[1192]: pam_unix(sshd:session): session closed for user core Feb 8 23:26:34.077292 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:60364.service: Deactivated successfully. Feb 8 23:26:34.077944 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:26:34.078535 systemd-logind[1104]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:26:34.079582 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:60370.service. Feb 8 23:26:34.080382 systemd-logind[1104]: Removed session 2. Feb 8 23:26:34.117578 sshd[1198]: Accepted publickey for core from 10.0.0.1 port 60370 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:26:34.118528 sshd[1198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:26:34.121607 systemd-logind[1104]: New session 3 of user core. Feb 8 23:26:34.122534 systemd[1]: Started session-3.scope. Feb 8 23:26:34.171961 sshd[1198]: pam_unix(sshd:session): session closed for user core Feb 8 23:26:34.174111 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:60370.service: Deactivated successfully. Feb 8 23:26:34.174556 systemd[1]: session-3.scope: Deactivated successfully. 
Feb 8 23:26:34.174965 systemd-logind[1104]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:26:34.175704 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:60374.service. Feb 8 23:26:34.176356 systemd-logind[1104]: Removed session 3. Feb 8 23:26:34.211463 sshd[1204]: Accepted publickey for core from 10.0.0.1 port 60374 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:26:34.212363 sshd[1204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:26:34.214951 systemd-logind[1104]: New session 4 of user core. Feb 8 23:26:34.215638 systemd[1]: Started session-4.scope. Feb 8 23:26:34.268540 sshd[1204]: pam_unix(sshd:session): session closed for user core Feb 8 23:26:34.271082 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:60374.service: Deactivated successfully. Feb 8 23:26:34.271534 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:26:34.271938 systemd-logind[1104]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:26:34.272683 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:60382.service. Feb 8 23:26:34.273158 systemd-logind[1104]: Removed session 4. Feb 8 23:26:34.310255 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 60382 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:26:34.311334 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:26:34.314485 systemd-logind[1104]: New session 5 of user core. Feb 8 23:26:34.315301 systemd[1]: Started session-5.scope. Feb 8 23:26:34.371962 sudo[1213]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:26:34.372148 sudo[1213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:26:34.917729 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:26:34.928720 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:26:34.929136 systemd[1]: Reached target network-online.target. 
Feb 8 23:26:34.931931 systemd[1]: Starting docker.service... Feb 8 23:26:34.986038 env[1231]: time="2024-02-08T23:26:34.985371295Z" level=info msg="Starting up" Feb 8 23:26:34.987765 env[1231]: time="2024-02-08T23:26:34.987237269Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:26:34.987765 env[1231]: time="2024-02-08T23:26:34.987275069Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:26:34.987765 env[1231]: time="2024-02-08T23:26:34.987302766Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:26:34.987765 env[1231]: time="2024-02-08T23:26:34.987316713Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:26:34.989951 env[1231]: time="2024-02-08T23:26:34.989667677Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:26:34.989951 env[1231]: time="2024-02-08T23:26:34.989692935Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:26:34.989951 env[1231]: time="2024-02-08T23:26:34.989709566Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:26:34.989951 env[1231]: time="2024-02-08T23:26:34.989721831Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:26:35.625223 env[1231]: time="2024-02-08T23:26:35.625153513Z" level=info msg="Loading containers: start." Feb 8 23:26:35.723028 kernel: Initializing XFRM netlink socket Feb 8 23:26:35.763745 env[1231]: time="2024-02-08T23:26:35.763701700Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 8 23:26:35.813629 systemd-networkd[1019]: docker0: Link UP Feb 8 23:26:35.823355 env[1231]: time="2024-02-08T23:26:35.823304965Z" level=info msg="Loading containers: done." Feb 8 23:26:35.831338 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1930749647-merged.mount: Deactivated successfully. Feb 8 23:26:35.835143 env[1231]: time="2024-02-08T23:26:35.835096627Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:26:35.835327 env[1231]: time="2024-02-08T23:26:35.835309965Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:26:35.835432 env[1231]: time="2024-02-08T23:26:35.835418524Z" level=info msg="Daemon has completed initialization" Feb 8 23:26:35.851545 systemd[1]: Started docker.service. Feb 8 23:26:35.861792 env[1231]: time="2024-02-08T23:26:35.861736958Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:26:35.881119 systemd[1]: Reloading. Feb 8 23:26:35.938236 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-02-08T23:26:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:26:35.938261 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-02-08T23:26:35Z" level=info msg="torcx already run" Feb 8 23:26:35.997214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:26:35.997229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 8 23:26:36.015477 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:26:36.081451 systemd[1]: Started kubelet.service. Feb 8 23:26:36.148051 kubelet[1411]: E0208 23:26:36.147870 1411 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:26:36.150374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:26:36.150523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:26:36.491371 env[1117]: time="2024-02-08T23:26:36.491316124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:26:37.172274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886262516.mount: Deactivated successfully. 
Feb 8 23:26:39.449222 env[1117]: time="2024-02-08T23:26:39.449163057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:39.450716 env[1117]: time="2024-02-08T23:26:39.450687158Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:39.452519 env[1117]: time="2024-02-08T23:26:39.452495616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:39.455565 env[1117]: time="2024-02-08T23:26:39.455525421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:26:39.456354 env[1117]: time="2024-02-08T23:26:39.456316414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:39.465759 env[1117]: time="2024-02-08T23:26:39.465717299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:26:42.550280 env[1117]: time="2024-02-08T23:26:42.550228270Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:42.552205 env[1117]: time="2024-02-08T23:26:42.552162759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:26:42.553792 env[1117]: time="2024-02-08T23:26:42.553772970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:42.555311 env[1117]: time="2024-02-08T23:26:42.555289427Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:42.555864 env[1117]: time="2024-02-08T23:26:42.555828372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:26:42.570795 env[1117]: time="2024-02-08T23:26:42.570759084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:26:44.056416 env[1117]: time="2024-02-08T23:26:44.056348771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:44.058270 env[1117]: time="2024-02-08T23:26:44.058225902Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:44.060183 env[1117]: time="2024-02-08T23:26:44.060129480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:26:44.061537 env[1117]: time="2024-02-08T23:26:44.061510272Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:44.062418 env[1117]: time="2024-02-08T23:26:44.062373045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 8 23:26:44.074696 env[1117]: time="2024-02-08T23:26:44.074655216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 8 23:26:45.860183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292741260.mount: Deactivated successfully.
Feb 8 23:26:46.184635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 8 23:26:46.184814 systemd[1]: Stopped kubelet.service.
Feb 8 23:26:46.186051 systemd[1]: Started kubelet.service.
Feb 8 23:26:46.298754 kubelet[1453]: E0208 23:26:46.298692 1453 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 8 23:26:46.301806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 8 23:26:46.301924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 8 23:26:46.449608 env[1117]: time="2024-02-08T23:26:46.449499291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.451406 env[1117]: time="2024-02-08T23:26:46.451380376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.452776 env[1117]: time="2024-02-08T23:26:46.452743943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.454762 env[1117]: time="2024-02-08T23:26:46.454727235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.455178 env[1117]: time="2024-02-08T23:26:46.455149077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 8 23:26:46.467051 env[1117]: time="2024-02-08T23:26:46.467015332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 8 23:26:46.962663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825689592.mount: Deactivated successfully.
Feb 8 23:26:46.968068 env[1117]: time="2024-02-08T23:26:46.968027068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.969800 env[1117]: time="2024-02-08T23:26:46.969754947Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.971193 env[1117]: time="2024-02-08T23:26:46.971167418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.972703 env[1117]: time="2024-02-08T23:26:46.972677759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:46.973231 env[1117]: time="2024-02-08T23:26:46.973206036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 8 23:26:46.982223 env[1117]: time="2024-02-08T23:26:46.982185671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 8 23:26:47.982968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304135410.mount: Deactivated successfully.
Feb 8 23:26:53.346941 env[1117]: time="2024-02-08T23:26:53.346880763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:53.349003 env[1117]: time="2024-02-08T23:26:53.348951550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:53.350389 env[1117]: time="2024-02-08T23:26:53.350368438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:53.352064 env[1117]: time="2024-02-08T23:26:53.352016885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:53.352543 env[1117]: time="2024-02-08T23:26:53.352499405Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 8 23:26:53.360670 env[1117]: time="2024-02-08T23:26:53.360636858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 8 23:26:53.976863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790827096.mount: Deactivated successfully.
Feb 8 23:26:54.523770 env[1117]: time="2024-02-08T23:26:54.523719259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:54.525514 env[1117]: time="2024-02-08T23:26:54.525489849Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:54.527023 env[1117]: time="2024-02-08T23:26:54.526979751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:54.528303 env[1117]: time="2024-02-08T23:26:54.528271801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:26:54.528677 env[1117]: time="2024-02-08T23:26:54.528651047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 8 23:26:56.434761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 8 23:26:56.434985 systemd[1]: Stopped kubelet.service.
Feb 8 23:26:56.436282 systemd[1]: Started kubelet.service.
Feb 8 23:26:56.519116 kubelet[1547]: E0208 23:26:56.519051 1547 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 8 23:26:56.521139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 8 23:26:56.521254 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 8 23:26:57.053064 systemd[1]: Stopped kubelet.service.
Feb 8 23:26:57.064709 systemd[1]: Reloading.
Feb 8 23:26:57.113252 /usr/lib/systemd/system-generators/torcx-generator[1581]: time="2024-02-08T23:26:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 8 23:26:57.113557 /usr/lib/systemd/system-generators/torcx-generator[1581]: time="2024-02-08T23:26:57Z" level=info msg="torcx already run"
Feb 8 23:26:57.170068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:26:57.170081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:26:57.188311 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:26:57.258826 systemd[1]: Started kubelet.service.
Feb 8 23:26:57.381359 kubelet[1619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 8 23:26:57.381359 kubelet[1619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:26:57.381359 kubelet[1619]: I0208 23:26:57.381296 1619 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 8 23:26:57.382440 kubelet[1619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 8 23:26:57.382440 kubelet[1619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:26:57.776966 kubelet[1619]: I0208 23:26:57.776925 1619 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 8 23:26:57.776966 kubelet[1619]: I0208 23:26:57.776949 1619 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 8 23:26:57.777181 kubelet[1619]: I0208 23:26:57.777165 1619 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 8 23:26:57.782761 kubelet[1619]: I0208 23:26:57.782734 1619 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 8 23:26:57.783405 kubelet[1619]: E0208 23:26:57.783368 1619 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.786188 kubelet[1619]: I0208 23:26:57.786165 1619 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 8 23:26:57.786354 kubelet[1619]: I0208 23:26:57.786340 1619 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 8 23:26:57.786412 kubelet[1619]: I0208 23:26:57.786400 1619 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 8 23:26:57.786501 kubelet[1619]: I0208 23:26:57.786418 1619 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 8 23:26:57.786501 kubelet[1619]: I0208 23:26:57.786429 1619 container_manager_linux.go:308] "Creating device plugin manager"
Feb 8 23:26:57.786549 kubelet[1619]: I0208 23:26:57.786507 1619 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:26:57.788883 kubelet[1619]: I0208 23:26:57.788869 1619 kubelet.go:398] "Attempting to sync node with API server"
Feb 8 23:26:57.788883 kubelet[1619]: I0208 23:26:57.788886 1619 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 8 23:26:57.788982 kubelet[1619]: I0208 23:26:57.788916 1619 kubelet.go:297] "Adding apiserver pod source"
Feb 8 23:26:57.788982 kubelet[1619]: I0208 23:26:57.788930 1619 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 8 23:26:57.790111 kubelet[1619]: I0208 23:26:57.790098 1619 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 8 23:26:57.790205 kubelet[1619]: W0208 23:26:57.790112 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.790205 kubelet[1619]: E0208 23:26:57.790212 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.790205 kubelet[1619]: W0208 23:26:57.790111 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.790401 kubelet[1619]: E0208 23:26:57.790235 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.790466 kubelet[1619]: W0208 23:26:57.790450 1619 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 8 23:26:57.790776 kubelet[1619]: I0208 23:26:57.790754 1619 server.go:1186] "Started kubelet"
Feb 8 23:26:57.790872 kubelet[1619]: I0208 23:26:57.790855 1619 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 8 23:26:57.791087 kubelet[1619]: E0208 23:26:57.791014 1619 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eb9f8cf182", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 790734722, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 790734722, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.113:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.113:6443: connect: connection refused'(may retry after sleeping)
Feb 8 23:26:57.791694 kubelet[1619]: I0208 23:26:57.791536 1619 server.go:451] "Adding debug handlers to kubelet server"
Feb 8 23:26:57.792179 kubelet[1619]: E0208 23:26:57.792150 1619 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 8 23:26:57.792228 kubelet[1619]: E0208 23:26:57.792207 1619 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 8 23:26:57.793805 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 8 23:26:57.793918 kubelet[1619]: I0208 23:26:57.793886 1619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 8 23:26:57.795681 kubelet[1619]: I0208 23:26:57.795662 1619 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 8 23:26:57.796032 kubelet[1619]: E0208 23:26:57.796007 1619 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.796084 kubelet[1619]: I0208 23:26:57.796041 1619 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 8 23:26:57.797000 kubelet[1619]: W0208 23:26:57.796960 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.797302 kubelet[1619]: E0208 23:26:57.797292 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.812166 kubelet[1619]: I0208 23:26:57.812124 1619 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 8 23:26:57.812166 kubelet[1619]: I0208 23:26:57.812141 1619 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 8 23:26:57.812166 kubelet[1619]: I0208 23:26:57.812153 1619 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:26:57.819004 kubelet[1619]: I0208 23:26:57.818984 1619 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 8 23:26:57.836349 kubelet[1619]: I0208 23:26:57.836330 1619 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 8 23:26:57.836469 kubelet[1619]: I0208 23:26:57.836349 1619 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 8 23:26:57.836469 kubelet[1619]: I0208 23:26:57.836368 1619 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 8 23:26:57.836469 kubelet[1619]: I0208 23:26:57.836376 1619 policy_none.go:49] "None policy: Start"
Feb 8 23:26:57.836469 kubelet[1619]: E0208 23:26:57.836424 1619 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 8 23:26:57.836846 kubelet[1619]: W0208 23:26:57.836780 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.836846 kubelet[1619]: E0208 23:26:57.836849 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.837063 kubelet[1619]: I0208 23:26:57.836928 1619 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 8 23:26:57.837063 kubelet[1619]: I0208 23:26:57.836942 1619 state_mem.go:35] "Initializing new in-memory state store"
Feb 8 23:26:57.842497 systemd[1]: Created slice kubepods.slice.
Feb 8 23:26:57.846044 systemd[1]: Created slice kubepods-burstable.slice.
Feb 8 23:26:57.848663 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 8 23:26:57.856744 kubelet[1619]: I0208 23:26:57.856707 1619 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 8 23:26:57.857387 kubelet[1619]: I0208 23:26:57.857208 1619 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 8 23:26:57.857787 kubelet[1619]: E0208 23:26:57.857767 1619 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 8 23:26:57.897516 kubelet[1619]: I0208 23:26:57.897495 1619 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 8 23:26:57.897889 kubelet[1619]: E0208 23:26:57.897867 1619 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 8 23:26:57.937044 kubelet[1619]: I0208 23:26:57.937003 1619 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:26:57.938147 kubelet[1619]: I0208 23:26:57.938110 1619 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:26:57.938850 kubelet[1619]: I0208 23:26:57.938831 1619 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:26:57.939863 kubelet[1619]: I0208 23:26:57.939832 1619 status_manager.go:698] "Failed to get status for pod" podUID=ca03ad8bd986c56d042acb1c93b7f35b pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.113:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.113:6443: connect: connection refused"
Feb 8 23:26:57.940082 kubelet[1619]: I0208 23:26:57.940033 1619 status_manager.go:698] "Failed to get status for pod" podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.113:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.113:6443: connect: connection refused"
Feb 8 23:26:57.940625 kubelet[1619]: I0208 23:26:57.940609 1619 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.113:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.113:6443: connect: connection refused"
Feb 8 23:26:57.943161 systemd[1]: Created slice kubepods-burstable-podca03ad8bd986c56d042acb1c93b7f35b.slice.
Feb 8 23:26:57.955962 systemd[1]: Created slice kubepods-burstable-pod550020dd9f101bcc23e1d3c651841c4d.slice.
Feb 8 23:26:57.966769 systemd[1]: Created slice kubepods-burstable-pod72ae17a74a2eae76daac6d298477aff0.slice.
Feb 8 23:26:57.996466 kubelet[1619]: I0208 23:26:57.996441 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost"
Feb 8 23:26:57.996539 kubelet[1619]: I0208 23:26:57.996479 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca03ad8bd986c56d042acb1c93b7f35b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca03ad8bd986c56d042acb1c93b7f35b\") " pod="kube-system/kube-apiserver-localhost"
Feb 8 23:26:57.996539 kubelet[1619]: E0208 23:26:57.996467 1619 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:57.996539 kubelet[1619]: I0208 23:26:57.996499 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 8 23:26:57.996615 kubelet[1619]: I0208 23:26:57.996559 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 8 23:26:57.996615 kubelet[1619]: I0208 23:26:57.996591 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 8 23:26:57.996659 kubelet[1619]: I0208 23:26:57.996622 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca03ad8bd986c56d042acb1c93b7f35b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ca03ad8bd986c56d042acb1c93b7f35b\") " pod="kube-system/kube-apiserver-localhost"
Feb 8 23:26:57.996659 kubelet[1619]: I0208 23:26:57.996644 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 8 23:26:57.996659 kubelet[1619]: I0208 23:26:57.996660 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost"
Feb 8 23:26:57.996729 kubelet[1619]: I0208 23:26:57.996679 1619 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca03ad8bd986c56d042acb1c93b7f35b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca03ad8bd986c56d042acb1c93b7f35b\") " pod="kube-system/kube-apiserver-localhost"
Feb 8 23:26:58.099237 kubelet[1619]: I0208 23:26:58.099105 1619 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 8 23:26:58.099413 kubelet[1619]: E0208 23:26:58.099398 1619 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 8 23:26:58.254216 kubelet[1619]: E0208 23:26:58.254157 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:26:58.254870 env[1117]: time="2024-02-08T23:26:58.254827148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ca03ad8bd986c56d042acb1c93b7f35b,Namespace:kube-system,Attempt:0,}"
Feb 8 23:26:58.266338 kubelet[1619]: E0208 23:26:58.266260 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:26:58.266940 env[1117]: time="2024-02-08T23:26:58.266697564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}"
Feb 8 23:26:58.268131 kubelet[1619]: E0208 23:26:58.268108 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:26:58.268340 env[1117]: time="2024-02-08T23:26:58.268307194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}"
Feb 8 23:26:58.397640 kubelet[1619]: E0208 23:26:58.397508 1619 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:58.500992 kubelet[1619]: I0208 23:26:58.500952 1619 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 8 23:26:58.501327 kubelet[1619]: E0208 23:26:58.501306 1619 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 8 23:26:58.700711 kubelet[1619]: W0208 23:26:58.700620 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:58.700711 kubelet[1619]: E0208 23:26:58.700707 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:58.785715 kubelet[1619]: W0208 23:26:58.785642 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:58.785715 kubelet[1619]: E0208 23:26:58.785718 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:58.818169 kubelet[1619]: W0208 23:26:58.818114 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:58.818169 kubelet[1619]: E0208 23:26:58.818158 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:59.006664 kubelet[1619]: W0208 23:26:59.006551 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:59.006664 kubelet[1619]: E0208 23:26:59.006608 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:59.198267 kubelet[1619]: E0208 23:26:59.198193 1619 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:26:59.307264 kubelet[1619]: I0208 23:26:59.306839 1619 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 8 23:26:59.307264 kubelet[1619]: E0208 23:26:59.307214 1619 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 8 23:26:59.915437 kubelet[1619]: E0208 23:26:59.915305 1619 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:27:00.456369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244445437.mount: Deactivated successfully.
Feb 8 23:27:00.712130 env[1117]: time="2024-02-08T23:27:00.711992087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.733948 env[1117]: time="2024-02-08T23:27:00.733887148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.755370 env[1117]: time="2024-02-08T23:27:00.755322463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.769462 env[1117]: time="2024-02-08T23:27:00.769396033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.790451 env[1117]: time="2024-02-08T23:27:00.790408597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.799066 kubelet[1619]: E0208 23:27:00.799031 1619 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:27:00.805187 env[1117]: time="2024-02-08T23:27:00.805134281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.812591 env[1117]: time="2024-02-08T23:27:00.812560792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.814948 env[1117]: time="2024-02-08T23:27:00.814921342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.821727 env[1117]: time="2024-02-08T23:27:00.821689816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.828133 env[1117]: time="2024-02-08T23:27:00.828103383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.830538 kubelet[1619]: W0208 23:27:00.830492 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:27:00.830538 kubelet[1619]: E0208 23:27:00.830530 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:27:00.833475 env[1117]: time="2024-02-08T23:27:00.833442438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.857426 env[1117]: time="2024-02-08T23:27:00.857389623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:00.908754 kubelet[1619]: I0208 23:27:00.908727 1619 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb 8 23:27:00.909025 kubelet[1619]: E0208 23:27:00.909000 1619 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 8 23:27:00.994953 kubelet[1619]: W0208 23:27:00.994764 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:27:00.994953 kubelet[1619]: E0208 23:27:00.994842 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 8 23:27:01.043226 env[1117]: time="2024-02-08T23:27:01.043162787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:01.043226 env[1117]: time="2024-02-08T23:27:01.043198466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:01.043226 env[1117]: time="2024-02-08T23:27:01.043208591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:27:01.043414 env[1117]: time="2024-02-08T23:27:01.043373161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ff8d59fdec34a6e6db7f7f0d7ee9d770dc26526b42ad5bd6cf307b6cb8e0602 pid=1696 runtime=io.containerd.runc.v2 Feb 8 23:27:01.055171 systemd[1]: Started cri-containerd-5ff8d59fdec34a6e6db7f7f0d7ee9d770dc26526b42ad5bd6cf307b6cb8e0602.scope. Feb 8 23:27:01.086890 env[1117]: time="2024-02-08T23:27:01.086825826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:27:01.088189 env[1117]: time="2024-02-08T23:27:01.088065384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:27:01.088189 env[1117]: time="2024-02-08T23:27:01.088136622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:27:01.089755 env[1117]: time="2024-02-08T23:27:01.088434634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41197f90f432c5c2f6f6fac8664acea116fb0ef9974135702a953a7aae0239df pid=1731 runtime=io.containerd.runc.v2 Feb 8 23:27:01.094653 env[1117]: time="2024-02-08T23:27:01.094450927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:27:01.094653 env[1117]: time="2024-02-08T23:27:01.094503006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:27:01.094653 env[1117]: time="2024-02-08T23:27:01.094515638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:27:01.099533 env[1117]: time="2024-02-08T23:27:01.099472912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ca03ad8bd986c56d042acb1c93b7f35b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ff8d59fdec34a6e6db7f7f0d7ee9d770dc26526b42ad5bd6cf307b6cb8e0602\"" Feb 8 23:27:01.100671 kubelet[1619]: E0208 23:27:01.100650 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:01.100996 env[1117]: time="2024-02-08T23:27:01.094872046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88afdacb339251f06c09603a66b46cb121bd5d7fb158796b13efa61336f329cc pid=1751 runtime=io.containerd.runc.v2 Feb 8 23:27:01.107187 env[1117]: time="2024-02-08T23:27:01.107134253Z" level=info msg="CreateContainer within sandbox \"5ff8d59fdec34a6e6db7f7f0d7ee9d770dc26526b42ad5bd6cf307b6cb8e0602\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:27:01.111601 systemd[1]: Started cri-containerd-88afdacb339251f06c09603a66b46cb121bd5d7fb158796b13efa61336f329cc.scope. Feb 8 23:27:01.115679 systemd[1]: Started cri-containerd-41197f90f432c5c2f6f6fac8664acea116fb0ef9974135702a953a7aae0239df.scope. 
Feb 8 23:27:01.194875 env[1117]: time="2024-02-08T23:27:01.194829560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"41197f90f432c5c2f6f6fac8664acea116fb0ef9974135702a953a7aae0239df\"" Feb 8 23:27:01.197107 env[1117]: time="2024-02-08T23:27:01.196978836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"88afdacb339251f06c09603a66b46cb121bd5d7fb158796b13efa61336f329cc\"" Feb 8 23:27:01.201752 kubelet[1619]: E0208 23:27:01.198471 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:01.201752 kubelet[1619]: E0208 23:27:01.200221 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:01.201951 env[1117]: time="2024-02-08T23:27:01.201546028Z" level=info msg="CreateContainer within sandbox \"88afdacb339251f06c09603a66b46cb121bd5d7fb158796b13efa61336f329cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:27:01.201951 env[1117]: time="2024-02-08T23:27:01.201935698Z" level=info msg="CreateContainer within sandbox \"41197f90f432c5c2f6f6fac8664acea116fb0ef9974135702a953a7aae0239df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:27:01.267263 kubelet[1619]: W0208 23:27:01.267154 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 8 23:27:01.267263 kubelet[1619]: E0208 
23:27:01.267190 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 8 23:27:01.282701 kubelet[1619]: W0208 23:27:01.282640 1619 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 8 23:27:01.282701 kubelet[1619]: E0208 23:27:01.282696 1619 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Feb 8 23:27:01.571278 env[1117]: time="2024-02-08T23:27:01.571103564Z" level=info msg="CreateContainer within sandbox \"5ff8d59fdec34a6e6db7f7f0d7ee9d770dc26526b42ad5bd6cf307b6cb8e0602\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6cb9a4132abb4ab2998cadc447fb4b5b8b6dbefadde56a72f1f88ee233fe1aac\"" Feb 8 23:27:01.571869 env[1117]: time="2024-02-08T23:27:01.571833402Z" level=info msg="StartContainer for \"6cb9a4132abb4ab2998cadc447fb4b5b8b6dbefadde56a72f1f88ee233fe1aac\"" Feb 8 23:27:01.585686 systemd[1]: Started cri-containerd-6cb9a4132abb4ab2998cadc447fb4b5b8b6dbefadde56a72f1f88ee233fe1aac.scope. 
Feb 8 23:27:01.636705 env[1117]: time="2024-02-08T23:27:01.636652401Z" level=info msg="CreateContainer within sandbox \"88afdacb339251f06c09603a66b46cb121bd5d7fb158796b13efa61336f329cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"308f26c8e1b4b1f4ab0a56f9620fc8506529d23c57cdaef4a7a21aa4477033a6\"" Feb 8 23:27:01.637201 env[1117]: time="2024-02-08T23:27:01.637152036Z" level=info msg="StartContainer for \"308f26c8e1b4b1f4ab0a56f9620fc8506529d23c57cdaef4a7a21aa4477033a6\"" Feb 8 23:27:01.637499 env[1117]: time="2024-02-08T23:27:01.637471241Z" level=info msg="StartContainer for \"6cb9a4132abb4ab2998cadc447fb4b5b8b6dbefadde56a72f1f88ee233fe1aac\" returns successfully" Feb 8 23:27:01.637754 env[1117]: time="2024-02-08T23:27:01.637672904Z" level=info msg="CreateContainer within sandbox \"41197f90f432c5c2f6f6fac8664acea116fb0ef9974135702a953a7aae0239df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8546defbac79e4923082325c5e09a3c70a6c00f74845eed93f2f09708587cfc1\"" Feb 8 23:27:01.638677 env[1117]: time="2024-02-08T23:27:01.638644877Z" level=info msg="StartContainer for \"8546defbac79e4923082325c5e09a3c70a6c00f74845eed93f2f09708587cfc1\"" Feb 8 23:27:01.659221 systemd[1]: Started cri-containerd-308f26c8e1b4b1f4ab0a56f9620fc8506529d23c57cdaef4a7a21aa4477033a6.scope. Feb 8 23:27:01.660145 systemd[1]: Started cri-containerd-8546defbac79e4923082325c5e09a3c70a6c00f74845eed93f2f09708587cfc1.scope. 
Feb 8 23:27:01.706169 env[1117]: time="2024-02-08T23:27:01.706100005Z" level=info msg="StartContainer for \"308f26c8e1b4b1f4ab0a56f9620fc8506529d23c57cdaef4a7a21aa4477033a6\" returns successfully" Feb 8 23:27:01.709824 env[1117]: time="2024-02-08T23:27:01.709778030Z" level=info msg="StartContainer for \"8546defbac79e4923082325c5e09a3c70a6c00f74845eed93f2f09708587cfc1\" returns successfully" Feb 8 23:27:01.856354 kubelet[1619]: E0208 23:27:01.856265 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:01.858278 kubelet[1619]: E0208 23:27:01.858261 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:01.859776 kubelet[1619]: E0208 23:27:01.859762 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:02.861999 kubelet[1619]: E0208 23:27:02.861956 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:02.862329 kubelet[1619]: E0208 23:27:02.862277 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:02.862699 kubelet[1619]: E0208 23:27:02.862680 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:03.228338 kubelet[1619]: E0208 23:27:03.228236 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eb9f8cf182", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 790734722, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 790734722, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.281615 kubelet[1619]: E0208 23:27:03.281557 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eb9fa2f2a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 792176803, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 792176803, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.334938 kubelet[1619]: E0208 23:27:03.334857 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca4ab5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811532469, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811532469, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.388791 kubelet[1619]: E0208 23:27:03.388664 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca7fdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811546079, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811546079, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.425388 kubelet[1619]: E0208 23:27:03.425335 1619 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 8 23:27:03.443211 kubelet[1619]: E0208 23:27:03.443094 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca8e22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811549730, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811549730, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.496188 kubelet[1619]: E0208 23:27:03.495987 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba38bea3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 857776189, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 857776189, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.552170 kubelet[1619]: E0208 23:27:03.552051 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca4ab5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811532469, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 897446900, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.608773 kubelet[1619]: E0208 23:27:03.608659 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca7fdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811546079, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 897456158, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.664515 kubelet[1619]: E0208 23:27:03.664409 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca8e22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node localhost status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811549730, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 897471572, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:03.863217 kubelet[1619]: E0208 23:27:03.863103 1619 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:03.882887 kubelet[1619]: E0208 23:27:03.882801 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca4ab5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node localhost status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811532469, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 938040171, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:04.003944 kubelet[1619]: E0208 23:27:04.003889 1619 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 8 23:27:04.111529 kubelet[1619]: I0208 23:27:04.111487 1619 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:27:04.282062 kubelet[1619]: E0208 23:27:04.281931 1619 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b206eba0ca7fdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node localhost status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 811546079, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 57, 938057171, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:27:04.489368 kubelet[1619]: I0208 23:27:04.489332 1619 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 8 23:27:04.496096 kubelet[1619]: E0208 23:27:04.496078 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:04.597310 kubelet[1619]: E0208 23:27:04.597186 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:04.697377 kubelet[1619]: E0208 23:27:04.697342 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:04.798175 kubelet[1619]: E0208 23:27:04.798145 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:04.898883 kubelet[1619]: E0208 23:27:04.898761 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:04.999318 kubelet[1619]: E0208 23:27:04.999265 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:05.099880 kubelet[1619]: E0208 23:27:05.099837 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:05.200324 kubelet[1619]: E0208 23:27:05.200285 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:05.301243 kubelet[1619]: E0208 23:27:05.301206 1619 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 8 23:27:05.794544 kubelet[1619]: I0208 23:27:05.794496 1619 apiserver.go:52] "Watching apiserver" Feb 8 23:27:05.796529 kubelet[1619]: I0208 23:27:05.796487 1619 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:27:05.865876 kubelet[1619]: I0208 
23:27:05.865834 1619 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:27:05.892578 systemd[1]: Reloading. Feb 8 23:27:05.945806 /usr/lib/systemd/system-generators/torcx-generator[1957]: time="2024-02-08T23:27:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:27:05.945835 /usr/lib/systemd/system-generators/torcx-generator[1957]: time="2024-02-08T23:27:05Z" level=info msg="torcx already run" Feb 8 23:27:06.004667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:27:06.004682 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:27:06.024053 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:27:06.110633 systemd[1]: Stopping kubelet.service... Feb 8 23:27:06.129245 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:27:06.129409 systemd[1]: Stopped kubelet.service. Feb 8 23:27:06.130825 systemd[1]: Started kubelet.service. Feb 8 23:27:06.183017 kubelet[1998]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:27:06.183017 kubelet[1998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:27:06.183349 kubelet[1998]: I0208 23:27:06.183048 1998 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:27:06.184237 kubelet[1998]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:27:06.184237 kubelet[1998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:27:06.186808 kubelet[1998]: I0208 23:27:06.186780 1998 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:27:06.186808 kubelet[1998]: I0208 23:27:06.186807 1998 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:27:06.187032 kubelet[1998]: I0208 23:27:06.187023 1998 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:27:06.188281 kubelet[1998]: I0208 23:27:06.188262 1998 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:27:06.188840 kubelet[1998]: I0208 23:27:06.188810 1998 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:27:06.192295 kubelet[1998]: I0208 23:27:06.192275 1998 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:27:06.192458 kubelet[1998]: I0208 23:27:06.192440 1998 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:27:06.192515 kubelet[1998]: I0208 23:27:06.192497 1998 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:27:06.192596 kubelet[1998]: I0208 23:27:06.192522 1998 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:27:06.192596 kubelet[1998]: I0208 23:27:06.192533 1998 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:27:06.192596 kubelet[1998]: I0208 23:27:06.192558 1998 state_mem.go:36] "Initialized new 
in-memory state store" Feb 8 23:27:06.196710 kubelet[1998]: I0208 23:27:06.196676 1998 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:27:06.196824 kubelet[1998]: I0208 23:27:06.196705 1998 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:27:06.196879 kubelet[1998]: I0208 23:27:06.196853 1998 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:27:06.196879 kubelet[1998]: I0208 23:27:06.196878 1998 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:27:06.198134 kubelet[1998]: I0208 23:27:06.198044 1998 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:27:06.198794 kubelet[1998]: I0208 23:27:06.198773 1998 server.go:1186] "Started kubelet" Feb 8 23:27:06.198994 kubelet[1998]: I0208 23:27:06.198966 1998 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:27:06.199619 kubelet[1998]: I0208 23:27:06.199605 1998 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:27:06.205848 kubelet[1998]: I0208 23:27:06.201861 1998 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:27:06.205848 kubelet[1998]: I0208 23:27:06.204049 1998 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:27:06.205848 kubelet[1998]: I0208 23:27:06.205018 1998 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:27:06.207387 kubelet[1998]: E0208 23:27:06.207364 1998 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:27:06.207440 kubelet[1998]: E0208 23:27:06.207414 1998 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:27:06.223269 kubelet[1998]: I0208 23:27:06.223225 1998 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:27:06.240463 kubelet[1998]: I0208 23:27:06.240436 1998 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:27:06.240463 kubelet[1998]: I0208 23:27:06.240457 1998 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:27:06.240616 kubelet[1998]: I0208 23:27:06.240474 1998 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:27:06.240616 kubelet[1998]: E0208 23:27:06.240530 1998 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:27:06.245048 kubelet[1998]: I0208 23:27:06.245031 1998 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:27:06.245048 kubelet[1998]: I0208 23:27:06.245048 1998 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:27:06.245144 kubelet[1998]: I0208 23:27:06.245063 1998 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:27:06.245207 kubelet[1998]: I0208 23:27:06.245194 1998 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:27:06.245254 kubelet[1998]: I0208 23:27:06.245210 1998 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:27:06.245254 kubelet[1998]: I0208 23:27:06.245216 1998 policy_none.go:49] "None policy: Start" Feb 8 23:27:06.245631 kubelet[1998]: I0208 23:27:06.245622 1998 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:27:06.245705 kubelet[1998]: I0208 23:27:06.245692 1998 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:27:06.245873 kubelet[1998]: I0208 23:27:06.245861 1998 state_mem.go:75] "Updated machine memory state" Feb 8 23:27:06.249192 kubelet[1998]: I0208 23:27:06.249088 1998 manager.go:455] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:27:06.249469 kubelet[1998]: I0208 23:27:06.249459 1998 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:27:06.279307 sudo[2050]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:27:06.279530 sudo[2050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:27:06.307708 kubelet[1998]: I0208 23:27:06.307674 1998 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 8 23:27:06.315375 kubelet[1998]: I0208 23:27:06.315340 1998 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 8 23:27:06.315463 kubelet[1998]: I0208 23:27:06.315417 1998 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 8 23:27:06.341309 kubelet[1998]: I0208 23:27:06.341270 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:27:06.341430 kubelet[1998]: I0208 23:27:06.341345 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:27:06.341430 kubelet[1998]: I0208 23:27:06.341374 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:27:06.505992 kubelet[1998]: I0208 23:27:06.505946 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:27:06.505992 kubelet[1998]: I0208 23:27:06.505992 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 8 23:27:06.506178 kubelet[1998]: I0208 23:27:06.506013 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 8 23:27:06.506178 kubelet[1998]: I0208 23:27:06.506035 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca03ad8bd986c56d042acb1c93b7f35b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca03ad8bd986c56d042acb1c93b7f35b\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:27:06.506178 kubelet[1998]: I0208 23:27:06.506088 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca03ad8bd986c56d042acb1c93b7f35b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ca03ad8bd986c56d042acb1c93b7f35b\") " pod="kube-system/kube-apiserver-localhost" Feb 8 23:27:06.506247 kubelet[1998]: I0208 23:27:06.506157 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:27:06.506247 kubelet[1998]: I0208 23:27:06.506230 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca03ad8bd986c56d042acb1c93b7f35b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca03ad8bd986c56d042acb1c93b7f35b\") " pod="kube-system/kube-apiserver-localhost" 
Feb 8 23:27:06.506290 kubelet[1998]: I0208 23:27:06.506256 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:27:06.506290 kubelet[1998]: I0208 23:27:06.506275 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 8 23:27:06.647233 kubelet[1998]: E0208 23:27:06.647205 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:06.702677 kubelet[1998]: E0208 23:27:06.702634 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:06.741691 sudo[2050]: pam_unix(sudo:session): session closed for user root Feb 8 23:27:06.902416 kubelet[1998]: E0208 23:27:06.902294 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:07.197620 kubelet[1998]: I0208 23:27:07.197564 1998 apiserver.go:52] "Watching apiserver" Feb 8 23:27:07.405858 kubelet[1998]: I0208 23:27:07.405806 1998 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:27:07.411768 kubelet[1998]: I0208 23:27:07.411729 1998 reconciler.go:41] "Reconciler: start to sync 
state" Feb 8 23:27:07.600791 kubelet[1998]: E0208 23:27:07.600686 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 8 23:27:07.601200 kubelet[1998]: E0208 23:27:07.601183 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:07.950328 sudo[1213]: pam_unix(sudo:session): session closed for user root Feb 8 23:27:07.951461 sshd[1210]: pam_unix(sshd:session): session closed for user core Feb 8 23:27:07.954226 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:60382.service: Deactivated successfully. Feb 8 23:27:07.954992 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:27:07.955164 systemd[1]: session-5.scope: Consumed 4.193s CPU time. Feb 8 23:27:07.955528 systemd-logind[1104]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:27:07.956297 systemd-logind[1104]: Removed session 5. 
Feb 8 23:27:08.001872 kubelet[1998]: E0208 23:27:08.001836 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 8 23:27:08.002161 kubelet[1998]: E0208 23:27:08.002131 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:08.201632 kubelet[1998]: E0208 23:27:08.201521 1998 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 8 23:27:08.202055 kubelet[1998]: E0208 23:27:08.201899 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:08.247764 kubelet[1998]: E0208 23:27:08.247741 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:08.247764 kubelet[1998]: E0208 23:27:08.247761 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:08.247998 kubelet[1998]: E0208 23:27:08.247873 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:08.404454 kubelet[1998]: I0208 23:27:08.404422 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.404369518 pod.CreationTimestamp="2024-02-08 23:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-02-08 23:27:08.404355747 +0000 UTC m=+2.270364879" watchObservedRunningTime="2024-02-08 23:27:08.404369518 +0000 UTC m=+2.270378630" Feb 8 23:27:08.802314 kubelet[1998]: I0208 23:27:08.802274 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.802225591 pod.CreationTimestamp="2024-02-08 23:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:08.801883169 +0000 UTC m=+2.667892292" watchObservedRunningTime="2024-02-08 23:27:08.802225591 +0000 UTC m=+2.668234713" Feb 8 23:27:09.974885 kubelet[1998]: E0208 23:27:09.974842 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:11.480246 kubelet[1998]: E0208 23:27:11.480195 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:11.491158 kubelet[1998]: I0208 23:27:11.491055 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.491015021 pod.CreationTimestamp="2024-02-08 23:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:09.2027384 +0000 UTC m=+3.068747522" watchObservedRunningTime="2024-02-08 23:27:11.491015021 +0000 UTC m=+5.357024143" Feb 8 23:27:12.252344 kubelet[1998]: E0208 23:27:12.252312 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:13.254010 kubelet[1998]: E0208 23:27:13.253965 1998 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:16.836046 kubelet[1998]: E0208 23:27:16.836004 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:18.142091 update_engine[1108]: I0208 23:27:18.142040 1108 update_attempter.cc:509] Updating boot flags... Feb 8 23:27:19.582059 kubelet[1998]: I0208 23:27:19.582024 1998 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:27:19.582991 env[1117]: time="2024-02-08T23:27:19.582929553Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:27:19.583265 kubelet[1998]: I0208 23:27:19.583125 1998 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:27:19.703305 kubelet[1998]: I0208 23:27:19.703260 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:27:19.707577 systemd[1]: Created slice kubepods-besteffort-podf25a2b99_71dc_42f2_9a41_c96845500443.slice. 
Feb 8 23:27:19.801888 kubelet[1998]: I0208 23:27:19.801845 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f25a2b99-71dc-42f2-9a41-c96845500443-xtables-lock\") pod \"kube-proxy-5znxv\" (UID: \"f25a2b99-71dc-42f2-9a41-c96845500443\") " pod="kube-system/kube-proxy-5znxv" Feb 8 23:27:19.801888 kubelet[1998]: I0208 23:27:19.801889 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f25a2b99-71dc-42f2-9a41-c96845500443-kube-proxy\") pod \"kube-proxy-5znxv\" (UID: \"f25a2b99-71dc-42f2-9a41-c96845500443\") " pod="kube-system/kube-proxy-5znxv" Feb 8 23:27:19.802141 kubelet[1998]: I0208 23:27:19.801921 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f25a2b99-71dc-42f2-9a41-c96845500443-lib-modules\") pod \"kube-proxy-5znxv\" (UID: \"f25a2b99-71dc-42f2-9a41-c96845500443\") " pod="kube-system/kube-proxy-5znxv" Feb 8 23:27:19.802141 kubelet[1998]: I0208 23:27:19.801996 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkdj\" (UniqueName: \"kubernetes.io/projected/f25a2b99-71dc-42f2-9a41-c96845500443-kube-api-access-cxkdj\") pod \"kube-proxy-5znxv\" (UID: \"f25a2b99-71dc-42f2-9a41-c96845500443\") " pod="kube-system/kube-proxy-5znxv" Feb 8 23:27:19.946439 kubelet[1998]: I0208 23:27:19.946408 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:27:19.951166 systemd[1]: Created slice kubepods-burstable-podd33cec7f_32e9_4b1c_8d4f_66de69b84b76.slice. 
Feb 8 23:27:19.984599 kubelet[1998]: E0208 23:27:19.984555 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:20.003379 kubelet[1998]: I0208 23:27:20.003318 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cni-path\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003379 kubelet[1998]: I0208 23:27:20.003377 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-xtables-lock\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003528 kubelet[1998]: I0208 23:27:20.003419 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjxsp\" (UniqueName: \"kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-kube-api-access-tjxsp\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003528 kubelet[1998]: I0208 23:27:20.003444 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-lib-modules\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003528 kubelet[1998]: I0208 23:27:20.003492 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-net\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003630 kubelet[1998]: I0208 23:27:20.003547 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-run\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003630 kubelet[1998]: I0208 23:27:20.003574 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-cgroup\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003630 kubelet[1998]: I0208 23:27:20.003605 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-etc-cni-netd\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003630 kubelet[1998]: I0208 23:27:20.003626 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hubble-tls\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003750 kubelet[1998]: I0208 23:27:20.003649 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-bpf-maps\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " 
pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003750 kubelet[1998]: I0208 23:27:20.003705 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hostproc\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003750 kubelet[1998]: I0208 23:27:20.003736 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-clustermesh-secrets\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003845 kubelet[1998]: I0208 23:27:20.003758 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-config-path\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.003845 kubelet[1998]: I0208 23:27:20.003839 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-kernel\") pod \"cilium-lnsrk\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " pod="kube-system/cilium-lnsrk" Feb 8 23:27:20.191208 kubelet[1998]: I0208 23:27:20.189326 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:27:20.199159 systemd[1]: Created slice kubepods-besteffort-pod22ca0bd5_1fbc_4eeb_b899_2047e1189cc8.slice. 
Feb 8 23:27:20.205905 kubelet[1998]: I0208 23:27:20.205864 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-slwdw\" (UID: \"22ca0bd5-1fbc-4eeb-b899-2047e1189cc8\") " pod="kube-system/cilium-operator-f59cbd8c6-slwdw" Feb 8 23:27:20.206065 kubelet[1998]: I0208 23:27:20.205921 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh68\" (UniqueName: \"kubernetes.io/projected/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-kube-api-access-bmh68\") pod \"cilium-operator-f59cbd8c6-slwdw\" (UID: \"22ca0bd5-1fbc-4eeb-b899-2047e1189cc8\") " pod="kube-system/cilium-operator-f59cbd8c6-slwdw" Feb 8 23:27:20.317548 kubelet[1998]: E0208 23:27:20.317507 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:20.318263 env[1117]: time="2024-02-08T23:27:20.318222536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5znxv,Uid:f25a2b99-71dc-42f2-9a41-c96845500443,Namespace:kube-system,Attempt:0,}" Feb 8 23:27:20.502659 kubelet[1998]: E0208 23:27:20.502534 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:20.503058 env[1117]: time="2024-02-08T23:27:20.503015919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-slwdw,Uid:22ca0bd5-1fbc-4eeb-b899-2047e1189cc8,Namespace:kube-system,Attempt:0,}" Feb 8 23:27:20.555539 kubelet[1998]: E0208 23:27:20.555502 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 8 23:27:20.556074 env[1117]: time="2024-02-08T23:27:20.556021611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnsrk,Uid:d33cec7f-32e9-4b1c-8d4f-66de69b84b76,Namespace:kube-system,Attempt:0,}" Feb 8 23:27:20.895207 env[1117]: time="2024-02-08T23:27:20.895073108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:27:20.895207 env[1117]: time="2024-02-08T23:27:20.895117249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:27:20.895207 env[1117]: time="2024-02-08T23:27:20.895133763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:27:20.895529 env[1117]: time="2024-02-08T23:27:20.895402713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea55c0102df20308c0213066478276175dcefcd1f3a37573ef2fc566784d7744 pid=2127 runtime=io.containerd.runc.v2 Feb 8 23:27:20.905479 systemd[1]: Started cri-containerd-ea55c0102df20308c0213066478276175dcefcd1f3a37573ef2fc566784d7744.scope. 
Feb 8 23:27:20.927164 env[1117]: time="2024-02-08T23:27:20.927126643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5znxv,Uid:f25a2b99-71dc-42f2-9a41-c96845500443,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea55c0102df20308c0213066478276175dcefcd1f3a37573ef2fc566784d7744\"" Feb 8 23:27:20.929295 kubelet[1998]: E0208 23:27:20.927892 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:27:20.931927 env[1117]: time="2024-02-08T23:27:20.931892030Z" level=info msg="CreateContainer within sandbox \"ea55c0102df20308c0213066478276175dcefcd1f3a37573ef2fc566784d7744\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:27:21.757269 env[1117]: time="2024-02-08T23:27:21.757173640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:27:21.757269 env[1117]: time="2024-02-08T23:27:21.757238224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:27:21.757269 env[1117]: time="2024-02-08T23:27:21.757253276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:27:21.757530 env[1117]: time="2024-02-08T23:27:21.757475083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c pid=2168 runtime=io.containerd.runc.v2 Feb 8 23:27:21.773196 systemd[1]: run-containerd-runc-k8s.io-80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c-runc.LAlqtA.mount: Deactivated successfully. Feb 8 23:27:21.775279 systemd[1]: Started cri-containerd-80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c.scope. 
Feb 8 23:27:21.807744 env[1117]: time="2024-02-08T23:27:21.807701226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-slwdw,Uid:22ca0bd5-1fbc-4eeb-b899-2047e1189cc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\""
Feb 8 23:27:21.808297 kubelet[1998]: E0208 23:27:21.808277 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:21.809273 env[1117]: time="2024-02-08T23:27:21.809230628Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 8 23:27:21.846034 env[1117]: time="2024-02-08T23:27:21.845937349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:21.846034 env[1117]: time="2024-02-08T23:27:21.846000039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:21.846034 env[1117]: time="2024-02-08T23:27:21.846013476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:27:21.846924 env[1117]: time="2024-02-08T23:27:21.846860037Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b pid=2208 runtime=io.containerd.runc.v2
Feb 8 23:27:21.856469 systemd[1]: Started cri-containerd-d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b.scope.
Feb 8 23:27:21.881419 env[1117]: time="2024-02-08T23:27:21.881376729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnsrk,Uid:d33cec7f-32e9-4b1c-8d4f-66de69b84b76,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\""
Feb 8 23:27:21.881910 kubelet[1998]: E0208 23:27:21.881889 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:22.231176 env[1117]: time="2024-02-08T23:27:22.231087222Z" level=info msg="CreateContainer within sandbox \"ea55c0102df20308c0213066478276175dcefcd1f3a37573ef2fc566784d7744\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"513065f03780224480666cdee438650fbad380ffac456e6050a5960c03e1d7f0\""
Feb 8 23:27:22.231844 env[1117]: time="2024-02-08T23:27:22.231796401Z" level=info msg="StartContainer for \"513065f03780224480666cdee438650fbad380ffac456e6050a5960c03e1d7f0\""
Feb 8 23:27:22.249538 systemd[1]: Started cri-containerd-513065f03780224480666cdee438650fbad380ffac456e6050a5960c03e1d7f0.scope.
Feb 8 23:27:22.374370 env[1117]: time="2024-02-08T23:27:22.374298484Z" level=info msg="StartContainer for \"513065f03780224480666cdee438650fbad380ffac456e6050a5960c03e1d7f0\" returns successfully"
Feb 8 23:27:23.276166 kubelet[1998]: E0208 23:27:23.276137 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:23.381836 kubelet[1998]: I0208 23:27:23.381790 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5znxv" podStartSLOduration=4.381747688 pod.CreationTimestamp="2024-02-08 23:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:23.381467143 +0000 UTC m=+17.247476265" watchObservedRunningTime="2024-02-08 23:27:23.381747688 +0000 UTC m=+17.247756810"
Feb 8 23:27:24.277613 kubelet[1998]: E0208 23:27:24.277580 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:24.983174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200963782.mount: Deactivated successfully.
Feb 8 23:27:25.620264 env[1117]: time="2024-02-08T23:27:25.620211281Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:25.621950 env[1117]: time="2024-02-08T23:27:25.621908214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:25.623777 env[1117]: time="2024-02-08T23:27:25.623732617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:25.624318 env[1117]: time="2024-02-08T23:27:25.624263868Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 8 23:27:25.625065 env[1117]: time="2024-02-08T23:27:25.624930164Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 8 23:27:25.626539 env[1117]: time="2024-02-08T23:27:25.626505289Z" level=info msg="CreateContainer within sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 8 23:27:25.638920 env[1117]: time="2024-02-08T23:27:25.638861505Z" level=info msg="CreateContainer within sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\""
Feb 8 23:27:25.639430 env[1117]: time="2024-02-08T23:27:25.639384700Z" level=info msg="StartContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\""
Feb 8 23:27:25.655476 systemd[1]: Started cri-containerd-d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6.scope.
Feb 8 23:27:25.679248 env[1117]: time="2024-02-08T23:27:25.679204206Z" level=info msg="StartContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" returns successfully"
Feb 8 23:27:26.288840 kubelet[1998]: E0208 23:27:26.288809 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:27.290233 kubelet[1998]: E0208 23:27:27.290193 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:32.063776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737771838.mount: Deactivated successfully.
Feb 8 23:27:35.356806 env[1117]: time="2024-02-08T23:27:35.356754781Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:35.358430 env[1117]: time="2024-02-08T23:27:35.358377268Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:35.359728 env[1117]: time="2024-02-08T23:27:35.359704501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:35.360127 env[1117]: time="2024-02-08T23:27:35.360100054Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 8 23:27:35.361700 env[1117]: time="2024-02-08T23:27:35.361669827Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:27:35.370403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257879674.mount: Deactivated successfully.
Feb 8 23:27:35.371687 env[1117]: time="2024-02-08T23:27:35.371651558Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\""
Feb 8 23:27:35.372130 env[1117]: time="2024-02-08T23:27:35.372056972Z" level=info msg="StartContainer for \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\""
Feb 8 23:27:35.388028 systemd[1]: Started cri-containerd-efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12.scope.
Feb 8 23:27:35.409420 env[1117]: time="2024-02-08T23:27:35.409374574Z" level=info msg="StartContainer for \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\" returns successfully"
Feb 8 23:27:35.417329 systemd[1]: cri-containerd-efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12.scope: Deactivated successfully.
Feb 8 23:27:35.791648 env[1117]: time="2024-02-08T23:27:35.791600020Z" level=info msg="shim disconnected" id=efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12
Feb 8 23:27:35.791648 env[1117]: time="2024-02-08T23:27:35.791642584Z" level=warning msg="cleaning up after shim disconnected" id=efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12 namespace=k8s.io
Feb 8 23:27:35.791648 env[1117]: time="2024-02-08T23:27:35.791650700Z" level=info msg="cleaning up dead shim"
Feb 8 23:27:35.797464 env[1117]: time="2024-02-08T23:27:35.797406313Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:27:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2474 runtime=io.containerd.runc.v2\n"
Feb 8 23:27:36.305009 kubelet[1998]: E0208 23:27:36.304948 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:36.307007 env[1117]: time="2024-02-08T23:27:36.306923218Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 8 23:27:36.319031 kubelet[1998]: I0208 23:27:36.318390 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-slwdw" podStartSLOduration=-9.223372020536417e+09 pod.CreationTimestamp="2024-02-08 23:27:20 +0000 UTC" firstStartedPulling="2024-02-08 23:27:21.808872678 +0000 UTC m=+15.674881800" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:26.296360424 +0000 UTC m=+20.162369566" watchObservedRunningTime="2024-02-08 23:27:36.31835854 +0000 UTC m=+30.184367662"
Feb 8 23:27:36.323912 env[1117]: time="2024-02-08T23:27:36.323869988Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\""
Feb 8 23:27:36.324385 env[1117]: time="2024-02-08T23:27:36.324348805Z" level=info msg="StartContainer for \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\""
Feb 8 23:27:36.339359 systemd[1]: Started cri-containerd-aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60.scope.
Feb 8 23:27:36.361373 env[1117]: time="2024-02-08T23:27:36.361306510Z" level=info msg="StartContainer for \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\" returns successfully"
Feb 8 23:27:36.370155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12-rootfs.mount: Deactivated successfully.
Feb 8 23:27:36.374242 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:27:36.374852 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:27:36.375088 systemd[1]: Stopping systemd-sysctl.service...
Feb 8 23:27:36.377210 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:27:36.380155 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:27:36.383149 systemd[1]: cri-containerd-aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60.scope: Deactivated successfully.
Feb 8 23:27:36.390161 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:27:36.397710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60-rootfs.mount: Deactivated successfully.
Feb 8 23:27:36.407626 env[1117]: time="2024-02-08T23:27:36.407578808Z" level=info msg="shim disconnected" id=aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60
Feb 8 23:27:36.407807 env[1117]: time="2024-02-08T23:27:36.407628267Z" level=warning msg="cleaning up after shim disconnected" id=aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60 namespace=k8s.io
Feb 8 23:27:36.407807 env[1117]: time="2024-02-08T23:27:36.407642475Z" level=info msg="cleaning up dead shim"
Feb 8 23:27:36.415202 env[1117]: time="2024-02-08T23:27:36.415139783Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:27:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2542 runtime=io.containerd.runc.v2\n"
Feb 8 23:27:37.307629 kubelet[1998]: E0208 23:27:37.307598 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:37.309797 env[1117]: time="2024-02-08T23:27:37.309754808Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 8 23:27:37.326334 env[1117]: time="2024-02-08T23:27:37.326285919Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\""
Feb 8 23:27:37.326872 env[1117]: time="2024-02-08T23:27:37.326849994Z" level=info msg="StartContainer for \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\""
Feb 8 23:27:37.340315 systemd[1]: Started cri-containerd-3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95.scope.
Feb 8 23:27:37.365345 env[1117]: time="2024-02-08T23:27:37.365295303Z" level=info msg="StartContainer for \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\" returns successfully"
Feb 8 23:27:37.365368 systemd[1]: cri-containerd-3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95.scope: Deactivated successfully.
Feb 8 23:27:37.382160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95-rootfs.mount: Deactivated successfully.
Feb 8 23:27:37.387962 env[1117]: time="2024-02-08T23:27:37.387908782Z" level=info msg="shim disconnected" id=3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95
Feb 8 23:27:37.387962 env[1117]: time="2024-02-08T23:27:37.387957959Z" level=warning msg="cleaning up after shim disconnected" id=3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95 namespace=k8s.io
Feb 8 23:27:37.387962 env[1117]: time="2024-02-08T23:27:37.387966546Z" level=info msg="cleaning up dead shim"
Feb 8 23:27:37.395458 env[1117]: time="2024-02-08T23:27:37.395409933Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:27:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2598 runtime=io.containerd.runc.v2\n"
Feb 8 23:27:38.311302 kubelet[1998]: E0208 23:27:38.311269 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:38.314995 env[1117]: time="2024-02-08T23:27:38.314925157Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:27:38.327571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681444681.mount: Deactivated successfully.
Feb 8 23:27:38.331160 env[1117]: time="2024-02-08T23:27:38.331115614Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\""
Feb 8 23:27:38.333092 env[1117]: time="2024-02-08T23:27:38.331633535Z" level=info msg="StartContainer for \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\""
Feb 8 23:27:38.348094 systemd[1]: Started cri-containerd-ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887.scope.
Feb 8 23:27:38.372076 systemd[1]: cri-containerd-ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887.scope: Deactivated successfully.
Feb 8 23:27:38.373503 env[1117]: time="2024-02-08T23:27:38.373390936Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd33cec7f_32e9_4b1c_8d4f_66de69b84b76.slice/cri-containerd-ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887.scope/memory.events\": no such file or directory"
Feb 8 23:27:38.376188 env[1117]: time="2024-02-08T23:27:38.376144908Z" level=info msg="StartContainer for \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\" returns successfully"
Feb 8 23:27:38.391643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887-rootfs.mount: Deactivated successfully.
Feb 8 23:27:38.395872 env[1117]: time="2024-02-08T23:27:38.395825389Z" level=info msg="shim disconnected" id=ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887
Feb 8 23:27:38.396060 env[1117]: time="2024-02-08T23:27:38.395883424Z" level=warning msg="cleaning up after shim disconnected" id=ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887 namespace=k8s.io
Feb 8 23:27:38.396060 env[1117]: time="2024-02-08T23:27:38.395894736Z" level=info msg="cleaning up dead shim"
Feb 8 23:27:38.403034 env[1117]: time="2024-02-08T23:27:38.402966587Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:27:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2652 runtime=io.containerd.runc.v2\n"
Feb 8 23:27:39.314475 kubelet[1998]: E0208 23:27:39.314447 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:39.320785 env[1117]: time="2024-02-08T23:27:39.320735530Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:27:39.347107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107778313.mount: Deactivated successfully.
Feb 8 23:27:39.351163 env[1117]: time="2024-02-08T23:27:39.351110987Z" level=info msg="CreateContainer within sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\""
Feb 8 23:27:39.351717 env[1117]: time="2024-02-08T23:27:39.351647224Z" level=info msg="StartContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\""
Feb 8 23:27:39.367621 systemd[1]: Started cri-containerd-8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397.scope.
Feb 8 23:27:39.397722 env[1117]: time="2024-02-08T23:27:39.397670713Z" level=info msg="StartContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" returns successfully"
Feb 8 23:27:39.414850 systemd[1]: run-containerd-runc-k8s.io-8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397-runc.iZ8UcL.mount: Deactivated successfully.
Feb 8 23:27:39.494831 kubelet[1998]: I0208 23:27:39.494796 1998 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 8 23:27:39.514489 kubelet[1998]: I0208 23:27:39.514443 1998 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:27:39.519165 kubelet[1998]: I0208 23:27:39.518239 1998 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:27:39.521573 systemd[1]: Created slice kubepods-burstable-pod96ff650d_ca04_4530_bc13_0a8fab02a724.slice.
Feb 8 23:27:39.527246 systemd[1]: Created slice kubepods-burstable-podc51f6f03_8b36_44ff_b6fb_8783934d03e5.slice.
Feb 8 23:27:39.541642 kubelet[1998]: I0208 23:27:39.541623 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kf6g\" (UniqueName: \"kubernetes.io/projected/96ff650d-ca04-4530-bc13-0a8fab02a724-kube-api-access-9kf6g\") pod \"coredns-787d4945fb-v8wf7\" (UID: \"96ff650d-ca04-4530-bc13-0a8fab02a724\") " pod="kube-system/coredns-787d4945fb-v8wf7"
Feb 8 23:27:39.541773 kubelet[1998]: I0208 23:27:39.541760 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96ff650d-ca04-4530-bc13-0a8fab02a724-config-volume\") pod \"coredns-787d4945fb-v8wf7\" (UID: \"96ff650d-ca04-4530-bc13-0a8fab02a724\") " pod="kube-system/coredns-787d4945fb-v8wf7"
Feb 8 23:27:39.541869 kubelet[1998]: I0208 23:27:39.541856 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng2lq\" (UniqueName: \"kubernetes.io/projected/c51f6f03-8b36-44ff-b6fb-8783934d03e5-kube-api-access-ng2lq\") pod \"coredns-787d4945fb-gjh7c\" (UID: \"c51f6f03-8b36-44ff-b6fb-8783934d03e5\") " pod="kube-system/coredns-787d4945fb-gjh7c"
Feb 8 23:27:39.541965 kubelet[1998]: I0208 23:27:39.541952 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51f6f03-8b36-44ff-b6fb-8783934d03e5-config-volume\") pod \"coredns-787d4945fb-gjh7c\" (UID: \"c51f6f03-8b36-44ff-b6fb-8783934d03e5\") " pod="kube-system/coredns-787d4945fb-gjh7c"
Feb 8 23:27:39.824907 kubelet[1998]: E0208 23:27:39.824863 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:39.825479 env[1117]: time="2024-02-08T23:27:39.825433880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v8wf7,Uid:96ff650d-ca04-4530-bc13-0a8fab02a724,Namespace:kube-system,Attempt:0,}"
Feb 8 23:27:39.829449 kubelet[1998]: E0208 23:27:39.829427 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:39.829912 env[1117]: time="2024-02-08T23:27:39.829874196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gjh7c,Uid:c51f6f03-8b36-44ff-b6fb-8783934d03e5,Namespace:kube-system,Attempt:0,}"
Feb 8 23:27:40.318992 kubelet[1998]: E0208 23:27:40.318960 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:40.330535 kubelet[1998]: I0208 23:27:40.330504 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lnsrk" podStartSLOduration=-9.223372015524303e+09 pod.CreationTimestamp="2024-02-08 23:27:19 +0000 UTC" firstStartedPulling="2024-02-08 23:27:21.883004875 +0000 UTC m=+15.749013997" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:40.330154371 +0000 UTC m=+34.196163503" watchObservedRunningTime="2024-02-08 23:27:40.330471675 +0000 UTC m=+34.196480787"
Feb 8 23:27:41.302423 systemd-networkd[1019]: cilium_host: Link UP
Feb 8 23:27:41.303226 systemd-networkd[1019]: cilium_net: Link UP
Feb 8 23:27:41.303233 systemd-networkd[1019]: cilium_net: Gained carrier
Feb 8 23:27:41.303376 systemd-networkd[1019]: cilium_host: Gained carrier
Feb 8 23:27:41.310152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 8 23:27:41.309959 systemd-networkd[1019]: cilium_host: Gained IPv6LL
Feb 8 23:27:41.322321 kubelet[1998]: E0208 23:27:41.322239 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:41.387882 systemd-networkd[1019]: cilium_vxlan: Link UP
Feb 8 23:27:41.387896 systemd-networkd[1019]: cilium_vxlan: Gained carrier
Feb 8 23:27:41.567015 kernel: NET: Registered PF_ALG protocol family
Feb 8 23:27:42.075102 systemd-networkd[1019]: lxc_health: Link UP
Feb 8 23:27:42.075499 systemd-networkd[1019]: lxc_health: Gained carrier
Feb 8 23:27:42.075999 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:27:42.270168 systemd-networkd[1019]: cilium_net: Gained IPv6LL
Feb 8 23:27:42.322812 kubelet[1998]: E0208 23:27:42.322632 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:42.367719 systemd-networkd[1019]: lxcf09304d8d422: Link UP
Feb 8 23:27:42.382070 kernel: eth0: renamed from tmp21979
Feb 8 23:27:42.395867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:27:42.395928 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf09304d8d422: link becomes ready
Feb 8 23:27:42.395907 systemd-networkd[1019]: tmp80b4f: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:27:42.396082 systemd-networkd[1019]: tmp80b4f: Cannot enable IPv6, ignoring: No such file or directory
Feb 8 23:27:42.396114 systemd-networkd[1019]: tmp80b4f: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory
Feb 8 23:27:42.396125 systemd-networkd[1019]: tmp80b4f: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory
Feb 8 23:27:42.396136 systemd-networkd[1019]: tmp80b4f: Cannot set IPv6 proxy NDP, ignoring: No such file or directory
Feb 8 23:27:42.396149 systemd-networkd[1019]: tmp80b4f: Cannot enable promote_secondaries for interface, ignoring: No such file or directory
Feb 8 23:27:42.396991 kernel: eth0: renamed from tmp80b4f
Feb 8 23:27:42.401384 systemd-networkd[1019]: lxcfbde2c26c38a: Link UP
Feb 8 23:27:42.401657 systemd-networkd[1019]: lxcf09304d8d422: Gained carrier
Feb 8 23:27:42.401888 systemd-networkd[1019]: cilium_vxlan: Gained IPv6LL
Feb 8 23:27:42.402577 systemd-networkd[1019]: lxcfbde2c26c38a: Gained carrier
Feb 8 23:27:42.403064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfbde2c26c38a: link becomes ready
Feb 8 23:27:43.324390 kubelet[1998]: E0208 23:27:43.324366 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:43.550147 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Feb 8 23:27:44.126135 systemd-networkd[1019]: lxcf09304d8d422: Gained IPv6LL
Feb 8 23:27:44.190073 systemd-networkd[1019]: lxcfbde2c26c38a: Gained IPv6LL
Feb 8 23:27:44.326667 kubelet[1998]: E0208 23:27:44.326639 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:45.703128 env[1117]: time="2024-02-08T23:27:45.703028015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:45.703128 env[1117]: time="2024-02-08T23:27:45.703092341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:45.703128 env[1117]: time="2024-02-08T23:27:45.703103763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:27:45.703495 env[1117]: time="2024-02-08T23:27:45.703314666Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21979dd83170d611d0f11071fb91f8934635fd835855e7b096ffed344be86252 pid=3224 runtime=io.containerd.runc.v2
Feb 8 23:27:45.710005 env[1117]: time="2024-02-08T23:27:45.705889200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:45.710005 env[1117]: time="2024-02-08T23:27:45.705924819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:45.710005 env[1117]: time="2024-02-08T23:27:45.705934448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:27:45.710005 env[1117]: time="2024-02-08T23:27:45.706096806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80b4f010259055b868a79a09f347a017d06862ba0afa3da002f9a3b5484778ad pid=3242 runtime=io.containerd.runc.v2
Feb 8 23:27:45.715873 systemd[1]: run-containerd-runc-k8s.io-21979dd83170d611d0f11071fb91f8934635fd835855e7b096ffed344be86252-runc.xw01W1.mount: Deactivated successfully.
Feb 8 23:27:45.723890 systemd[1]: Started cri-containerd-21979dd83170d611d0f11071fb91f8934635fd835855e7b096ffed344be86252.scope.
Feb 8 23:27:45.726346 systemd[1]: Started cri-containerd-80b4f010259055b868a79a09f347a017d06862ba0afa3da002f9a3b5484778ad.scope.
Feb 8 23:27:45.734138 systemd-resolved[1064]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 8 23:27:45.739365 systemd-resolved[1064]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 8 23:27:45.760564 env[1117]: time="2024-02-08T23:27:45.760513517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gjh7c,Uid:c51f6f03-8b36-44ff-b6fb-8783934d03e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"21979dd83170d611d0f11071fb91f8934635fd835855e7b096ffed344be86252\""
Feb 8 23:27:45.761214 kubelet[1998]: E0208 23:27:45.761187 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:45.763426 env[1117]: time="2024-02-08T23:27:45.763335435Z" level=info msg="CreateContainer within sandbox \"21979dd83170d611d0f11071fb91f8934635fd835855e7b096ffed344be86252\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 8 23:27:45.767637 env[1117]: time="2024-02-08T23:27:45.767602748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v8wf7,Uid:96ff650d-ca04-4530-bc13-0a8fab02a724,Namespace:kube-system,Attempt:0,} returns sandbox id \"80b4f010259055b868a79a09f347a017d06862ba0afa3da002f9a3b5484778ad\""
Feb 8 23:27:45.768425 kubelet[1998]: E0208 23:27:45.768398 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:45.770270 env[1117]: time="2024-02-08T23:27:45.770232851Z" level=info msg="CreateContainer within sandbox \"80b4f010259055b868a79a09f347a017d06862ba0afa3da002f9a3b5484778ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 8 23:27:45.795987 env[1117]: time="2024-02-08T23:27:45.795928644Z" level=info msg="CreateContainer within sandbox \"80b4f010259055b868a79a09f347a017d06862ba0afa3da002f9a3b5484778ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a4158a0c9f2ade51a10472123526edcf8ad17c5668154d8f71fa815073f3d3c\""
Feb 8 23:27:45.796580 env[1117]: time="2024-02-08T23:27:45.796411138Z" level=info msg="StartContainer for \"1a4158a0c9f2ade51a10472123526edcf8ad17c5668154d8f71fa815073f3d3c\""
Feb 8 23:27:45.796693 env[1117]: time="2024-02-08T23:27:45.796653090Z" level=info msg="CreateContainer within sandbox \"21979dd83170d611d0f11071fb91f8934635fd835855e7b096ffed344be86252\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b8ed1b7df64bcb0c737c3c201e32ecb5be44c9f6b4ab1359daaf768e46fd5eb\""
Feb 8 23:27:45.798532 env[1117]: time="2024-02-08T23:27:45.798482227Z" level=info msg="StartContainer for \"7b8ed1b7df64bcb0c737c3c201e32ecb5be44c9f6b4ab1359daaf768e46fd5eb\""
Feb 8 23:27:45.821264 systemd[1]: Started cri-containerd-1a4158a0c9f2ade51a10472123526edcf8ad17c5668154d8f71fa815073f3d3c.scope.
Feb 8 23:27:45.827389 systemd[1]: Started cri-containerd-7b8ed1b7df64bcb0c737c3c201e32ecb5be44c9f6b4ab1359daaf768e46fd5eb.scope.
Feb 8 23:27:45.948029 env[1117]: time="2024-02-08T23:27:45.947953157Z" level=info msg="StartContainer for \"7b8ed1b7df64bcb0c737c3c201e32ecb5be44c9f6b4ab1359daaf768e46fd5eb\" returns successfully"
Feb 8 23:27:46.082647 env[1117]: time="2024-02-08T23:27:46.082502800Z" level=info msg="StartContainer for \"1a4158a0c9f2ade51a10472123526edcf8ad17c5668154d8f71fa815073f3d3c\" returns successfully"
Feb 8 23:27:46.331417 kubelet[1998]: E0208 23:27:46.331384 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:46.334459 kubelet[1998]: E0208 23:27:46.334379 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:46.427388 kubelet[1998]: I0208 23:27:46.427339 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-gjh7c" podStartSLOduration=26.427299138 pod.CreationTimestamp="2024-02-08 23:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:46.427058457 +0000 UTC m=+40.293067590" watchObservedRunningTime="2024-02-08 23:27:46.427299138 +0000 UTC m=+40.293308250"
Feb 8 23:27:46.558570 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:42792.service.
Feb 8 23:27:46.624361 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 42792 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:27:46.625367 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:27:46.629756 kubelet[1998]: I0208 23:27:46.629232 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-v8wf7" podStartSLOduration=26.629181827 pod.CreationTimestamp="2024-02-08 23:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:46.62748911 +0000 UTC m=+40.493498252" watchObservedRunningTime="2024-02-08 23:27:46.629181827 +0000 UTC m=+40.495190949"
Feb 8 23:27:46.633829 systemd-logind[1104]: New session 6 of user core.
Feb 8 23:27:46.634703 systemd[1]: Started session-6.scope.
Feb 8 23:27:46.785055 sshd[3402]: pam_unix(sshd:session): session closed for user core
Feb 8 23:27:46.787240 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:42792.service: Deactivated successfully.
Feb 8 23:27:46.787998 systemd[1]: session-6.scope: Deactivated successfully.
Feb 8 23:27:46.788579 systemd-logind[1104]: Session 6 logged out. Waiting for processes to exit.
Feb 8 23:27:46.789455 systemd-logind[1104]: Removed session 6.
Feb 8 23:27:47.336916 kubelet[1998]: E0208 23:27:47.336864 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:47.337333 kubelet[1998]: E0208 23:27:47.337149 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:48.338588 kubelet[1998]: E0208 23:28:48.338555 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:48.339005 kubelet[1998]: E0208 23:27:48.338677 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:27:51.790884 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:36354.service.
Feb 8 23:27:51.832717 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:27:51.834059 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:27:51.837667 systemd-logind[1104]: New session 7 of user core.
Feb 8 23:27:51.838662 systemd[1]: Started session-7.scope.
Feb 8 23:27:51.959793 sshd[3472]: pam_unix(sshd:session): session closed for user core
Feb 8 23:27:51.962839 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:36354.service: Deactivated successfully.
Feb 8 23:27:51.963617 systemd[1]: session-7.scope: Deactivated successfully.
Feb 8 23:27:51.964538 systemd-logind[1104]: Session 7 logged out. Waiting for processes to exit.
Feb 8 23:27:51.965292 systemd-logind[1104]: Removed session 7.
Feb 8 23:27:56.964392 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:36358.service.
Feb 8 23:27:57.004367 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 36358 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:27:57.005454 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:27:57.009108 systemd-logind[1104]: New session 8 of user core.
Feb 8 23:27:57.010046 systemd[1]: Started session-8.scope.
Feb 8 23:27:57.114760 sshd[3490]: pam_unix(sshd:session): session closed for user core
Feb 8 23:27:57.117072 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:36358.service: Deactivated successfully.
Feb 8 23:27:57.117730 systemd[1]: session-8.scope: Deactivated successfully.
Feb 8 23:27:57.118290 systemd-logind[1104]: Session 8 logged out. Waiting for processes to exit.
Feb 8 23:27:57.119037 systemd-logind[1104]: Removed session 8.
Feb 8 23:28:02.119116 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:55310.service.
Feb 8 23:28:02.157622 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 55310 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:02.158786 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:02.162250 systemd-logind[1104]: New session 9 of user core.
Feb 8 23:28:02.163251 systemd[1]: Started session-9.scope.
Feb 8 23:28:02.273029 sshd[3504]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:02.275106 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:55310.service: Deactivated successfully.
Feb 8 23:28:02.275904 systemd[1]: session-9.scope: Deactivated successfully.
Feb 8 23:28:02.276761 systemd-logind[1104]: Session 9 logged out. Waiting for processes to exit.
Feb 8 23:28:02.277417 systemd-logind[1104]: Removed session 9.
Feb 8 23:28:07.277225 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:55324.service.
Feb 8 23:28:07.314201 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 55324 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:07.315158 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:07.318453 systemd-logind[1104]: New session 10 of user core.
Feb 8 23:28:07.319507 systemd[1]: Started session-10.scope.
Feb 8 23:28:07.424572 sshd[3521]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:07.426583 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:55324.service: Deactivated successfully.
Feb 8 23:28:07.427286 systemd[1]: session-10.scope: Deactivated successfully.
Feb 8 23:28:07.428004 systemd-logind[1104]: Session 10 logged out. Waiting for processes to exit.
Feb 8 23:28:07.428671 systemd-logind[1104]: Removed session 10.
Feb 8 23:28:12.429123 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:36306.service.
Feb 8 23:28:12.469989 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 36306 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:12.471087 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:12.474118 systemd-logind[1104]: New session 11 of user core.
Feb 8 23:28:12.474818 systemd[1]: Started session-11.scope.
Feb 8 23:28:12.576085 sshd[3535]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:12.578469 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:36306.service: Deactivated successfully.
Feb 8 23:28:12.578919 systemd[1]: session-11.scope: Deactivated successfully.
Feb 8 23:28:12.579479 systemd-logind[1104]: Session 11 logged out. Waiting for processes to exit.
Feb 8 23:28:12.580250 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:36314.service.
Feb 8 23:28:12.580950 systemd-logind[1104]: Removed session 11.
Feb 8 23:28:12.616509 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 36314 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:12.617609 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:12.620579 systemd-logind[1104]: New session 12 of user core.
Feb 8 23:28:12.621403 systemd[1]: Started session-12.scope.
Feb 8 23:28:13.398026 sshd[3549]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:13.401577 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:36314.service: Deactivated successfully.
Feb 8 23:28:13.402295 systemd[1]: session-12.scope: Deactivated successfully.
Feb 8 23:28:13.403756 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:36316.service.
Feb 8 23:28:13.404013 systemd-logind[1104]: Session 12 logged out. Waiting for processes to exit.
Feb 8 23:28:13.406355 systemd-logind[1104]: Removed session 12.
Feb 8 23:28:13.443922 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 36316 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:13.444873 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:13.447739 systemd-logind[1104]: New session 13 of user core.
Feb 8 23:28:13.448762 systemd[1]: Started session-13.scope.
Feb 8 23:28:13.547135 sshd[3560]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:13.548937 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:36316.service: Deactivated successfully.
Feb 8 23:28:13.549593 systemd[1]: session-13.scope: Deactivated successfully.
Feb 8 23:28:13.550129 systemd-logind[1104]: Session 13 logged out. Waiting for processes to exit.
Feb 8 23:28:13.550729 systemd-logind[1104]: Removed session 13.
Feb 8 23:28:18.551910 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:40408.service.
Feb 8 23:28:18.590102 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 40408 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:18.591152 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:18.594341 systemd-logind[1104]: New session 14 of user core.
Feb 8 23:28:18.595149 systemd[1]: Started session-14.scope.
Feb 8 23:28:18.701803 sshd[3573]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:18.704034 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:40408.service: Deactivated successfully.
Feb 8 23:28:18.704715 systemd[1]: session-14.scope: Deactivated successfully.
Feb 8 23:28:18.705279 systemd-logind[1104]: Session 14 logged out. Waiting for processes to exit.
Feb 8 23:28:18.705963 systemd-logind[1104]: Removed session 14.
Feb 8 23:28:19.242209 kubelet[1998]: E0208 23:28:19.242173 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:28:23.705932 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:40422.service.
Feb 8 23:28:23.741779 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 40422 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:23.742840 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:23.746136 systemd-logind[1104]: New session 15 of user core.
Feb 8 23:28:23.747115 systemd[1]: Started session-15.scope.
Feb 8 23:28:23.848511 sshd[3589]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:23.851351 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:40422.service: Deactivated successfully.
Feb 8 23:28:23.852046 systemd[1]: session-15.scope: Deactivated successfully.
Feb 8 23:28:23.852594 systemd-logind[1104]: Session 15 logged out. Waiting for processes to exit.
Feb 8 23:28:23.853677 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:40430.service.
Feb 8 23:28:23.854831 systemd-logind[1104]: Removed session 15.
Feb 8 23:28:23.889278 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 40430 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:23.890328 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:23.893383 systemd-logind[1104]: New session 16 of user core.
Feb 8 23:28:23.894160 systemd[1]: Started session-16.scope.
Feb 8 23:28:24.195561 sshd[3602]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:24.198191 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:40430.service: Deactivated successfully.
Feb 8 23:28:24.198673 systemd[1]: session-16.scope: Deactivated successfully.
Feb 8 23:28:24.199274 systemd-logind[1104]: Session 16 logged out. Waiting for processes to exit.
Feb 8 23:28:24.200366 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:40436.service.
Feb 8 23:28:24.201140 systemd-logind[1104]: Removed session 16.
Feb 8 23:28:24.237669 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 40436 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:24.238641 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:24.241919 systemd-logind[1104]: New session 17 of user core.
Feb 8 23:28:24.243071 systemd[1]: Started session-17.scope.
Feb 8 23:28:25.163612 sshd[3614]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:25.167111 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:40448.service.
Feb 8 23:28:25.167523 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:40436.service: Deactivated successfully.
Feb 8 23:28:25.173610 systemd[1]: session-17.scope: Deactivated successfully.
Feb 8 23:28:25.174480 systemd-logind[1104]: Session 17 logged out. Waiting for processes to exit.
Feb 8 23:28:25.175556 systemd-logind[1104]: Removed session 17.
Feb 8 23:28:25.206210 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 40448 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:25.207662 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:25.212164 systemd[1]: Started session-18.scope.
Feb 8 23:28:25.213327 systemd-logind[1104]: New session 18 of user core.
Feb 8 23:28:25.450759 sshd[3642]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:25.453604 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:40448.service: Deactivated successfully.
Feb 8 23:28:25.454097 systemd[1]: session-18.scope: Deactivated successfully.
Feb 8 23:28:25.454728 systemd-logind[1104]: Session 18 logged out. Waiting for processes to exit.
Feb 8 23:28:25.455589 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:40456.service.
Feb 8 23:28:25.456436 systemd-logind[1104]: Removed session 18.
Feb 8 23:28:25.491082 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:25.492032 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:25.495288 systemd-logind[1104]: New session 19 of user core.
Feb 8 23:28:25.496221 systemd[1]: Started session-19.scope.
Feb 8 23:28:25.600907 sshd[3694]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:25.602897 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:40456.service: Deactivated successfully.
Feb 8 23:28:25.603567 systemd[1]: session-19.scope: Deactivated successfully.
Feb 8 23:28:25.604001 systemd-logind[1104]: Session 19 logged out. Waiting for processes to exit.
Feb 8 23:28:25.604611 systemd-logind[1104]: Removed session 19.
Feb 8 23:28:30.605877 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:40124.service.
Feb 8 23:28:30.643285 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 40124 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:30.644229 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:30.647766 systemd-logind[1104]: New session 20 of user core.
Feb 8 23:28:30.648511 systemd[1]: Started session-20.scope.
Feb 8 23:28:30.747435 sshd[3707]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:30.749494 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:40124.service: Deactivated successfully.
Feb 8 23:28:30.750171 systemd[1]: session-20.scope: Deactivated successfully.
Feb 8 23:28:30.750790 systemd-logind[1104]: Session 20 logged out. Waiting for processes to exit.
Feb 8 23:28:30.751547 systemd-logind[1104]: Removed session 20.
Feb 8 23:28:35.751424 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:40138.service.
Feb 8 23:28:35.786689 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 40138 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:35.787677 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:35.790927 systemd-logind[1104]: New session 21 of user core.
Feb 8 23:28:35.792029 systemd[1]: Started session-21.scope.
Feb 8 23:28:35.892420 sshd[3747]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:35.894360 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:40138.service: Deactivated successfully.
Feb 8 23:28:35.894994 systemd[1]: session-21.scope: Deactivated successfully.
Feb 8 23:28:35.895547 systemd-logind[1104]: Session 21 logged out. Waiting for processes to exit.
Feb 8 23:28:35.896278 systemd-logind[1104]: Removed session 21.
Feb 8 23:28:40.896411 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:43382.service.
Feb 8 23:28:40.932047 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 43382 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:40.933256 sshd[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:40.936781 systemd-logind[1104]: New session 22 of user core.
Feb 8 23:28:40.937598 systemd[1]: Started session-22.scope.
Feb 8 23:28:41.038397 sshd[3761]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:41.040818 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:43382.service: Deactivated successfully.
Feb 8 23:28:41.041518 systemd[1]: session-22.scope: Deactivated successfully.
Feb 8 23:28:41.042207 systemd-logind[1104]: Session 22 logged out. Waiting for processes to exit.
Feb 8 23:28:41.042841 systemd-logind[1104]: Removed session 22.
Feb 8 23:28:42.242049 kubelet[1998]: E0208 23:28:42.242015 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:28:43.241763 kubelet[1998]: E0208 23:28:43.241718 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:28:43.241942 kubelet[1998]: E0208 23:28:43.241817 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:28:46.042560 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:43392.service.
Feb 8 23:28:46.078928 sshd[3774]: Accepted publickey for core from 10.0.0.1 port 43392 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:46.080340 sshd[3774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:46.084410 systemd-logind[1104]: New session 23 of user core.
Feb 8 23:28:46.085499 systemd[1]: Started session-23.scope.
Feb 8 23:28:46.194567 sshd[3774]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:46.196956 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:43392.service: Deactivated successfully.
Feb 8 23:28:46.197732 systemd[1]: session-23.scope: Deactivated successfully.
Feb 8 23:28:46.198654 systemd-logind[1104]: Session 23 logged out. Waiting for processes to exit.
Feb 8 23:28:46.199416 systemd-logind[1104]: Removed session 23.
Feb 8 23:28:46.242224 kubelet[1998]: E0208 23:28:46.242184 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:28:49.242439 kubelet[1998]: E0208 23:28:49.242398 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 8 23:28:51.198734 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:39110.service.
Feb 8 23:28:51.233725 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 39110 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:51.234871 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:51.238313 systemd-logind[1104]: New session 24 of user core.
Feb 8 23:28:51.239330 systemd[1]: Started session-24.scope.
Feb 8 23:28:51.340140 sshd[3787]: pam_unix(sshd:session): session closed for user core
Feb 8 23:28:51.345029 systemd[1]: Started sshd@24-10.0.0.113:22-10.0.0.1:39112.service.
Feb 8 23:28:51.346478 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:39110.service: Deactivated successfully.
Feb 8 23:28:51.347043 systemd[1]: session-24.scope: Deactivated successfully.
Feb 8 23:28:51.347583 systemd-logind[1104]: Session 24 logged out. Waiting for processes to exit.
Feb 8 23:28:51.348312 systemd-logind[1104]: Removed session 24.
Feb 8 23:28:51.381495 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 39112 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s
Feb 8 23:28:51.382826 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:28:51.386417 systemd-logind[1104]: New session 25 of user core.
Feb 8 23:28:51.387517 systemd[1]: Started session-25.scope.
Feb 8 23:28:52.835351 env[1117]: time="2024-02-08T23:28:52.835303208Z" level=info msg="StopContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" with timeout 30 (s)"
Feb 8 23:28:52.835767 env[1117]: time="2024-02-08T23:28:52.835717327Z" level=info msg="Stop container \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" with signal terminated"
Feb 8 23:28:52.851559 systemd[1]: cri-containerd-d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6.scope: Deactivated successfully.
Feb 8 23:28:52.861341 env[1117]: time="2024-02-08T23:28:52.861280531Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:28:52.866324 env[1117]: time="2024-02-08T23:28:52.866291949Z" level=info msg="StopContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" with timeout 1 (s)"
Feb 8 23:28:52.866883 env[1117]: time="2024-02-08T23:28:52.866837058Z" level=info msg="Stop container \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" with signal terminated"
Feb 8 23:28:52.872891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6-rootfs.mount: Deactivated successfully.
Feb 8 23:28:52.874334 systemd-networkd[1019]: lxc_health: Link DOWN
Feb 8 23:28:52.874342 systemd-networkd[1019]: lxc_health: Lost carrier
Feb 8 23:28:52.909401 systemd[1]: cri-containerd-8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397.scope: Deactivated successfully.
Feb 8 23:28:52.909726 systemd[1]: cri-containerd-8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397.scope: Consumed 6.161s CPU time.
Feb 8 23:28:52.926717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397-rootfs.mount: Deactivated successfully.
Feb 8 23:28:53.061507 env[1117]: time="2024-02-08T23:28:53.061456749Z" level=info msg="shim disconnected" id=d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6
Feb 8 23:28:53.061837 env[1117]: time="2024-02-08T23:28:53.061813780Z" level=warning msg="cleaning up after shim disconnected" id=d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6 namespace=k8s.io
Feb 8 23:28:53.061837 env[1117]: time="2024-02-08T23:28:53.061831002Z" level=info msg="cleaning up dead shim"
Feb 8 23:28:53.061935 env[1117]: time="2024-02-08T23:28:53.061463212Z" level=info msg="shim disconnected" id=8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397
Feb 8 23:28:53.061964 env[1117]: time="2024-02-08T23:28:53.061921585Z" level=warning msg="cleaning up after shim disconnected" id=8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397 namespace=k8s.io
Feb 8 23:28:53.061964 env[1117]: time="2024-02-08T23:28:53.061948146Z" level=info msg="cleaning up dead shim"
Feb 8 23:28:53.068238 env[1117]: time="2024-02-08T23:28:53.068201643Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3875 runtime=io.containerd.runc.v2\n"
Feb 8 23:28:53.068660 env[1117]: time="2024-02-08T23:28:53.068607065Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3876 runtime=io.containerd.runc.v2\n"
Feb 8 23:28:53.194387 env[1117]: time="2024-02-08T23:28:53.194346551Z" level=info msg="StopContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" returns successfully"
Feb 8 23:28:53.194532 env[1117]: time="2024-02-08T23:28:53.194447473Z" level=info msg="StopContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" returns successfully"
Feb 8 23:28:53.195178 env[1117]: time="2024-02-08T23:28:53.195144511Z" level=info msg="StopPodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\""
Feb 8 23:28:53.195228 env[1117]: time="2024-02-08T23:28:53.195208984Z" level=info msg="StopPodSandbox for \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\""
Feb 8 23:28:53.195258 env[1117]: time="2024-02-08T23:28:53.195230966Z" level=info msg="Container to stop \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:28:53.195284 env[1117]: time="2024-02-08T23:28:53.195252508Z" level=info msg="Container to stop \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:28:53.195284 env[1117]: time="2024-02-08T23:28:53.195261995Z" level=info msg="Container to stop \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:28:53.195284 env[1117]: time="2024-02-08T23:28:53.195269610Z" level=info msg="Container to stop \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:28:53.195376 env[1117]: time="2024-02-08T23:28:53.195285921Z" level=info msg="Container to stop \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:28:53.195376 env[1117]: time="2024-02-08T23:28:53.195300058Z" level=info msg="Container to stop \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:28:53.196773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b-shm.mount: Deactivated successfully.
Feb 8 23:28:53.196875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c-shm.mount: Deactivated successfully.
Feb 8 23:28:53.201115 systemd[1]: cri-containerd-80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c.scope: Deactivated successfully.
Feb 8 23:28:53.202823 systemd[1]: cri-containerd-d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b.scope: Deactivated successfully.
Feb 8 23:28:53.424697 env[1117]: time="2024-02-08T23:28:53.424613852Z" level=info msg="shim disconnected" id=d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b
Feb 8 23:28:53.424697 env[1117]: time="2024-02-08T23:28:53.424674396Z" level=warning msg="cleaning up after shim disconnected" id=d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b namespace=k8s.io
Feb 8 23:28:53.424697 env[1117]: time="2024-02-08T23:28:53.424687181Z" level=info msg="cleaning up dead shim"
Feb 8 23:28:53.424996 env[1117]: time="2024-02-08T23:28:53.424923702Z" level=info msg="shim disconnected" id=80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c
Feb 8 23:28:53.424996 env[1117]: time="2024-02-08T23:28:53.424987263Z" level=warning msg="cleaning up after shim disconnected" id=80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c namespace=k8s.io
Feb 8 23:28:53.425068 env[1117]: time="2024-02-08T23:28:53.424999145Z" level=info msg="cleaning up dead shim"
Feb 8 23:28:53.431917 env[1117]: time="2024-02-08T23:28:53.431872925Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3939 runtime=io.containerd.runc.v2\n"
Feb 8 23:28:53.432225 env[1117]: time="2024-02-08T23:28:53.432197713Z" level=info msg="TearDown network for sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" successfully"
Feb 8 23:28:53.432284 env[1117]: time="2024-02-08T23:28:53.432194207Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3934 runtime=io.containerd.runc.v2\n"
Feb 8 23:28:53.432336 env[1117]: time="2024-02-08T23:28:53.432221008Z" level=info msg="StopPodSandbox for \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" returns successfully"
Feb 8 23:28:53.432618 env[1117]: time="2024-02-08T23:28:53.432579782Z" level=info msg="TearDown network for sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" successfully"
Feb 8 23:28:53.432667 env[1117]: time="2024-02-08T23:28:53.432617553Z" level=info msg="StopPodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" returns successfully"
Feb 8 23:28:53.447418 kubelet[1998]: I0208 23:28:53.447306 1998 scope.go:115] "RemoveContainer" containerID="d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6"
Feb 8 23:28:53.448881 env[1117]: time="2024-02-08T23:28:53.448808659Z" level=info msg="RemoveContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\""
Feb 8 23:28:53.543826 env[1117]: time="2024-02-08T23:28:53.543762227Z" level=info msg="RemoveContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" returns successfully"
Feb 8 23:28:53.544146 kubelet[1998]: I0208 23:28:53.544103 1998 scope.go:115] "RemoveContainer" containerID="d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6"
Feb 8 23:28:53.544461 env[1117]: time="2024-02-08T23:28:53.544364696Z" level=error msg="ContainerStatus for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\": not found"
Feb 8 23:28:53.544582 kubelet[1998]: E0208 23:28:53.544564 1998 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\": not found" containerID="d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6"
Feb 8 23:28:53.544676 kubelet[1998]: I0208 23:28:53.544595 1998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6} err="failed to get container status \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\": not found"
Feb 8 23:28:53.544676 kubelet[1998]: I0208 23:28:53.544605 1998 scope.go:115] "RemoveContainer" containerID="8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397"
Feb 8 23:28:53.545957 env[1117]: time="2024-02-08T23:28:53.545906323Z" level=info msg="RemoveContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\""
Feb 8 23:28:53.595810 env[1117]: time="2024-02-08T23:28:53.595750760Z" level=info msg="RemoveContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" returns successfully"
Feb 8 23:28:53.596215 kubelet[1998]: I0208 23:28:53.596100 1998 scope.go:115] "RemoveContainer" containerID="ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887"
Feb 8 23:28:53.597473 env[1117]: time="2024-02-08T23:28:53.597432674Z" level=info msg="RemoveContainer for \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\""
Feb 8 23:28:53.607787 kubelet[1998]: I0208 23:28:53.607592 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hostproc\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") "
Feb 8 23:28:53.607787 kubelet[1998]: I0208 23:28:53.607644 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-bpf-maps\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") "
Feb 8 23:28:53.607787 kubelet[1998]: I0208 23:28:53.607683 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmh68\" (UniqueName: \"kubernetes.io/projected/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-kube-api-access-bmh68\") pod \"22ca0bd5-1fbc-4eeb-b899-2047e1189cc8\" (UID: \"22ca0bd5-1fbc-4eeb-b899-2047e1189cc8\") "
Feb 8 23:28:53.607787 kubelet[1998]: I0208 23:28:53.607712 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-xtables-lock\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") "
Feb 8 23:28:53.607787 kubelet[1998]: I0208 23:28:53.607739 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-etc-cni-netd\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") "
Feb 8 23:28:53.607787 kubelet[1998]: I0208 23:28:53.607745 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hostproc" (OuterVolumeSpecName: "hostproc") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.607958 kubelet[1998]: I0208 23:28:53.607770 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-cilium-config-path\") pod \"22ca0bd5-1fbc-4eeb-b899-2047e1189cc8\" (UID: \"22ca0bd5-1fbc-4eeb-b899-2047e1189cc8\") " Feb 8 23:28:53.607958 kubelet[1998]: I0208 23:28:53.607842 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-kernel\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.607958 kubelet[1998]: I0208 23:28:53.607870 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-config-path\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.607958 kubelet[1998]: I0208 23:28:53.607891 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hubble-tls\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.607958 kubelet[1998]: I0208 23:28:53.607908 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-clustermesh-secrets\") pod 
\"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.607958 kubelet[1998]: I0208 23:28:53.607926 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-run\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.608132 kubelet[1998]: I0208 23:28:53.607945 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-lib-modules\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.608132 kubelet[1998]: I0208 23:28:53.607961 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cni-path\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.608132 kubelet[1998]: I0208 23:28:53.607994 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjxsp\" (UniqueName: \"kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-kube-api-access-tjxsp\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.608132 kubelet[1998]: I0208 23:28:53.608011 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-cgroup\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.608132 kubelet[1998]: I0208 23:28:53.608028 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-net\") pod \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\" (UID: \"d33cec7f-32e9-4b1c-8d4f-66de69b84b76\") " Feb 8 23:28:53.608132 kubelet[1998]: W0208 23:28:53.608028 1998 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:28:53.608132 kubelet[1998]: I0208 23:28:53.608072 1998 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.608279 kubelet[1998]: I0208 23:28:53.608091 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.608279 kubelet[1998]: I0208 23:28:53.608108 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.608330 kubelet[1998]: W0208 23:28:53.608282 1998 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d33cec7f-32e9-4b1c-8d4f-66de69b84b76/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:28:53.610521 kubelet[1998]: I0208 23:28:53.610495 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:28:53.611059 kubelet[1998]: I0208 23:28:53.610949 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.611169 kubelet[1998]: I0208 23:28:53.611132 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22ca0bd5-1fbc-4eeb-b899-2047e1189cc8" (UID: "22ca0bd5-1fbc-4eeb-b899-2047e1189cc8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:28:53.611169 kubelet[1998]: I0208 23:28:53.611167 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.611391 kubelet[1998]: I0208 23:28:53.611188 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.611391 kubelet[1998]: I0208 23:28:53.611195 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.611391 kubelet[1998]: I0208 23:28:53.611216 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.611391 kubelet[1998]: I0208 23:28:53.611246 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cni-path" (OuterVolumeSpecName: "cni-path") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.611391 kubelet[1998]: I0208 23:28:53.611270 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:53.613723 kubelet[1998]: I0208 23:28:53.613695 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:28:53.614931 kubelet[1998]: I0208 23:28:53.614906 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-kube-api-access-tjxsp" (OuterVolumeSpecName: "kube-api-access-tjxsp") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "kube-api-access-tjxsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:28:53.615146 kubelet[1998]: I0208 23:28:53.615114 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-kube-api-access-bmh68" (OuterVolumeSpecName: "kube-api-access-bmh68") pod "22ca0bd5-1fbc-4eeb-b899-2047e1189cc8" (UID: "22ca0bd5-1fbc-4eeb-b899-2047e1189cc8"). InnerVolumeSpecName "kube-api-access-bmh68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:28:53.615229 kubelet[1998]: I0208 23:28:53.615195 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d33cec7f-32e9-4b1c-8d4f-66de69b84b76" (UID: "d33cec7f-32e9-4b1c-8d4f-66de69b84b76"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:28:53.636587 env[1117]: time="2024-02-08T23:28:53.636531513Z" level=info msg="RemoveContainer for \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\" returns successfully" Feb 8 23:28:53.636894 kubelet[1998]: I0208 23:28:53.636867 1998 scope.go:115] "RemoveContainer" containerID="3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95" Feb 8 23:28:53.638269 env[1117]: time="2024-02-08T23:28:53.638230321Z" level=info msg="RemoveContainer for \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\"" Feb 8 23:28:53.688986 env[1117]: time="2024-02-08T23:28:53.688908346Z" level=info msg="RemoveContainer for \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\" returns successfully" Feb 8 23:28:53.689221 kubelet[1998]: I0208 23:28:53.689199 1998 scope.go:115] "RemoveContainer" containerID="aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60" Feb 8 23:28:53.690284 env[1117]: time="2024-02-08T23:28:53.690251525Z" level=info msg="RemoveContainer for \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708617 1998 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708656 1998 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708667 1998 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-bmh68\" (UniqueName: \"kubernetes.io/projected/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-kube-api-access-bmh68\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708677 1998 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708686 1998 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708694 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708705 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.708722 kubelet[1998]: I0208 23:28:53.708713 1998 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708722 1998 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708730 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708738 1998 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708747 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708755 1998 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708762 1998 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.709073 kubelet[1998]: I0208 23:28:53.708771 1998 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tjxsp\" (UniqueName: \"kubernetes.io/projected/d33cec7f-32e9-4b1c-8d4f-66de69b84b76-kube-api-access-tjxsp\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:53.751333 systemd[1]: Removed slice kubepods-besteffort-pod22ca0bd5_1fbc_4eeb_b899_2047e1189cc8.slice. Feb 8 23:28:53.754580 systemd[1]: Removed slice kubepods-burstable-podd33cec7f_32e9_4b1c_8d4f_66de69b84b76.slice. 
Feb 8 23:28:53.754669 systemd[1]: kubepods-burstable-podd33cec7f_32e9_4b1c_8d4f_66de69b84b76.slice: Consumed 6.244s CPU time. Feb 8 23:28:53.794794 env[1117]: time="2024-02-08T23:28:53.794736187Z" level=info msg="RemoveContainer for \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\" returns successfully" Feb 8 23:28:53.795100 kubelet[1998]: I0208 23:28:53.795073 1998 scope.go:115] "RemoveContainer" containerID="efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12" Feb 8 23:28:53.797114 env[1117]: time="2024-02-08T23:28:53.796343641Z" level=info msg="RemoveContainer for \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\"" Feb 8 23:28:53.802907 env[1117]: time="2024-02-08T23:28:53.802855150Z" level=info msg="RemoveContainer for \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\" returns successfully" Feb 8 23:28:53.803417 kubelet[1998]: I0208 23:28:53.803361 1998 scope.go:115] "RemoveContainer" containerID="8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397" Feb 8 23:28:53.803774 env[1117]: time="2024-02-08T23:28:53.803695370Z" level=error msg="ContainerStatus for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\": not found" Feb 8 23:28:53.803881 kubelet[1998]: E0208 23:28:53.803848 1998 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\": not found" containerID="8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397" Feb 8 23:28:53.803948 kubelet[1998]: I0208 23:28:53.803887 1998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397} err="failed to get container status \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\": rpc error: code = NotFound desc = an error occurred when try to find container \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\": not found" Feb 8 23:28:53.803948 kubelet[1998]: I0208 23:28:53.803897 1998 scope.go:115] "RemoveContainer" containerID="ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887" Feb 8 23:28:53.804350 env[1117]: time="2024-02-08T23:28:53.804255969Z" level=error msg="ContainerStatus for \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\": not found" Feb 8 23:28:53.804528 kubelet[1998]: E0208 23:28:53.804502 1998 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\": not found" containerID="ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887" Feb 8 23:28:53.804574 kubelet[1998]: I0208 23:28:53.804540 1998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887} err="failed to get container status \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab029f37aed0ae29669fe08bfa1119d5b58eac1c34a9a60c96b6197f8fb8a887\": not found" Feb 8 23:28:53.804574 kubelet[1998]: I0208 23:28:53.804554 1998 scope.go:115] "RemoveContainer" containerID="3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95" Feb 8 23:28:53.804801 env[1117]: time="2024-02-08T23:28:53.804733839Z" level=error 
msg="ContainerStatus for \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\": not found" Feb 8 23:28:53.804914 kubelet[1998]: E0208 23:28:53.804889 1998 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\": not found" containerID="3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95" Feb 8 23:28:53.804989 kubelet[1998]: I0208 23:28:53.804925 1998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95} err="failed to get container status \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ed2ead2b0e5ae8f986795e0aa2de9fe531fc40c022401ec87c361c7d5acab95\": not found" Feb 8 23:28:53.804989 kubelet[1998]: I0208 23:28:53.804938 1998 scope.go:115] "RemoveContainer" containerID="aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60" Feb 8 23:28:53.805263 env[1117]: time="2024-02-08T23:28:53.805187303Z" level=error msg="ContainerStatus for \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\": not found" Feb 8 23:28:53.805424 kubelet[1998]: E0208 23:28:53.805343 1998 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\": not found" 
containerID="aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60" Feb 8 23:28:53.805424 kubelet[1998]: I0208 23:28:53.805372 1998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60} err="failed to get container status \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\": rpc error: code = NotFound desc = an error occurred when try to find container \"aea37a5abf5aff498a709aa33696f6a58e198282596c44d1083f68f52bda0b60\": not found" Feb 8 23:28:53.805424 kubelet[1998]: I0208 23:28:53.805383 1998 scope.go:115] "RemoveContainer" containerID="efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12" Feb 8 23:28:53.805609 env[1117]: time="2024-02-08T23:28:53.805546528Z" level=error msg="ContainerStatus for \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\": not found" Feb 8 23:28:53.805825 kubelet[1998]: E0208 23:28:53.805737 1998 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\": not found" containerID="efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12" Feb 8 23:28:53.805825 kubelet[1998]: I0208 23:28:53.805766 1998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12} err="failed to get container status \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\": rpc error: code = NotFound desc = an error occurred when try to find container \"efd37fc1d35c6bec61cbdd70c69f631b5f56e3f167ebdd65a4adde26da250e12\": not found" Feb 8 23:28:53.842741 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b-rootfs.mount: Deactivated successfully. Feb 8 23:28:53.842851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c-rootfs.mount: Deactivated successfully. Feb 8 23:28:53.842917 systemd[1]: var-lib-kubelet-pods-22ca0bd5\x2d1fbc\x2d4eeb\x2db899\x2d2047e1189cc8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmh68.mount: Deactivated successfully. Feb 8 23:28:53.842996 systemd[1]: var-lib-kubelet-pods-d33cec7f\x2d32e9\x2d4b1c\x2d8d4f\x2d66de69b84b76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtjxsp.mount: Deactivated successfully. Feb 8 23:28:53.843063 systemd[1]: var-lib-kubelet-pods-d33cec7f\x2d32e9\x2d4b1c\x2d8d4f\x2d66de69b84b76-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:28:53.843144 systemd[1]: var-lib-kubelet-pods-d33cec7f\x2d32e9\x2d4b1c\x2d8d4f\x2d66de69b84b76-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:28:54.242370 env[1117]: time="2024-02-08T23:28:54.242322449Z" level=info msg="StopContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" with timeout 1 (s)" Feb 8 23:28:54.242735 env[1117]: time="2024-02-08T23:28:54.242366494Z" level=error msg="StopContainer for \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\": not found" Feb 8 23:28:54.242735 env[1117]: time="2024-02-08T23:28:54.242575421Z" level=info msg="StopContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" with timeout 1 (s)" Feb 8 23:28:54.242735 env[1117]: time="2024-02-08T23:28:54.242595600Z" level=error msg="StopContainer for \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\": not found" Feb 8 23:28:54.242807 kubelet[1998]: E0208 23:28:54.242655 1998 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6\": not found" containerID="d8af7c04d9254d579519fdb80e010dd533f1e860161a2bafb7fa9d08daa6f9a6" Feb 8 23:28:54.242933 env[1117]: time="2024-02-08T23:28:54.242910621Z" level=info msg="StopPodSandbox for \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\"" Feb 8 23:28:54.243025 env[1117]: time="2024-02-08T23:28:54.242992637Z" level=info msg="TearDown network for sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" successfully" Feb 8 23:28:54.243056 env[1117]: time="2024-02-08T23:28:54.243022233Z" level=info msg="StopPodSandbox for 
\"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" returns successfully" Feb 8 23:28:54.243175 kubelet[1998]: E0208 23:28:54.243147 1998 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397\": not found" containerID="8725c9c5e27da4727c77babe1d19bce1bfaa9fd5fcb4bf202edfff8c5c0e4397" Feb 8 23:28:54.243326 env[1117]: time="2024-02-08T23:28:54.243291587Z" level=info msg="StopPodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\"" Feb 8 23:28:54.243506 env[1117]: time="2024-02-08T23:28:54.243340861Z" level=info msg="TearDown network for sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" successfully" Feb 8 23:28:54.243506 env[1117]: time="2024-02-08T23:28:54.243359697Z" level=info msg="StopPodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" returns successfully" Feb 8 23:28:54.243591 kubelet[1998]: I0208 23:28:54.243547 1998 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=22ca0bd5-1fbc-4eeb-b899-2047e1189cc8 path="/var/lib/kubelet/pods/22ca0bd5-1fbc-4eeb-b899-2047e1189cc8/volumes" Feb 8 23:28:54.243916 kubelet[1998]: I0208 23:28:54.243899 1998 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d33cec7f-32e9-4b1c-8d4f-66de69b84b76 path="/var/lib/kubelet/pods/d33cec7f-32e9-4b1c-8d4f-66de69b84b76/volumes" Feb 8 23:28:54.773259 sshd[3799]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:54.775591 systemd[1]: sshd@24-10.0.0.113:22-10.0.0.1:39112.service: Deactivated successfully. Feb 8 23:28:54.776052 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:28:54.776493 systemd-logind[1104]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:28:54.777579 systemd[1]: Started sshd@25-10.0.0.113:22-10.0.0.1:39126.service. 
Feb 8 23:28:54.778603 systemd-logind[1104]: Removed session 25. Feb 8 23:28:54.814701 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 39126 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:54.815592 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:54.818677 systemd-logind[1104]: New session 26 of user core. Feb 8 23:28:54.819636 systemd[1]: Started session-26.scope. Feb 8 23:28:55.597733 sshd[3965]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:55.600442 systemd[1]: sshd@25-10.0.0.113:22-10.0.0.1:39126.service: Deactivated successfully. Feb 8 23:28:55.600999 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:28:55.601520 systemd-logind[1104]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:28:55.602645 systemd[1]: Started sshd@26-10.0.0.113:22-10.0.0.1:39130.service. Feb 8 23:28:55.603534 systemd-logind[1104]: Removed session 26. Feb 8 23:28:55.638790 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 39130 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:55.640068 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:55.643503 systemd-logind[1104]: New session 27 of user core. Feb 8 23:28:55.644364 systemd[1]: Started session-27.scope. 
Feb 8 23:28:55.671187 kubelet[1998]: I0208 23:28:55.671124 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:55.671187 kubelet[1998]: E0208 23:28:55.671195 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22ca0bd5-1fbc-4eeb-b899-2047e1189cc8" containerName="cilium-operator" Feb 8 23:28:55.671614 kubelet[1998]: E0208 23:28:55.671209 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d33cec7f-32e9-4b1c-8d4f-66de69b84b76" containerName="mount-cgroup" Feb 8 23:28:55.671614 kubelet[1998]: E0208 23:28:55.671219 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d33cec7f-32e9-4b1c-8d4f-66de69b84b76" containerName="apply-sysctl-overwrites" Feb 8 23:28:55.671614 kubelet[1998]: E0208 23:28:55.671226 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d33cec7f-32e9-4b1c-8d4f-66de69b84b76" containerName="mount-bpf-fs" Feb 8 23:28:55.671614 kubelet[1998]: E0208 23:28:55.671232 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d33cec7f-32e9-4b1c-8d4f-66de69b84b76" containerName="clean-cilium-state" Feb 8 23:28:55.671614 kubelet[1998]: E0208 23:28:55.671239 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d33cec7f-32e9-4b1c-8d4f-66de69b84b76" containerName="cilium-agent" Feb 8 23:28:55.671614 kubelet[1998]: I0208 23:28:55.671262 1998 memory_manager.go:346] "RemoveStaleState removing state" podUID="22ca0bd5-1fbc-4eeb-b899-2047e1189cc8" containerName="cilium-operator" Feb 8 23:28:55.671614 kubelet[1998]: I0208 23:28:55.671268 1998 memory_manager.go:346] "RemoveStaleState removing state" podUID="d33cec7f-32e9-4b1c-8d4f-66de69b84b76" containerName="cilium-agent" Feb 8 23:28:55.676298 systemd[1]: Created slice kubepods-burstable-pod3ecaeaeb_3970_4e0c_9521_2543b554a82e.slice. 
Feb 8 23:28:55.821317 kubelet[1998]: I0208 23:28:55.821254 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-kernel\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821317 kubelet[1998]: I0208 23:28:55.821320 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-xtables-lock\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821536 kubelet[1998]: I0208 23:28:55.821387 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrbr\" (UniqueName: \"kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-kube-api-access-5xrbr\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821536 kubelet[1998]: I0208 23:28:55.821438 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cni-path\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821536 kubelet[1998]: I0208 23:28:55.821464 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-run\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821536 kubelet[1998]: I0208 23:28:55.821488 1998 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-config-path\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821536 kubelet[1998]: I0208 23:28:55.821507 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-lib-modules\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821646 kubelet[1998]: I0208 23:28:55.821585 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-ipsec-secrets\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821646 kubelet[1998]: I0208 23:28:55.821625 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-net\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821696 kubelet[1998]: I0208 23:28:55.821656 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-bpf-maps\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821721 kubelet[1998]: I0208 23:28:55.821694 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-etc-cni-netd\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821746 kubelet[1998]: I0208 23:28:55.821729 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hostproc\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821769 kubelet[1998]: I0208 23:28:55.821755 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hubble-tls\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821792 kubelet[1998]: I0208 23:28:55.821782 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-cgroup\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.821842 kubelet[1998]: I0208 23:28:55.821816 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-clustermesh-secrets\") pod \"cilium-2w786\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " pod="kube-system/cilium-2w786" Feb 8 23:28:55.871423 sshd[3977]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:55.874645 systemd[1]: sshd@26-10.0.0.113:22-10.0.0.1:39130.service: Deactivated successfully. Feb 8 23:28:55.875288 systemd[1]: session-27.scope: Deactivated successfully. Feb 8 23:28:55.875782 systemd-logind[1104]: Session 27 logged out. 
Waiting for processes to exit. Feb 8 23:28:55.876810 systemd[1]: Started sshd@27-10.0.0.113:22-10.0.0.1:39132.service. Feb 8 23:28:55.877642 systemd-logind[1104]: Removed session 27. Feb 8 23:28:55.918160 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 39132 ssh2: RSA SHA256:ZIzHIduQp2k+ZJQKyG+d10ckdlQJVNUpLoHdM3Iys8s Feb 8 23:28:55.919321 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:55.924406 systemd[1]: Started session-28.scope. Feb 8 23:28:55.925031 systemd-logind[1104]: New session 28 of user core. Feb 8 23:28:56.270657 kubelet[1998]: E0208 23:28:56.270627 1998 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:28:56.278641 kubelet[1998]: E0208 23:28:56.278606 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:56.279185 env[1117]: time="2024-02-08T23:28:56.279137876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2w786,Uid:3ecaeaeb-3970-4e0c-9521-2543b554a82e,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:56.485285 env[1117]: time="2024-02-08T23:28:56.485211655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:56.485462 env[1117]: time="2024-02-08T23:28:56.485250890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:56.485462 env[1117]: time="2024-02-08T23:28:56.485263986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:56.485576 env[1117]: time="2024-02-08T23:28:56.485505846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf pid=4012 runtime=io.containerd.runc.v2 Feb 8 23:28:56.498821 systemd[1]: Started cri-containerd-e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf.scope. Feb 8 23:28:56.521023 env[1117]: time="2024-02-08T23:28:56.520889287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2w786,Uid:3ecaeaeb-3970-4e0c-9521-2543b554a82e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\"" Feb 8 23:28:56.521836 kubelet[1998]: E0208 23:28:56.521803 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:56.523791 env[1117]: time="2024-02-08T23:28:56.523761885Z" level=info msg="CreateContainer within sandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:28:56.537230 env[1117]: time="2024-02-08T23:28:56.537159481Z" level=info msg="CreateContainer within sandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\"" Feb 8 23:28:56.537869 env[1117]: time="2024-02-08T23:28:56.537808299Z" level=info msg="StartContainer for \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\"" Feb 8 23:28:56.553964 systemd[1]: Started cri-containerd-bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa.scope. 
Feb 8 23:28:56.563916 systemd[1]: cri-containerd-bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa.scope: Deactivated successfully. Feb 8 23:28:56.564283 systemd[1]: Stopped cri-containerd-bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa.scope. Feb 8 23:28:56.578631 env[1117]: time="2024-02-08T23:28:56.578561318Z" level=info msg="shim disconnected" id=bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa Feb 8 23:28:56.578631 env[1117]: time="2024-02-08T23:28:56.578624850Z" level=warning msg="cleaning up after shim disconnected" id=bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa namespace=k8s.io Feb 8 23:28:56.578631 env[1117]: time="2024-02-08T23:28:56.578638525Z" level=info msg="cleaning up dead shim" Feb 8 23:28:56.586694 env[1117]: time="2024-02-08T23:28:56.586641786Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:28:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:28:56.587014 env[1117]: time="2024-02-08T23:28:56.586898286Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Feb 8 23:28:56.587305 env[1117]: time="2024-02-08T23:28:56.587257641Z" level=error msg="Failed to pipe stdout of container \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\"" error="reading from a closed fifo" Feb 8 23:28:56.587491 env[1117]: time="2024-02-08T23:28:56.587434678Z" level=error msg="Failed to pipe stderr of container \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\"" error="reading from a closed fifo" Feb 8 23:28:56.589887 env[1117]: time="2024-02-08T23:28:56.589838101Z" level=error 
msg="StartContainer for \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:28:56.590093 kubelet[1998]: E0208 23:28:56.590072 1998 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa" Feb 8 23:28:56.590211 kubelet[1998]: E0208 23:28:56.590199 1998 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:28:56.590211 kubelet[1998]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:28:56.590211 kubelet[1998]: rm /hostbin/cilium-mount Feb 8 23:28:56.590211 kubelet[1998]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xrbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-2w786_kube-system(3ecaeaeb-3970-4e0c-9521-2543b554a82e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:28:56.590363 kubelet[1998]: E0208 23:28:56.590241 1998 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2w786" podUID=3ecaeaeb-3970-4e0c-9521-2543b554a82e Feb 8 23:28:57.460699 env[1117]: time="2024-02-08T23:28:57.460655788Z" level=info msg="StopPodSandbox for \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\"" Feb 8 23:28:57.461172 env[1117]: time="2024-02-08T23:28:57.460745388Z" level=info msg="Container to stop \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:28:57.462291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf-shm.mount: Deactivated successfully. Feb 8 23:28:57.467357 systemd[1]: cri-containerd-e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf.scope: Deactivated successfully. Feb 8 23:28:57.484826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf-rootfs.mount: Deactivated successfully. 
Feb 8 23:28:57.604360 env[1117]: time="2024-02-08T23:28:57.604303414Z" level=info msg="shim disconnected" id=e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf Feb 8 23:28:57.604360 env[1117]: time="2024-02-08T23:28:57.604357797Z" level=warning msg="cleaning up after shim disconnected" id=e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf namespace=k8s.io Feb 8 23:28:57.604360 env[1117]: time="2024-02-08T23:28:57.604368077Z" level=info msg="cleaning up dead shim" Feb 8 23:28:57.610709 env[1117]: time="2024-02-08T23:28:57.610671749Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4098 runtime=io.containerd.runc.v2\n" Feb 8 23:28:57.611008 env[1117]: time="2024-02-08T23:28:57.610965059Z" level=info msg="TearDown network for sandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" successfully" Feb 8 23:28:57.611060 env[1117]: time="2024-02-08T23:28:57.611007500Z" level=info msg="StopPodSandbox for \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" returns successfully" Feb 8 23:28:57.633259 kubelet[1998]: I0208 23:28:57.633204 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-ipsec-secrets\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633259 kubelet[1998]: I0208 23:28:57.633257 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-config-path\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633708 kubelet[1998]: I0208 23:28:57.633281 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-bpf-maps\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633708 kubelet[1998]: I0208 23:28:57.633304 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hostproc\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633708 kubelet[1998]: I0208 23:28:57.633334 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xrbr\" (UniqueName: \"kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-kube-api-access-5xrbr\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633708 kubelet[1998]: I0208 23:28:57.633336 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.633708 kubelet[1998]: I0208 23:28:57.633356 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-lib-modules\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633708 kubelet[1998]: I0208 23:28:57.633385 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.633911 kubelet[1998]: I0208 23:28:57.633392 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-clustermesh-secrets\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633911 kubelet[1998]: I0208 23:28:57.633407 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.633911 kubelet[1998]: I0208 23:28:57.633422 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cni-path\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633911 kubelet[1998]: I0208 23:28:57.633445 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-etc-cni-netd\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633911 kubelet[1998]: I0208 23:28:57.633469 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-net\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.633911 kubelet[1998]: I0208 23:28:57.633493 1998 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-kernel\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.634110 kubelet[1998]: I0208 23:28:57.633520 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hubble-tls\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.634110 kubelet[1998]: I0208 23:28:57.633560 1998 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.634110 kubelet[1998]: I0208 23:28:57.633575 1998 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.634110 kubelet[1998]: I0208 23:28:57.633585 1998 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.634110 kubelet[1998]: W0208 23:28:57.634006 1998 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3ecaeaeb-3970-4e0c-9521-2543b554a82e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:28:57.634110 kubelet[1998]: I0208 23:28:57.634025 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.634110 kubelet[1998]: I0208 23:28:57.634065 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.634326 kubelet[1998]: I0208 23:28:57.634080 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.634326 kubelet[1998]: I0208 23:28:57.634081 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.639512 kubelet[1998]: I0208 23:28:57.636636 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:28:57.639512 kubelet[1998]: I0208 23:28:57.637081 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-kube-api-access-5xrbr" (OuterVolumeSpecName: "kube-api-access-5xrbr") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "kube-api-access-5xrbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:28:57.639512 kubelet[1998]: I0208 23:28:57.638228 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:28:57.639512 kubelet[1998]: I0208 23:28:57.639321 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:28:57.637437 systemd[1]: var-lib-kubelet-pods-3ecaeaeb\x2d3970\x2d4e0c\x2d9521\x2d2543b554a82e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:28:57.639108 systemd[1]: var-lib-kubelet-pods-3ecaeaeb\x2d3970\x2d4e0c\x2d9521\x2d2543b554a82e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xrbr.mount: Deactivated successfully. Feb 8 23:28:57.639188 systemd[1]: var-lib-kubelet-pods-3ecaeaeb\x2d3970\x2d4e0c\x2d9521\x2d2543b554a82e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 8 23:28:57.640020 kubelet[1998]: I0208 23:28:57.639961 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:28:57.734178 kubelet[1998]: I0208 23:28:57.734034 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-cgroup\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.734178 kubelet[1998]: I0208 23:28:57.734089 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-run\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.734178 kubelet[1998]: I0208 23:28:57.734107 1998 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-xtables-lock\") pod \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\" (UID: \"3ecaeaeb-3970-4e0c-9521-2543b554a82e\") " Feb 8 23:28:57.734178 kubelet[1998]: I0208 23:28:57.734141 1998 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-5xrbr\" (UniqueName: \"kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-kube-api-access-5xrbr\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734178 kubelet[1998]: I0208 23:28:57.734141 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-run" 
(OuterVolumeSpecName: "cilium-run") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.734399 kubelet[1998]: I0208 23:28:57.734254 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.734699 kubelet[1998]: I0208 23:28:57.734140 1998 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ecaeaeb-3970-4e0c-9521-2543b554a82e" (UID: "3ecaeaeb-3970-4e0c-9521-2543b554a82e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:28:57.734734 kubelet[1998]: I0208 23:28:57.734151 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734759 kubelet[1998]: I0208 23:28:57.734732 1998 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734759 kubelet[1998]: I0208 23:28:57.734749 1998 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734806 kubelet[1998]: I0208 23:28:57.734762 1998 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734806 kubelet[1998]: I0208 23:28:57.734774 1998 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734806 kubelet[1998]: I0208 23:28:57.734786 1998 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734806 kubelet[1998]: I0208 23:28:57.734797 1998 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ecaeaeb-3970-4e0c-9521-2543b554a82e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.734897 kubelet[1998]: I0208 
23:28:57.734812 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.835732 kubelet[1998]: I0208 23:28:57.835670 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.835732 kubelet[1998]: I0208 23:28:57.835714 1998 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.835732 kubelet[1998]: I0208 23:28:57.835728 1998 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ecaeaeb-3970-4e0c-9521-2543b554a82e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 8 23:28:57.931385 systemd[1]: var-lib-kubelet-pods-3ecaeaeb\x2d3970\x2d4e0c\x2d9521\x2d2543b554a82e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:28:58.247678 systemd[1]: Removed slice kubepods-burstable-pod3ecaeaeb_3970_4e0c_9521_2543b554a82e.slice. 
Feb 8 23:28:58.391372 kubelet[1998]: I0208 23:28:58.391339 1998 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:28:58.391283668 +0000 UTC m=+112.257292790 LastTransitionTime:2024-02-08 23:28:58.391283668 +0000 UTC m=+112.257292790 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:28:58.463574 kubelet[1998]: I0208 23:28:58.463539 1998 scope.go:115] "RemoveContainer" containerID="bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa" Feb 8 23:28:58.465049 env[1117]: time="2024-02-08T23:28:58.464682120Z" level=info msg="RemoveContainer for \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\"" Feb 8 23:28:58.545654 env[1117]: time="2024-02-08T23:28:58.545529581Z" level=info msg="RemoveContainer for \"bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa\" returns successfully" Feb 8 23:28:58.757469 kubelet[1998]: I0208 23:28:58.757429 1998 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:28:58.757880 kubelet[1998]: E0208 23:28:58.757511 1998 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ecaeaeb-3970-4e0c-9521-2543b554a82e" containerName="mount-cgroup" Feb 8 23:28:58.757880 kubelet[1998]: I0208 23:28:58.757543 1998 memory_manager.go:346] "RemoveStaleState removing state" podUID="3ecaeaeb-3970-4e0c-9521-2543b554a82e" containerName="mount-cgroup" Feb 8 23:28:58.762394 systemd[1]: Created slice kubepods-burstable-poda3ce418d_fbec_4ffc_8bed_63afdb91be6c.slice. 
Feb 8 23:28:58.841693 kubelet[1998]: I0208 23:28:58.841565 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-lib-modules\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.841693 kubelet[1998]: I0208 23:28:58.841624 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-bpf-maps\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.841879 kubelet[1998]: I0208 23:28:58.841776 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-clustermesh-secrets\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.841879 kubelet[1998]: I0208 23:28:58.841849 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-cilium-run\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.841955 kubelet[1998]: I0208 23:28:58.841916 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-cilium-cgroup\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842028 kubelet[1998]: I0208 23:28:58.841955 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-host-proc-sys-net\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842130 kubelet[1998]: I0208 23:28:58.842047 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-cni-path\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842130 kubelet[1998]: I0208 23:28:58.842076 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hg2\" (UniqueName: \"kubernetes.io/projected/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-kube-api-access-c5hg2\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842130 kubelet[1998]: I0208 23:28:58.842103 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-hostproc\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842255 kubelet[1998]: I0208 23:28:58.842133 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-etc-cni-netd\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842255 kubelet[1998]: I0208 23:28:58.842159 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-hubble-tls\") pod \"cilium-b92zf\" 
(UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842255 kubelet[1998]: I0208 23:28:58.842197 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-host-proc-sys-kernel\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842255 kubelet[1998]: I0208 23:28:58.842226 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-xtables-lock\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842255 kubelet[1998]: I0208 23:28:58.842252 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-cilium-ipsec-secrets\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:58.842418 kubelet[1998]: I0208 23:28:58.842277 1998 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3ce418d-fbec-4ffc-8bed-63afdb91be6c-cilium-config-path\") pod \"cilium-b92zf\" (UID: \"a3ce418d-fbec-4ffc-8bed-63afdb91be6c\") " pod="kube-system/cilium-b92zf" Feb 8 23:28:59.064545 kubelet[1998]: E0208 23:28:59.064502 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:59.065079 env[1117]: time="2024-02-08T23:28:59.065029564Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-b92zf,Uid:a3ce418d-fbec-4ffc-8bed-63afdb91be6c,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:59.078025 env[1117]: time="2024-02-08T23:28:59.077938243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:59.078025 env[1117]: time="2024-02-08T23:28:59.078013796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:59.078220 env[1117]: time="2024-02-08T23:28:59.078040117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:59.078258 env[1117]: time="2024-02-08T23:28:59.078199250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676 pid=4124 runtime=io.containerd.runc.v2 Feb 8 23:28:59.088283 systemd[1]: Started cri-containerd-f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676.scope. 
Feb 8 23:28:59.108155 env[1117]: time="2024-02-08T23:28:59.107877274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b92zf,Uid:a3ce418d-fbec-4ffc-8bed-63afdb91be6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\"" Feb 8 23:28:59.108648 kubelet[1998]: E0208 23:28:59.108628 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:59.112426 env[1117]: time="2024-02-08T23:28:59.112372490Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:28:59.128585 env[1117]: time="2024-02-08T23:28:59.128511892Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9\"" Feb 8 23:28:59.129102 env[1117]: time="2024-02-08T23:28:59.129067864Z" level=info msg="StartContainer for \"00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9\"" Feb 8 23:28:59.142732 systemd[1]: Started cri-containerd-00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9.scope. Feb 8 23:28:59.164871 env[1117]: time="2024-02-08T23:28:59.164823432Z" level=info msg="StartContainer for \"00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9\" returns successfully" Feb 8 23:28:59.169836 systemd[1]: cri-containerd-00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9.scope: Deactivated successfully. 
Feb 8 23:28:59.198821 env[1117]: time="2024-02-08T23:28:59.198762916Z" level=info msg="shim disconnected" id=00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9 Feb 8 23:28:59.199019 env[1117]: time="2024-02-08T23:28:59.198825645Z" level=warning msg="cleaning up after shim disconnected" id=00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9 namespace=k8s.io Feb 8 23:28:59.199019 env[1117]: time="2024-02-08T23:28:59.198836796Z" level=info msg="cleaning up dead shim" Feb 8 23:28:59.205388 env[1117]: time="2024-02-08T23:28:59.205353530Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4206 runtime=io.containerd.runc.v2\n" Feb 8 23:28:59.467998 kubelet[1998]: E0208 23:28:59.467279 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:28:59.468947 env[1117]: time="2024-02-08T23:28:59.468902688Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:28:59.528991 env[1117]: time="2024-02-08T23:28:59.528901611Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564\"" Feb 8 23:28:59.529581 env[1117]: time="2024-02-08T23:28:59.529536883Z" level=info msg="StartContainer for \"1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564\"" Feb 8 23:28:59.573573 systemd[1]: Started cri-containerd-1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564.scope. 
Feb 8 23:28:59.594186 env[1117]: time="2024-02-08T23:28:59.594136403Z" level=info msg="StartContainer for \"1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564\" returns successfully" Feb 8 23:28:59.597485 systemd[1]: cri-containerd-1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564.scope: Deactivated successfully. Feb 8 23:28:59.614226 env[1117]: time="2024-02-08T23:28:59.614160307Z" level=info msg="shim disconnected" id=1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564 Feb 8 23:28:59.614226 env[1117]: time="2024-02-08T23:28:59.614213849Z" level=warning msg="cleaning up after shim disconnected" id=1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564 namespace=k8s.io Feb 8 23:28:59.614226 env[1117]: time="2024-02-08T23:28:59.614222205Z" level=info msg="cleaning up dead shim" Feb 8 23:28:59.620144 env[1117]: time="2024-02-08T23:28:59.620117373Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4266 runtime=io.containerd.runc.v2\n" Feb 8 23:28:59.684709 kubelet[1998]: W0208 23:28:59.684660 1998 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ecaeaeb_3970_4e0c_9521_2543b554a82e.slice/cri-containerd-bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa.scope WatchSource:0}: container "bed4b62a4f0493084218d3e90a48d878ff5eaad8ac25457cd623d7386f93f7fa" in namespace "k8s.io": not found Feb 8 23:29:00.242881 kubelet[1998]: I0208 23:29:00.242850 1998 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3ecaeaeb-3970-4e0c-9521-2543b554a82e path="/var/lib/kubelet/pods/3ecaeaeb-3970-4e0c-9521-2543b554a82e/volumes" Feb 8 23:29:00.470042 kubelet[1998]: E0208 23:29:00.470011 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 8 23:29:00.471555 env[1117]: time="2024-02-08T23:29:00.471514120Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:29:00.481792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940005939.mount: Deactivated successfully. Feb 8 23:29:00.490310 env[1117]: time="2024-02-08T23:29:00.490256665Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433\"" Feb 8 23:29:00.490806 env[1117]: time="2024-02-08T23:29:00.490773231Z" level=info msg="StartContainer for \"bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433\"" Feb 8 23:29:00.506046 systemd[1]: Started cri-containerd-bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433.scope. Feb 8 23:29:00.528012 env[1117]: time="2024-02-08T23:29:00.527942720Z" level=info msg="StartContainer for \"bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433\" returns successfully" Feb 8 23:29:00.528660 systemd[1]: cri-containerd-bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433.scope: Deactivated successfully. 
Feb 8 23:29:00.555940 env[1117]: time="2024-02-08T23:29:00.555888748Z" level=info msg="shim disconnected" id=bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433 Feb 8 23:29:00.555940 env[1117]: time="2024-02-08T23:29:00.555934224Z" level=warning msg="cleaning up after shim disconnected" id=bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433 namespace=k8s.io Feb 8 23:29:00.555940 env[1117]: time="2024-02-08T23:29:00.555943301Z" level=info msg="cleaning up dead shim" Feb 8 23:29:00.562421 env[1117]: time="2024-02-08T23:29:00.562379754Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4324 runtime=io.containerd.runc.v2\n" Feb 8 23:29:00.946765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433-rootfs.mount: Deactivated successfully. Feb 8 23:29:01.272369 kubelet[1998]: E0208 23:29:01.272275 1998 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:29:01.473081 kubelet[1998]: E0208 23:29:01.473055 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:01.475318 env[1117]: time="2024-02-08T23:29:01.475285226Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:29:01.492881 env[1117]: time="2024-02-08T23:29:01.492834260Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405\"" 
Feb 8 23:29:01.493329 env[1117]: time="2024-02-08T23:29:01.493308946Z" level=info msg="StartContainer for \"6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405\"" Feb 8 23:29:01.509297 systemd[1]: Started cri-containerd-6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405.scope. Feb 8 23:29:01.528838 systemd[1]: cri-containerd-6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405.scope: Deactivated successfully. Feb 8 23:29:01.531023 env[1117]: time="2024-02-08T23:29:01.530987287Z" level=info msg="StartContainer for \"6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405\" returns successfully" Feb 8 23:29:01.549436 env[1117]: time="2024-02-08T23:29:01.549380112Z" level=info msg="shim disconnected" id=6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405 Feb 8 23:29:01.549436 env[1117]: time="2024-02-08T23:29:01.549432731Z" level=warning msg="cleaning up after shim disconnected" id=6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405 namespace=k8s.io Feb 8 23:29:01.549436 env[1117]: time="2024-02-08T23:29:01.549441919Z" level=info msg="cleaning up dead shim" Feb 8 23:29:01.558411 env[1117]: time="2024-02-08T23:29:01.558374878Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4379 runtime=io.containerd.runc.v2\n" Feb 8 23:29:01.947370 systemd[1]: run-containerd-runc-k8s.io-6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405-runc.mIr8za.mount: Deactivated successfully. Feb 8 23:29:01.947476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405-rootfs.mount: Deactivated successfully. 
Feb 8 23:29:02.476404 kubelet[1998]: E0208 23:29:02.476379 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:02.478255 env[1117]: time="2024-02-08T23:29:02.478207948Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:29:02.730157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450665173.mount: Deactivated successfully. Feb 8 23:29:02.792526 kubelet[1998]: W0208 23:29:02.792490 1998 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ce418d_fbec_4ffc_8bed_63afdb91be6c.slice/cri-containerd-00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9.scope WatchSource:0}: task 00c30c2e7307ba23a6c1fdbaa26ebf4920c37dc87c71e1fb748cfbae645ba6b9 not found: not found Feb 8 23:29:02.967620 env[1117]: time="2024-02-08T23:29:02.967551540Z" level=info msg="CreateContainer within sandbox \"f019753597d53065b6f43ded80ade108457e93746d1dabb9098305289f6c6676\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1\"" Feb 8 23:29:02.968389 env[1117]: time="2024-02-08T23:29:02.968356596Z" level=info msg="StartContainer for \"e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1\"" Feb 8 23:29:02.984401 systemd[1]: Started cri-containerd-e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1.scope. 
Feb 8 23:29:03.014780 env[1117]: time="2024-02-08T23:29:03.014730032Z" level=info msg="StartContainer for \"e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1\" returns successfully" Feb 8 23:29:03.027899 systemd[1]: run-containerd-runc-k8s.io-e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1-runc.RtRDiu.mount: Deactivated successfully. Feb 8 23:29:03.253994 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:29:03.481271 kubelet[1998]: E0208 23:29:03.481226 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:03.493183 kubelet[1998]: I0208 23:29:03.492772 1998 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b92zf" podStartSLOduration=5.492727732 pod.CreationTimestamp="2024-02-08 23:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:29:03.492283344 +0000 UTC m=+117.358292466" watchObservedRunningTime="2024-02-08 23:29:03.492727732 +0000 UTC m=+117.358736854" Feb 8 23:29:04.171260 systemd[1]: run-containerd-runc-k8s.io-e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1-runc.J4SDYx.mount: Deactivated successfully. 
Feb 8 23:29:04.482830 kubelet[1998]: E0208 23:29:04.482804 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:05.484479 kubelet[1998]: E0208 23:29:05.484446 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:05.652662 systemd-networkd[1019]: lxc_health: Link UP Feb 8 23:29:05.662457 systemd-networkd[1019]: lxc_health: Gained carrier Feb 8 23:29:05.664766 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:29:05.899475 kubelet[1998]: W0208 23:29:05.899344 1998 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ce418d_fbec_4ffc_8bed_63afdb91be6c.slice/cri-containerd-1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564.scope WatchSource:0}: task 1ec8df80bd73753d3657b6109cdd148c03f0095a276cf932821e725157911564 not found: not found Feb 8 23:29:06.216318 env[1117]: time="2024-02-08T23:29:06.216085881Z" level=info msg="StopPodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\"" Feb 8 23:29:06.216318 env[1117]: time="2024-02-08T23:29:06.216189969Z" level=info msg="TearDown network for sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" successfully" Feb 8 23:29:06.216318 env[1117]: time="2024-02-08T23:29:06.216231529Z" level=info msg="StopPodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" returns successfully" Feb 8 23:29:06.216803 env[1117]: time="2024-02-08T23:29:06.216568713Z" level=info msg="RemovePodSandbox for \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\"" Feb 8 23:29:06.216803 env[1117]: time="2024-02-08T23:29:06.216591577Z" level=info msg="Forcibly stopping sandbox 
\"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\"" Feb 8 23:29:06.216803 env[1117]: time="2024-02-08T23:29:06.216654788Z" level=info msg="TearDown network for sandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" successfully" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.278957249Z" level=info msg="RemovePodSandbox \"d8f4516854a980963d5d1a0a297565f0a919f628be2ceab7e2b25a49e978014b\" returns successfully" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.279889900Z" level=info msg="StopPodSandbox for \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\"" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.279983779Z" level=info msg="TearDown network for sandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" successfully" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.280021621Z" level=info msg="StopPodSandbox for \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" returns successfully" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.280315332Z" level=info msg="RemovePodSandbox for \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\"" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.280338226Z" level=info msg="Forcibly stopping sandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\"" Feb 8 23:29:06.280633 env[1117]: time="2024-02-08T23:29:06.280404373Z" level=info msg="TearDown network for sandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" successfully" Feb 8 23:29:06.279077 systemd[1]: run-containerd-runc-k8s.io-e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1-runc.dIUbaA.mount: Deactivated successfully. 
Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.284561173Z" level=info msg="RemovePodSandbox \"e9388ffe80db45aa5f7681204f25a668cf0adca67389d35f2d2f0725d5a3abcf\" returns successfully" Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.285051960Z" level=info msg="StopPodSandbox for \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\"" Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.285135279Z" level=info msg="TearDown network for sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" successfully" Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.285176958Z" level=info msg="StopPodSandbox for \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" returns successfully" Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.285474547Z" level=info msg="RemovePodSandbox for \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\"" Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.285496629Z" level=info msg="Forcibly stopping sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\"" Feb 8 23:29:06.286346 env[1117]: time="2024-02-08T23:29:06.285556734Z" level=info msg="TearDown network for sandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" successfully" Feb 8 23:29:06.292519 env[1117]: time="2024-02-08T23:29:06.292482613Z" level=info msg="RemovePodSandbox \"80ed5e598f0164925c20bbde49ff06b7fd0a5fe674ff898c4ab2fdf1d6ce620c\" returns successfully" Feb 8 23:29:06.824065 systemd-networkd[1019]: lxc_health: Gained IPv6LL Feb 8 23:29:07.066321 kubelet[1998]: E0208 23:29:07.066286 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:07.487792 kubelet[1998]: E0208 23:29:07.487765 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 8 23:29:08.442912 systemd[1]: run-containerd-runc-k8s.io-e000508522af2d0b36db6c952447bb80196b74bdfff0ce63ec95fa150069efa1-runc.WE4dMx.mount: Deactivated successfully. Feb 8 23:29:09.006456 kubelet[1998]: W0208 23:29:09.006409 1998 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ce418d_fbec_4ffc_8bed_63afdb91be6c.slice/cri-containerd-bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433.scope WatchSource:0}: task bd2282f1ce1977279a7324f4b930ffc6fd518013e87c2b72d09f465b00fd9433 not found: not found Feb 8 23:29:12.117383 kubelet[1998]: W0208 23:29:12.117332 1998 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ce418d_fbec_4ffc_8bed_63afdb91be6c.slice/cri-containerd-6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405.scope WatchSource:0}: task 6e8164a0d8cdf3df4271179b9894a675e949b3749c32e5f3c76e839b3e4f4405 not found: not found Feb 8 23:29:12.652653 sshd[3990]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:12.655111 systemd[1]: sshd@27-10.0.0.113:22-10.0.0.1:39132.service: Deactivated successfully. Feb 8 23:29:12.655803 systemd[1]: session-28.scope: Deactivated successfully. Feb 8 23:29:12.656267 systemd-logind[1104]: Session 28 logged out. Waiting for processes to exit. Feb 8 23:29:12.656999 systemd-logind[1104]: Removed session 28. Feb 8 23:29:13.242157 kubelet[1998]: E0208 23:29:13.242107 1998 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"