Apr 12 18:47:48.664128 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024
Apr 12 18:47:48.664161 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:47:48.664172 kernel: BIOS-provided physical RAM map:
Apr 12 18:47:48.664180 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 12 18:47:48.664187 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 12 18:47:48.664194 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 12 18:47:48.664203 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Apr 12 18:47:48.664211 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Apr 12 18:47:48.664221 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 12 18:47:48.664227 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 12 18:47:48.664235 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 12 18:47:48.664242 kernel: NX (Execute Disable) protection: active
Apr 12 18:47:48.664250 kernel: SMBIOS 2.8 present.
Apr 12 18:47:48.664257 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 12 18:47:48.664269 kernel: Hypervisor detected: KVM
Apr 12 18:47:48.664277 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 12 18:47:48.664285 kernel: kvm-clock: cpu 0, msr 99191001, primary cpu clock
Apr 12 18:47:48.664638 kernel: kvm-clock: using sched offset of 5635170247 cycles
Apr 12 18:47:48.664652 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 12 18:47:48.664661 kernel: tsc: Detected 2794.748 MHz processor
Apr 12 18:47:48.664669 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 12 18:47:48.664726 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 12 18:47:48.664741 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Apr 12 18:47:48.664754 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 12 18:47:48.664762 kernel: Using GB pages for direct mapping
Apr 12 18:47:48.664770 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:47:48.664827 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Apr 12 18:47:48.664836 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:47:48.664845 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:47:48.664853 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:47:48.664861 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 12 18:47:48.664916 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:47:48.664932 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:47:48.664940 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:47:48.664949 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Apr 12 18:47:48.664957 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Apr 12 18:47:48.665015 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 12 18:47:48.665025 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Apr 12 18:47:48.665033 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Apr 12 18:47:48.665041 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Apr 12 18:47:48.665104 kernel: No NUMA configuration found
Apr 12 18:47:48.665114 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Apr 12 18:47:48.665122 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Apr 12 18:47:48.665130 kernel: Zone ranges:
Apr 12 18:47:48.665138 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 12 18:47:48.665146 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Apr 12 18:47:48.665207 kernel: Normal empty
Apr 12 18:47:48.665216 kernel: Movable zone start for each node
Apr 12 18:47:48.665225 kernel: Early memory node ranges
Apr 12 18:47:48.665234 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 12 18:47:48.665528 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Apr 12 18:47:48.665590 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Apr 12 18:47:48.665600 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 18:47:48.665609 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 12 18:47:48.665618 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Apr 12 18:47:48.665712 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 12 18:47:48.665722 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 12 18:47:48.665731 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 12 18:47:48.665793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 12 18:47:48.665807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 12 18:47:48.665816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 12 18:47:48.665825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 12 18:47:48.665834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 12 18:47:48.665892 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 12 18:47:48.665906 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 12 18:47:48.665914 kernel: TSC deadline timer available
Apr 12 18:47:48.665921 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 12 18:47:48.665929 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 12 18:47:48.665936 kernel: kvm-guest: setup PV sched yield
Apr 12 18:47:48.666002 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Apr 12 18:47:48.666012 kernel: Booting paravirtualized kernel on KVM
Apr 12 18:47:48.666021 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 12 18:47:48.666030 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Apr 12 18:47:48.666096 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Apr 12 18:47:48.666107 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Apr 12 18:47:48.666116 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 12 18:47:48.666124 kernel: kvm-guest: setup async PF for cpu 0
Apr 12 18:47:48.666185 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Apr 12 18:47:48.666198 kernel: kvm-guest: PV spinlocks enabled
Apr 12 18:47:48.666207 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 12 18:47:48.666216 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Apr 12 18:47:48.666224 kernel: Policy zone: DMA32
Apr 12 18:47:48.666286 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:47:48.666322 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:47:48.666381 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:47:48.666393 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:47:48.666402 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:47:48.666412 kernel: Memory: 2436704K/2571756K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 134792K reserved, 0K cma-reserved)
Apr 12 18:47:48.666421 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 12 18:47:48.666485 kernel: ftrace: allocating 34508 entries in 135 pages
Apr 12 18:47:48.666498 kernel: ftrace: allocated 135 pages with 4 groups
Apr 12 18:47:48.666507 kernel: rcu: Hierarchical RCU implementation.
Apr 12 18:47:48.666517 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:47:48.666641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 12 18:47:48.666651 kernel: Rude variant of Tasks RCU enabled.
Apr 12 18:47:48.666659 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:47:48.666668 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:47:48.666676 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 12 18:47:48.666684 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 12 18:47:48.666697 kernel: random: crng init done
Apr 12 18:47:48.666705 kernel: Console: colour VGA+ 80x25
Apr 12 18:47:48.666714 kernel: printk: console [ttyS0] enabled
Apr 12 18:47:48.666722 kernel: ACPI: Core revision 20210730
Apr 12 18:47:48.666730 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 12 18:47:48.666739 kernel: APIC: Switch to symmetric I/O mode setup
Apr 12 18:47:48.666747 kernel: x2apic enabled
Apr 12 18:47:48.666755 kernel: Switched APIC routing to physical x2apic.
Apr 12 18:47:48.666763 kernel: kvm-guest: setup PV IPIs
Apr 12 18:47:48.666773 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 12 18:47:48.666783 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 12 18:47:48.666791 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 12 18:47:48.666800 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 12 18:47:48.666808 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 12 18:47:48.666817 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 12 18:47:48.666831 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 12 18:47:48.666842 kernel: Spectre V2 : Mitigation: Retpolines
Apr 12 18:47:48.666852 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 12 18:47:48.666861 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 12 18:47:48.666881 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 12 18:47:48.666891 kernel: RETBleed: Mitigation: untrained return thunk
Apr 12 18:47:48.666904 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 12 18:47:48.666913 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Apr 12 18:47:48.666922 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 12 18:47:48.666930 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 12 18:47:48.666938 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 12 18:47:48.666947 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 12 18:47:48.666956 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 12 18:47:48.666967 kernel: Freeing SMP alternatives memory: 32K
Apr 12 18:47:48.666976 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:47:48.666985 kernel: LSM: Security Framework initializing
Apr 12 18:47:48.666994 kernel: SELinux: Initializing.
Apr 12 18:47:48.667002 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:47:48.667011 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:47:48.667020 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 12 18:47:48.667031 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 12 18:47:48.667040 kernel: ... version: 0
Apr 12 18:47:48.667049 kernel: ... bit width: 48
Apr 12 18:47:48.667058 kernel: ... generic registers: 6
Apr 12 18:47:48.667066 kernel: ... value mask: 0000ffffffffffff
Apr 12 18:47:48.667075 kernel: ... max period: 00007fffffffffff
Apr 12 18:47:48.667083 kernel: ... fixed-purpose events: 0
Apr 12 18:47:48.667092 kernel: ... event mask: 000000000000003f
Apr 12 18:47:48.667102 kernel: signal: max sigframe size: 1776
Apr 12 18:47:48.667113 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:47:48.667123 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:47:48.667132 kernel: x86: Booting SMP configuration:
Apr 12 18:47:48.667141 kernel: .... node #0, CPUs: #1
Apr 12 18:47:48.667150 kernel: kvm-clock: cpu 1, msr 99191041, secondary cpu clock
Apr 12 18:47:48.667158 kernel: kvm-guest: setup async PF for cpu 1
Apr 12 18:47:48.667167 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Apr 12 18:47:48.667176 kernel: #2
Apr 12 18:47:48.667186 kernel: kvm-clock: cpu 2, msr 99191081, secondary cpu clock
Apr 12 18:47:48.667194 kernel: kvm-guest: setup async PF for cpu 2
Apr 12 18:47:48.667206 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Apr 12 18:47:48.667215 kernel: #3
Apr 12 18:47:48.667224 kernel: kvm-clock: cpu 3, msr 991910c1, secondary cpu clock
Apr 12 18:47:48.667232 kernel: kvm-guest: setup async PF for cpu 3
Apr 12 18:47:48.667242 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Apr 12 18:47:48.667252 kernel: smp: Brought up 1 node, 4 CPUs
Apr 12 18:47:48.667262 kernel: smpboot: Max logical packages: 1
Apr 12 18:47:48.667271 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 12 18:47:48.667281 kernel: devtmpfs: initialized
Apr 12 18:47:48.667325 kernel: x86/mm: Memory block size: 128MB
Apr 12 18:47:48.667335 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:47:48.667345 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 12 18:47:48.667354 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:47:48.667363 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:47:48.667372 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:47:48.667382 kernel: audit: type=2000 audit(1712947667.738:1): state=initialized audit_enabled=0 res=1
Apr 12 18:47:48.667392 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:47:48.667401 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 12 18:47:48.667414 kernel: cpuidle: using governor menu
Apr 12 18:47:48.667423 kernel: ACPI: bus type PCI registered
Apr 12 18:47:48.667431 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:47:48.667440 kernel: dca service started, version 1.12.1
Apr 12 18:47:48.667449 kernel: PCI: Using configuration type 1 for base access
Apr 12 18:47:48.667459 kernel: PCI: Using configuration type 1 for extended access
Apr 12 18:47:48.667468 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 12 18:47:48.667478 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:47:48.667488 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:47:48.667501 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:47:48.667511 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:47:48.667520 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:47:48.667530 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:47:48.667540 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:47:48.667550 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:47:48.667559 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:47:48.667568 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:47:48.667577 kernel: ACPI: Interpreter enabled
Apr 12 18:47:48.667590 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 12 18:47:48.667600 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 12 18:47:48.667609 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 12 18:47:48.667618 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 12 18:47:48.667627 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:47:48.667852 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:47:48.667871 kernel: acpiphp: Slot [3] registered
Apr 12 18:47:48.667880 kernel: acpiphp: Slot [4] registered
Apr 12 18:47:48.667894 kernel: acpiphp: Slot [5] registered
Apr 12 18:47:48.667902 kernel: acpiphp: Slot [6] registered
Apr 12 18:47:48.667912 kernel: acpiphp: Slot [7] registered
Apr 12 18:47:48.667921 kernel: acpiphp: Slot [8] registered
Apr 12 18:47:48.667930 kernel: acpiphp: Slot [9] registered
Apr 12 18:47:48.667941 kernel: acpiphp: Slot [10] registered
Apr 12 18:47:48.667951 kernel: acpiphp: Slot [11] registered
Apr 12 18:47:48.667961 kernel: acpiphp: Slot [12] registered
Apr 12 18:47:48.667970 kernel: acpiphp: Slot [13] registered
Apr 12 18:47:48.667980 kernel: acpiphp: Slot [14] registered
Apr 12 18:47:48.667992 kernel: acpiphp: Slot [15] registered
Apr 12 18:47:48.668001 kernel: acpiphp: Slot [16] registered
Apr 12 18:47:48.668010 kernel: acpiphp: Slot [17] registered
Apr 12 18:47:48.668020 kernel: acpiphp: Slot [18] registered
Apr 12 18:47:48.668029 kernel: acpiphp: Slot [19] registered
Apr 12 18:47:48.668038 kernel: acpiphp: Slot [20] registered
Apr 12 18:47:48.668047 kernel: acpiphp: Slot [21] registered
Apr 12 18:47:48.668056 kernel: acpiphp: Slot [22] registered
Apr 12 18:47:48.668066 kernel: acpiphp: Slot [23] registered
Apr 12 18:47:48.668078 kernel: acpiphp: Slot [24] registered
Apr 12 18:47:48.668088 kernel: acpiphp: Slot [25] registered
Apr 12 18:47:48.668097 kernel: acpiphp: Slot [26] registered
Apr 12 18:47:48.668106 kernel: acpiphp: Slot [27] registered
Apr 12 18:47:48.668116 kernel: acpiphp: Slot [28] registered
Apr 12 18:47:48.668125 kernel: acpiphp: Slot [29] registered
Apr 12 18:47:48.668134 kernel: acpiphp: Slot [30] registered
Apr 12 18:47:48.668143 kernel: acpiphp: Slot [31] registered
Apr 12 18:47:48.668153 kernel: PCI host bridge to bus 0000:00
Apr 12 18:47:48.668328 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 12 18:47:48.668448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 12 18:47:48.668586 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 12 18:47:48.668706 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Apr 12 18:47:48.668819 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 12 18:47:48.668923 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:47:48.669156 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 12 18:47:48.669458 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 12 18:47:48.669612 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 12 18:47:48.669736 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Apr 12 18:47:48.669852 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 12 18:47:48.669973 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 12 18:47:48.670147 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 12 18:47:48.670282 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 12 18:47:48.670475 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 12 18:47:48.670601 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 12 18:47:48.670861 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 12 18:47:48.671161 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Apr 12 18:47:48.671460 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 12 18:47:48.671689 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 12 18:47:48.672027 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 12 18:47:48.672160 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 12 18:47:48.672339 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:47:48.672439 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Apr 12 18:47:48.672571 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 12 18:47:48.672691 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 12 18:47:48.672839 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 12 18:47:48.672956 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 12 18:47:48.673084 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 12 18:47:48.673191 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 12 18:47:48.673336 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Apr 12 18:47:48.673450 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Apr 12 18:47:48.673598 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 12 18:47:48.673741 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 12 18:47:48.673888 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 12 18:47:48.673921 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 12 18:47:48.673931 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 12 18:47:48.673942 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 12 18:47:48.673957 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 12 18:47:48.673966 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 12 18:47:48.673975 kernel: iommu: Default domain type: Translated
Apr 12 18:47:48.673984 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 12 18:47:48.674164 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 12 18:47:48.674365 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 12 18:47:48.674496 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 12 18:47:48.674509 kernel: vgaarb: loaded
Apr 12 18:47:48.674532 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:47:48.674544 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:47:48.674553 kernel: PTP clock support registered
Apr 12 18:47:48.674563 kernel: PCI: Using ACPI for IRQ routing
Apr 12 18:47:48.674573 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 12 18:47:48.674589 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 12 18:47:48.674598 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Apr 12 18:47:48.674607 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 12 18:47:48.674624 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 12 18:47:48.674638 kernel: clocksource: Switched to clocksource kvm-clock
Apr 12 18:47:48.674647 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:47:48.674656 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:47:48.674665 kernel: pnp: PnP ACPI init
Apr 12 18:47:48.674828 kernel: pnp 00:02: [dma 2]
Apr 12 18:47:48.674856 kernel: pnp: PnP ACPI: found 6 devices
Apr 12 18:47:48.674866 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 12 18:47:48.674875 kernel: NET: Registered PF_INET protocol family
Apr 12 18:47:48.674884 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:47:48.674909 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:47:48.674919 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:47:48.674928 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:47:48.674937 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:47:48.674948 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:47:48.674957 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:47:48.674974 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:47:48.674987 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:47:48.674996 kernel: NET: Registered PF_XDP protocol family
Apr 12 18:47:48.675134 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 12 18:47:48.675240 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 12 18:47:48.675344 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 12 18:47:48.675426 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Apr 12 18:47:48.675503 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 12 18:47:48.675594 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 12 18:47:48.675690 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 12 18:47:48.675778 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Apr 12 18:47:48.675789 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:47:48.675798 kernel: Initialise system trusted keyrings
Apr 12 18:47:48.675807 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:47:48.675818 kernel: Key type asymmetric registered
Apr 12 18:47:48.675827 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:47:48.675836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:47:48.675844 kernel: io scheduler mq-deadline registered
Apr 12 18:47:48.675853 kernel: io scheduler kyber registered
Apr 12 18:47:48.675862 kernel: io scheduler bfq registered
Apr 12 18:47:48.675871 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 12 18:47:48.675881 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 12 18:47:48.675889 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 12 18:47:48.675898 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 12 18:47:48.675909 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:47:48.675918 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 12 18:47:48.675927 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 12 18:47:48.675936 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 12 18:47:48.675944 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 12 18:47:48.676041 kernel: rtc_cmos 00:05: RTC can wake from S4
Apr 12 18:47:48.676054 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 12 18:47:48.676131 kernel: rtc_cmos 00:05: registered as rtc0
Apr 12 18:47:48.676229 kernel: rtc_cmos 00:05: setting system clock to 2024-04-12T18:47:47 UTC (1712947667)
Apr 12 18:47:48.676377 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 12 18:47:48.676394 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:47:48.676405 kernel: Segment Routing with IPv6
Apr 12 18:47:48.676419 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:47:48.676431 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:47:48.676440 kernel: Key type dns_resolver registered
Apr 12 18:47:48.676449 kernel: IPI shorthand broadcast: enabled
Apr 12 18:47:48.676458 kernel: sched_clock: Marking stable (677542011, 130732023)->(1011627494, -203353460)
Apr 12 18:47:48.676472 kernel: registered taskstats version 1
Apr 12 18:47:48.676483 kernel: Loading compiled-in X.509 certificates
Apr 12 18:47:48.676493 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4'
Apr 12 18:47:48.676503 kernel: Key type .fscrypt registered
Apr 12 18:47:48.676512 kernel: Key type fscrypt-provisioning registered
Apr 12 18:47:48.676521 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:47:48.676530 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:47:48.676539 kernel: ima: No architecture policies found
Apr 12 18:47:48.676549 kernel: Freeing unused kernel image (initmem) memory: 47440K
Apr 12 18:47:48.676558 kernel: Write protecting the kernel read-only data: 28672k
Apr 12 18:47:48.676567 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Apr 12 18:47:48.676576 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K
Apr 12 18:47:48.676585 kernel: Run /init as init process
Apr 12 18:47:48.676593 kernel: with arguments:
Apr 12 18:47:48.676602 kernel: /init
Apr 12 18:47:48.676611 kernel: with environment:
Apr 12 18:47:48.676631 kernel: HOME=/
Apr 12 18:47:48.676643 kernel: TERM=linux
Apr 12 18:47:48.676652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:47:48.676664 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:47:48.676676 systemd[1]: Detected virtualization kvm.
Apr 12 18:47:48.676686 systemd[1]: Detected architecture x86-64.
Apr 12 18:47:48.676695 systemd[1]: Running in initrd.
Apr 12 18:47:48.676704 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:47:48.676715 systemd[1]: Hostname set to .
Apr 12 18:47:48.676725 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:47:48.676735 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:47:48.676744 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:47:48.676753 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:47:48.676763 systemd[1]: Reached target paths.target.
Apr 12 18:47:48.676772 systemd[1]: Reached target slices.target.
Apr 12 18:47:48.676782 systemd[1]: Reached target swap.target.
Apr 12 18:47:48.676792 systemd[1]: Reached target timers.target.
Apr 12 18:47:48.676803 systemd[1]: Listening on iscsid.socket.
Apr 12 18:47:48.676813 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:47:48.676822 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:47:48.676832 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:47:48.676842 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:47:48.676851 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:47:48.676861 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:47:48.676872 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:47:48.676881 systemd[1]: Reached target sockets.target.
Apr 12 18:47:48.676891 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:47:48.676900 systemd[1]: Finished network-cleanup.service.
Apr 12 18:47:48.676910 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:47:48.676919 systemd[1]: Starting systemd-journald.service...
Apr 12 18:47:48.676929 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:47:48.676940 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:47:48.676950 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:47:48.676959 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:47:48.676969 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:47:48.676978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:47:48.676988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:47:48.677002 systemd-journald[198]: Journal started
Apr 12 18:47:48.677069 systemd-journald[198]: Runtime Journal (/run/log/journal/79af1e19303f4ec292a5d7e386b8172f) is 6.0M, max 48.5M, 42.5M free.
Apr 12 18:47:48.637044 systemd-modules-load[199]: Inserted module 'overlay'
Apr 12 18:47:48.739940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:47:48.739982 kernel: Bridge firewalling registered
Apr 12 18:47:48.739998 systemd[1]: Started systemd-journald.service.
Apr 12 18:47:48.740017 kernel: audit: type=1130 audit(1712947668.713:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.740033 kernel: audit: type=1130 audit(1712947668.722:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.740048 kernel: audit: type=1130 audit(1712947668.730:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.686390 systemd-resolved[200]: Positive Trust Anchors:
Apr 12 18:47:48.748672 kernel: audit: type=1130 audit(1712947668.739:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.686406 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:47:48.686446 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:47:48.689865 systemd-resolved[200]: Defaulting to hostname 'linux'.
Apr 12 18:47:48.694951 systemd-modules-load[199]: Inserted module 'br_netfilter'
Apr 12 18:47:48.723469 systemd[1]: Started systemd-resolved.service.
Apr 12 18:47:48.734153 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:47:48.767073 kernel: SCSI subsystem initialized
Apr 12 18:47:48.740457 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:47:48.751662 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:47:48.791686 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 12 18:47:48.791752 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:47:48.791769 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Apr 12 18:47:48.791950 systemd[1]: Finished dracut-cmdline-ask.service.
Apr 12 18:47:48.823020 kernel: audit: type=1130 audit(1712947668.810:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.812108 systemd[1]: Starting dracut-cmdline.service...
Apr 12 18:47:48.832865 dracut-cmdline[217]: dracut-dracut-053
Apr 12 18:47:48.836341 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:47:48.836405 systemd-modules-load[199]: Inserted module 'dm_multipath'
Apr 12 18:47:48.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.837498 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:47:48.848469 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:47:48.858389 kernel: audit: type=1130 audit(1712947668.846:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.870423 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:47:48.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:48.879371 kernel: audit: type=1130 audit(1712947668.871:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.006353 kernel: Loading iSCSI transport class v2.0-870.
Apr 12 18:47:49.040911 kernel: iscsi: registered transport (tcp)
Apr 12 18:47:49.082734 kernel: iscsi: registered transport (qla4xxx)
Apr 12 18:47:49.082826 kernel: QLogic iSCSI HBA Driver
Apr 12 18:47:49.211408 systemd[1]: Finished dracut-cmdline.service.
Apr 12 18:47:49.224992 kernel: audit: type=1130 audit(1712947669.216:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.218105 systemd[1]: Starting dracut-pre-udev.service...
Apr 12 18:47:49.308343 kernel: raid6: avx2x4 gen() 19718 MB/s
Apr 12 18:47:49.337392 kernel: raid6: avx2x4 xor() 5233 MB/s
Apr 12 18:47:49.347578 kernel: raid6: avx2x2 gen() 19855 MB/s
Apr 12 18:47:49.365498 kernel: raid6: avx2x2 xor() 13891 MB/s
Apr 12 18:47:49.381367 kernel: raid6: avx2x1 gen() 15017 MB/s
Apr 12 18:47:49.398362 kernel: raid6: avx2x1 xor() 10458 MB/s
Apr 12 18:47:49.418354 kernel: raid6: sse2x4 gen() 10025 MB/s
Apr 12 18:47:49.439132 kernel: raid6: sse2x4 xor() 4464 MB/s
Apr 12 18:47:49.456364 kernel: raid6: sse2x2 gen() 11267 MB/s
Apr 12 18:47:49.483364 kernel: raid6: sse2x2 xor() 8096 MB/s
Apr 12 18:47:49.500364 kernel: raid6: sse2x1 gen() 6156 MB/s
Apr 12 18:47:49.518250 kernel: raid6: sse2x1 xor() 5861 MB/s
Apr 12 18:47:49.518420 kernel: raid6: using algorithm avx2x2 gen() 19855 MB/s
Apr 12 18:47:49.518454 kernel: raid6: .... xor() 13891 MB/s, rmw enabled
Apr 12 18:47:49.519133 kernel: raid6: using avx2x2 recovery algorithm
Apr 12 18:47:49.542342 kernel: xor: automatically using best checksumming function avx
Apr 12 18:47:49.690375 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Apr 12 18:47:49.712548 systemd[1]: Finished dracut-pre-udev.service.
Apr 12 18:47:49.720533 kernel: audit: type=1130 audit(1712947669.712:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.720000 audit: BPF prog-id=7 op=LOAD
Apr 12 18:47:49.720000 audit: BPF prog-id=8 op=LOAD
Apr 12 18:47:49.721598 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:47:49.750468 systemd-udevd[401]: Using default interface naming scheme 'v252'.
Apr 12 18:47:49.757412 systemd[1]: Started systemd-udevd.service.
Apr 12 18:47:49.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.760457 systemd[1]: Starting dracut-pre-trigger.service...
Apr 12 18:47:49.791875 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 12 18:47:49.835669 systemd[1]: Finished dracut-pre-trigger.service.
Apr 12 18:47:49.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.837852 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:47:49.897987 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:47:49.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:49.980324 kernel: cryptd: max_cpu_qlen set to 1000
Apr 12 18:47:50.007312 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 12 18:47:50.094358 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 12 18:47:50.094455 kernel: GPT:9289727 != 19775487
Apr 12 18:47:50.094468 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 12 18:47:50.094480 kernel: GPT:9289727 != 19775487
Apr 12 18:47:50.094506 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 12 18:47:50.094517 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:47:50.347643 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 12 18:47:50.347761 kernel: AES CTR mode by8 optimization enabled
Apr 12 18:47:50.354855 kernel: libata version 3.00 loaded.
Apr 12 18:47:50.370321 kernel: ata_piix 0000:00:01.1: version 2.13
Apr 12 18:47:50.377326 kernel: scsi host0: ata_piix
Apr 12 18:47:50.390220 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Apr 12 18:47:50.405510 kernel: scsi host1: ata_piix
Apr 12 18:47:50.405727 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Apr 12 18:47:50.405745 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Apr 12 18:47:50.408707 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Apr 12 18:47:50.416417 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465)
Apr 12 18:47:50.421748 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Apr 12 18:47:50.431644 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Apr 12 18:47:50.447485 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:47:50.460608 systemd[1]: Starting disk-uuid.service...
Apr 12 18:47:50.486267 disk-uuid[523]: Primary Header is updated.
Apr 12 18:47:50.486267 disk-uuid[523]: Secondary Entries is updated.
Apr 12 18:47:50.486267 disk-uuid[523]: Secondary Header is updated.
Apr 12 18:47:50.492856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:47:50.565356 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 12 18:47:50.565440 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 12 18:47:50.741623 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 12 18:47:50.741969 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 12 18:47:50.761991 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 12 18:47:51.513503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 12 18:47:51.517678 disk-uuid[524]: The operation has completed successfully.
Apr 12 18:47:51.585310 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 12 18:47:51.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:51.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:51.585447 systemd[1]: Finished disk-uuid.service.
Apr 12 18:47:51.587564 systemd[1]: Starting verity-setup.service...
Apr 12 18:47:51.625331 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 12 18:47:51.686846 systemd[1]: Found device dev-mapper-usr.device.
Apr 12 18:47:51.687962 systemd[1]: Mounting sysusr-usr.mount...
Apr 12 18:47:51.703583 systemd[1]: Finished verity-setup.service.
Apr 12 18:47:51.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:51.859341 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Apr 12 18:47:51.867743 systemd[1]: Mounted sysusr-usr.mount.
Apr 12 18:47:51.868131 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Apr 12 18:47:51.877549 systemd[1]: Starting ignition-setup.service...
Apr 12 18:47:51.878646 systemd[1]: Starting parse-ip-for-networkd.service...
Apr 12 18:47:51.911538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 12 18:47:51.911616 kernel: BTRFS info (device vda6): using free space tree
Apr 12 18:47:51.911634 kernel: BTRFS info (device vda6): has skinny extents
Apr 12 18:47:51.942447 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 12 18:47:51.965035 systemd[1]: Finished ignition-setup.service.
Apr 12 18:47:51.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:51.967436 systemd[1]: Starting ignition-fetch-offline.service...
Apr 12 18:47:52.067941 systemd[1]: Finished parse-ip-for-networkd.service.
Apr 12 18:47:52.084151 kernel: kauditd_printk_skb: 9 callbacks suppressed
Apr 12 18:47:52.084190 kernel: audit: type=1130 audit(1712947672.067:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.090908 kernel: audit: type=1334 audit(1712947672.089:21): prog-id=9 op=LOAD
Apr 12 18:47:52.089000 audit: BPF prog-id=9 op=LOAD
Apr 12 18:47:52.097926 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:47:52.131825 ignition[641]: Ignition 2.14.0
Apr 12 18:47:52.131848 ignition[641]: Stage: fetch-offline
Apr 12 18:47:52.131923 ignition[641]: no configs at "/usr/lib/ignition/base.d"
Apr 12 18:47:52.131936 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:47:52.132075 ignition[641]: parsed url from cmdline: ""
Apr 12 18:47:52.132080 ignition[641]: no config URL provided
Apr 12 18:47:52.132087 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
Apr 12 18:47:52.132096 ignition[641]: no config at "/usr/lib/ignition/user.ign"
Apr 12 18:47:52.132122 ignition[641]: op(1): [started] loading QEMU firmware config module
Apr 12 18:47:52.132129 ignition[641]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 12 18:47:52.169075 ignition[641]: op(1): [finished] loading QEMU firmware config module
Apr 12 18:47:52.194028 systemd-networkd[715]: lo: Link UP
Apr 12 18:47:52.194050 systemd-networkd[715]: lo: Gained carrier
Apr 12 18:47:52.204855 systemd-networkd[715]: Enumeration completed
Apr 12 18:47:52.205260 systemd[1]: Started systemd-networkd.service.
Apr 12 18:47:52.205394 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:47:52.222662 kernel: audit: type=1130 audit(1712947672.211:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.222113 systemd-networkd[715]: eth0: Link UP
Apr 12 18:47:52.222119 systemd-networkd[715]: eth0: Gained carrier
Apr 12 18:47:52.222628 systemd[1]: Reached target network.target.
Apr 12 18:47:52.226899 systemd[1]: Starting iscsiuio.service...
Apr 12 18:47:52.263548 systemd[1]: Started iscsiuio.service.
Apr 12 18:47:52.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.274865 systemd[1]: Starting iscsid.service...
Apr 12 18:47:52.285737 kernel: audit: type=1130 audit(1712947672.271:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.285776 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Apr 12 18:47:52.285776 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Apr 12 18:47:52.285776 iscsid[721]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Apr 12 18:47:52.285776 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Apr 12 18:47:52.285776 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored.
Apr 12 18:47:52.285776 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Apr 12 18:47:52.285776 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Apr 12 18:47:52.312147 kernel: audit: type=1130 audit(1712947672.288:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.285827 systemd[1]: Started iscsid.service.
Apr 12 18:47:52.313965 systemd[1]: Starting dracut-initqueue.service...
Apr 12 18:47:52.332461 systemd[1]: Finished dracut-initqueue.service.
Apr 12 18:47:52.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.335403 systemd[1]: Reached target remote-fs-pre.target.
Apr 12 18:47:52.344150 kernel: audit: type=1130 audit(1712947672.334:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.343919 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:47:52.355143 systemd[1]: Reached target remote-fs.target.
Apr 12 18:47:52.367761 systemd[1]: Starting dracut-pre-mount.service...
Apr 12 18:47:52.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.382455 systemd[1]: Finished dracut-pre-mount.service.
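The iscsid warning above spells out its own fix: it wants an InitiatorName in IQN form (iqn.yyyy-mm.reverse-domain:identifier). A minimal sketch of creating such a file; the IQN value and the staging directory /tmp/iscsi-demo are hypothetical, and on a real host the file lives at /etc/iscsi/initiatorname.iscsi:

```shell
# Create the InitiatorName file iscsid complains about when missing.
# IQN format: iqn.<yyyy-mm>.<reversed-domain>:<identifier> (values here are made up).
mkdir -p /tmp/iscsi-demo
printf 'InitiatorName=iqn.2024-04.io.flatcar:demo-node\n' \
  > /tmp/iscsi-demo/initiatorname.iscsi
cat /tmp/iscsi-demo/initiatorname.iscsi
```

As the log notes, this only matters for software iSCSI (iscsi_tcp, ib_iser) or partial-offload drivers; hardware HBAs like qla4xxx carry their own initiator name.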
Apr 12 18:47:52.399164 kernel: audit: type=1130 audit(1712947672.381:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.388023 ignition[641]: parsing config with SHA512: 9f0a21178d278b340771b881985111422b4d147c4fddacc7bd43e0712dad48a4588e0d0944057bf671b52d0be7c24bac54fe94b086f61e3f0660c5113c7d517b
Apr 12 18:47:52.410470 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 12 18:47:52.463494 unknown[641]: fetched base config from "system"
Apr 12 18:47:52.463516 unknown[641]: fetched user config from "qemu"
Apr 12 18:47:52.464365 ignition[641]: fetch-offline: fetch-offline passed
Apr 12 18:47:52.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.466801 systemd[1]: Finished ignition-fetch-offline.service.
Apr 12 18:47:52.475176 kernel: audit: type=1130 audit(1712947672.468:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.464460 ignition[641]: Ignition finished successfully
Apr 12 18:47:52.469176 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 12 18:47:52.475508 systemd[1]: Starting ignition-kargs.service...
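The fetch-offline stage above first probes /usr/lib/ignition/user.ign ("no config at …") and only then falls back to the QEMU firmware config, which is where the user config was ultimately fetched from. A sketch of what a minimal user.ign could look like; the spec version 2.3.0 is an assumption for this Ignition 2.14.0 build, and the file is written to /tmp here rather than the real search path:

```shell
# Hypothetical minimal Ignition user config: an empty config carrying only
# a spec version is valid JSON that the fetch-offline stage would accept.
cat > /tmp/user.ign <<'EOF'
{ "ignition": { "version": "2.3.0" } }
EOF
cat /tmp/user.ign
```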
Apr 12 18:47:52.491763 ignition[735]: Ignition 2.14.0
Apr 12 18:47:52.491779 ignition[735]: Stage: kargs
Apr 12 18:47:52.491960 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Apr 12 18:47:52.491980 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:47:52.493827 ignition[735]: kargs: kargs passed
Apr 12 18:47:52.493896 ignition[735]: Ignition finished successfully
Apr 12 18:47:52.523832 systemd[1]: Finished ignition-kargs.service.
Apr 12 18:47:52.532507 kernel: audit: type=1130 audit(1712947672.527:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.528942 systemd[1]: Starting ignition-disks.service...
Apr 12 18:47:52.558371 ignition[742]: Ignition 2.14.0
Apr 12 18:47:52.558396 ignition[742]: Stage: disks
Apr 12 18:47:52.558618 ignition[742]: no configs at "/usr/lib/ignition/base.d"
Apr 12 18:47:52.558633 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:47:52.569598 ignition[742]: disks: disks passed
Apr 12 18:47:52.570492 ignition[742]: Ignition finished successfully
Apr 12 18:47:52.574995 systemd[1]: Finished ignition-disks.service.
Apr 12 18:47:52.584789 kernel: audit: type=1130 audit(1712947672.574:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.575272 systemd[1]: Reached target initrd-root-device.target.
Apr 12 18:47:52.585452 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:47:52.589418 systemd[1]: Reached target local-fs.target.
Apr 12 18:47:52.591561 systemd[1]: Reached target sysinit.target.
Apr 12 18:47:52.592619 systemd[1]: Reached target basic.target.
Apr 12 18:47:52.599744 systemd[1]: Starting systemd-fsck-root.service...
Apr 12 18:47:52.642906 systemd-fsck[750]: ROOT: clean, 612/553520 files, 56019/553472 blocks
Apr 12 18:47:52.656642 systemd[1]: Finished systemd-fsck-root.service.
Apr 12 18:47:52.657907 systemd[1]: Mounting sysroot.mount...
Apr 12 18:47:52.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.676848 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Apr 12 18:47:52.679617 systemd[1]: Mounted sysroot.mount.
Apr 12 18:47:52.679890 systemd[1]: Reached target initrd-root-fs.target.
Apr 12 18:47:52.693429 systemd[1]: Mounting sysroot-usr.mount...
Apr 12 18:47:52.694100 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Apr 12 18:47:52.694184 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 12 18:47:52.694249 systemd[1]: Reached target ignition-diskful.target.
Apr 12 18:47:52.708611 systemd[1]: Mounted sysroot-usr.mount.
Apr 12 18:47:52.713494 systemd[1]: Starting initrd-setup-root.service...
Apr 12 18:47:52.722883 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory
Apr 12 18:47:52.723221 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Apr 12 18:47:52.739138 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory
Apr 12 18:47:52.744084 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (762)
Apr 12 18:47:52.744116 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 12 18:47:52.744130 kernel: BTRFS info (device vda6): using free space tree
Apr 12 18:47:52.744143 kernel: BTRFS info (device vda6): has skinny extents
Apr 12 18:47:52.750542 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory
Apr 12 18:47:52.757322 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 12 18:47:52.769003 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Apr 12 18:47:52.832616 systemd[1]: Finished initrd-setup-root.service.
Apr 12 18:47:52.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.833825 systemd[1]: Starting ignition-mount.service...
Apr 12 18:47:52.841000 systemd[1]: Starting sysroot-boot.service...
Apr 12 18:47:52.852128 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Apr 12 18:47:52.852274 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Apr 12 18:47:52.890737 ignition[822]: INFO : Ignition 2.14.0
Apr 12 18:47:52.890737 ignition[822]: INFO : Stage: mount
Apr 12 18:47:52.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.893213 systemd[1]: Finished sysroot-boot.service.
Apr 12 18:47:52.904917 ignition[822]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 12 18:47:52.904917 ignition[822]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:47:52.904917 ignition[822]: INFO : mount: mount passed
Apr 12 18:47:52.904917 ignition[822]: INFO : Ignition finished successfully
Apr 12 18:47:52.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:52.900978 systemd[1]: Finished ignition-mount.service.
Apr 12 18:47:52.907176 systemd[1]: Starting ignition-files.service...
Apr 12 18:47:52.927075 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Apr 12 18:47:52.945333 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (831)
Apr 12 18:47:52.947912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 12 18:47:52.947965 kernel: BTRFS info (device vda6): using free space tree
Apr 12 18:47:52.947978 kernel: BTRFS info (device vda6): has skinny extents
Apr 12 18:47:52.963231 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Apr 12 18:47:52.982226 ignition[850]: INFO : Ignition 2.14.0
Apr 12 18:47:52.982226 ignition[850]: INFO : Stage: files
Apr 12 18:47:52.985512 ignition[850]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 12 18:47:52.985512 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:47:52.993491 ignition[850]: DEBUG : files: compiled without relabeling support, skipping
Apr 12 18:47:53.000410 ignition[850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 12 18:47:53.000410 ignition[850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 12 18:47:53.013130 ignition[850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 12 18:47:53.015130 ignition[850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 12 18:47:53.018469 unknown[850]: wrote ssh authorized keys file for user: core
Apr 12 18:47:53.020012 ignition[850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 12 18:47:53.026401 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 12 18:47:53.026401 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 12 18:47:53.157641 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 12 18:47:53.308769 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 12 18:47:53.312342 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 18:47:53.312342 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Apr 12 18:47:53.523625 systemd-networkd[715]: eth0: Gained IPv6LL
Apr 12 18:47:53.712970 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 12 18:47:53.899754 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Apr 12 18:47:53.899754 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 18:47:53.906554 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Apr 12 18:47:53.906554 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Apr 12 18:47:54.172504 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 12 18:47:54.490514 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Apr 12 18:47:54.490514 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Apr 12 18:47:54.490514 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:47:54.529189 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:47:54.529189 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:47:54.529189 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubectl: attempt #1
Apr 12 18:47:54.631580 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Apr 12 18:47:55.201099 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: a2de71807eb4c41f4d70e5c47fac72ecf3c74984be6c08be0597fc58621baeeddc1b5cc6431ab007eee9bd0a98f8628dd21512b06daaeccfac5837e9792a98a7
Apr 12 18:47:55.201099 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:47:55.201099 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:47:55.201099 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubelet: attempt #1
Apr 12 18:47:55.258303 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Apr 12 18:47:56.602944 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: d3fef1d4b99415179ecb94d4de953bddb74c0fb0f798265829b899bb031e2ab8c2b60037b79a66405a9b102d3db0d90e9257595f4b11660356de0e2e63744cd7
Apr 12 18:47:56.607219 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:47:56.607219 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:47:56.607219 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubeadm: attempt #1
Apr 12 18:47:56.664157 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Apr 12 18:47:56.965833 ignition[850]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 4261cb0319688a0557b3052cce8df9d754abc38d5fc8e0eeeb63a85a2194895fdca5bad464f8516459ed7b1764d7bbb2304f5f434d42bb35f38764b4b00ce663
Apr 12 18:47:56.965833 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:47:56.994802 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:47:56.994802 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 12 18:47:57.283903 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 12 18:47:57.440753 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:47:57.440753 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12
18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:47:57.464116 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(12): [started] processing unit "coreos-metadata.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(12): op(13): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(12): op(13): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service" Apr 12 18:47:57.464116 ignition[850]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Apr 12 18:47:57.546535 ignition[850]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:47:57.628512 
ignition[850]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:47:57.628512 ignition[850]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Apr 12 18:47:57.628512 ignition[850]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:47:57.628512 ignition[850]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:47:57.628512 ignition[850]: INFO : files: files passed Apr 12 18:47:57.628512 ignition[850]: INFO : Ignition finished successfully Apr 12 18:47:57.654902 kernel: kauditd_printk_skb: 4 callbacks suppressed Apr 12 18:47:57.654944 kernel: audit: type=1130 audit(1712947677.635:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:47:57.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:47:57.631721 systemd[1]: Finished ignition-files.service. Apr 12 18:47:57.648212 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 18:47:57.650069 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:47:57.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:47:57.653747 systemd[1]: Starting ignition-quench.service... 
Apr 12 18:47:57.675149 initrd-setup-root-after-ignition[876]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Apr 12 18:47:57.661891 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:47:57.675993 initrd-setup-root-after-ignition[878]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:47:57.662020 systemd[1]: Finished ignition-quench.service.
Apr 12 18:47:57.675036 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:47:57.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.690171 kernel: audit: type=1130 audit(1712947677.665:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.690432 kernel: audit: type=1131 audit(1712947677.665:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.692177 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:47:57.698744 kernel: audit: type=1130 audit(1712947677.691:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.699874 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:47:57.734196 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:47:57.734356 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:47:57.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.741850 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:47:57.759570 kernel: audit: type=1130 audit(1712947677.741:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.759616 kernel: audit: type=1131 audit(1712947677.741:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.753281 systemd[1]: Reached target initrd.target.
Apr 12 18:47:57.759393 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:47:57.765168 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:47:57.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.786461 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:47:57.794828 kernel: audit: type=1130 audit(1712947677.785:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.796280 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:47:57.820893 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:47:57.827459 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:47:57.834167 systemd[1]: Stopped target timers.target.
Apr 12 18:47:57.837862 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:47:57.839189 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 18:47:57.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.854738 systemd[1]: Stopped target initrd.target.
Apr 12 18:47:57.869399 kernel: audit: type=1131 audit(1712947677.853:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.876797 systemd[1]: Stopped target basic.target.
Apr 12 18:47:57.880471 systemd[1]: Stopped target ignition-complete.target.
Apr 12 18:47:57.882903 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 18:47:57.885642 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 18:47:57.891400 systemd[1]: Stopped target remote-fs.target.
Apr 12 18:47:57.893771 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 18:47:57.896306 systemd[1]: Stopped target sysinit.target.
Apr 12 18:47:57.905738 systemd[1]: Stopped target local-fs.target.
Apr 12 18:47:57.908007 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 18:47:57.917716 systemd[1]: Stopped target swap.target.
Apr 12 18:47:57.922024 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 18:47:57.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.922241 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 18:47:57.936491 systemd[1]: Stopped target cryptsetup.target.
Apr 12 18:47:57.962771 kernel: audit: type=1131 audit(1712947677.931:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.962802 kernel: audit: type=1131 audit(1712947677.936:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.936642 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 18:47:57.936817 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 18:47:57.938584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 18:47:57.943525 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 18:47:57.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:57.991949 systemd[1]: Stopped target paths.target.
Apr 12 18:47:58.007699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 18:47:58.020460 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 18:47:58.040407 systemd[1]: Stopped target slices.target.
Apr 12 18:47:58.047826 systemd[1]: Stopped target sockets.target.
Apr 12 18:47:58.057429 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 18:47:58.058948 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 18:47:58.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.062653 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 18:47:58.064337 systemd[1]: Stopped ignition-files.service.
Apr 12 18:47:58.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.067927 systemd[1]: Stopping ignition-mount.service...
Apr 12 18:47:58.071425 systemd[1]: Stopping iscsid.service...
Apr 12 18:47:58.076456 systemd[1]: Stopping sysroot-boot.service...
Apr 12 18:47:58.078106 iscsid[721]: iscsid shutting down.
Apr 12 18:47:58.081348 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 18:47:58.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.081620 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 18:47:58.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.106119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 18:47:58.106366 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 18:47:58.112896 systemd[1]: iscsid.service: Deactivated successfully.
Apr 12 18:47:58.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.113033 systemd[1]: Stopped iscsid.service.
Apr 12 18:47:58.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.172180 ignition[891]: INFO : Ignition 2.14.0
Apr 12 18:47:58.172180 ignition[891]: INFO : Stage: umount
Apr 12 18:47:58.172180 ignition[891]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 12 18:47:58.172180 ignition[891]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 12 18:47:58.172180 ignition[891]: INFO : umount: umount passed
Apr 12 18:47:58.172180 ignition[891]: INFO : Ignition finished successfully
Apr 12 18:47:58.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.126694 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 12 18:47:58.128320 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 18:47:58.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.128416 systemd[1]: Finished initrd-cleanup.service.
Apr 12 18:47:58.138118 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 18:47:58.138221 systemd[1]: Stopped sysroot-boot.service.
Apr 12 18:47:58.146028 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 18:47:58.146098 systemd[1]: Closed iscsid.socket.
Apr 12 18:47:58.150371 systemd[1]: Stopping iscsiuio.service...
Apr 12 18:47:58.165943 systemd[1]: iscsiuio.service: Deactivated successfully.
Apr 12 18:47:58.166099 systemd[1]: Stopped iscsiuio.service.
Apr 12 18:47:58.167308 systemd[1]: Stopped target network.target.
Apr 12 18:47:58.171168 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 18:47:58.171227 systemd[1]: Closed iscsiuio.socket.
Apr 12 18:47:58.172811 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:47:58.175734 systemd[1]: Stopping systemd-resolved.service...
Apr 12 18:47:58.177129 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 18:47:58.177233 systemd[1]: Stopped ignition-mount.service.
Apr 12 18:47:58.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.178459 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 18:47:58.178512 systemd[1]: Stopped ignition-disks.service.
Apr 12 18:47:58.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.180352 systemd-networkd[715]: eth0: DHCPv6 lease lost
Apr 12 18:47:58.238000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 18:47:58.180398 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 18:47:58.180457 systemd[1]: Stopped ignition-kargs.service.
Apr 12 18:47:58.181718 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 18:47:58.181773 systemd[1]: Stopped ignition-setup.service.
Apr 12 18:47:58.187595 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 18:47:58.187720 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 18:47:58.193112 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:47:58.193242 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:47:58.197668 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 18:47:58.197783 systemd[1]: Stopped systemd-resolved.service.
Apr 12 18:47:58.204537 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 18:47:58.204602 systemd[1]: Closed systemd-networkd.socket.
Apr 12 18:47:58.219309 systemd[1]: Stopping network-cleanup.service...
Apr 12 18:47:58.222352 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 18:47:58.222442 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 18:47:58.275000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 18:47:58.230436 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:47:58.230521 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:47:58.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.278056 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 18:47:58.278153 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 18:47:58.287116 systemd[1]: Stopping systemd-udevd.service...
Apr 12 18:47:58.291127 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 12 18:47:58.302251 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 18:47:58.302486 systemd[1]: Stopped systemd-udevd.service.
Apr 12 18:47:58.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.309099 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 18:47:58.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.310507 systemd[1]: Stopped network-cleanup.service.
Apr 12 18:47:58.314004 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 18:47:58.314049 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 18:47:58.314119 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 18:47:58.314150 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 18:47:58.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.314184 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 18:47:58.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.314223 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 18:47:58.314303 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 18:47:58.314340 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 18:47:58.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:58.314402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 18:47:58.314436 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 18:47:58.324747 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 18:47:58.333160 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 12 18:47:58.333264 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Apr 12 18:47:58.336562 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 18:47:58.336662 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 18:47:58.338322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 18:47:58.338385 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 18:47:58.349914 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 12 18:47:58.350585 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 18:47:58.351353 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 18:47:58.361229 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 18:47:58.386574 systemd[1]: Starting initrd-switch-root.service...
Apr 12 18:47:58.421575 systemd[1]: Switching root.
Apr 12 18:47:58.460022 systemd-journald[198]: Journal stopped
Apr 12 18:48:04.289197 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Apr 12 18:48:04.289270 kernel: SELinux: Class mctp_socket not defined in policy.
Apr 12 18:48:04.289318 kernel: SELinux: Class anon_inode not defined in policy.
Apr 12 18:48:04.289335 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:48:04.289355 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:48:04.289367 kernel: SELinux: policy capability open_perms=1
Apr 12 18:48:04.289379 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:48:04.289390 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:48:04.289406 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:48:04.289418 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:48:04.289429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:48:04.289441 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:48:04.289455 systemd[1]: Successfully loaded SELinux policy in 93.615ms.
Apr 12 18:48:04.289477 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.028ms.
Apr 12 18:48:04.289491 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:48:04.289504 systemd[1]: Detected virtualization kvm.
Apr 12 18:48:04.289516 systemd[1]: Detected architecture x86-64.
Apr 12 18:48:04.289529 systemd[1]: Detected first boot.
Apr 12 18:48:04.289541 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:48:04.289554 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:48:04.289568 systemd[1]: Populated /etc with preset unit settings.
Apr 12 18:48:04.289581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:48:04.289598 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:48:04.289613 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:48:04.289629 kernel: kauditd_printk_skb: 48 callbacks suppressed
Apr 12 18:48:04.289641 kernel: audit: type=1334 audit(1712947683.578:85): prog-id=12 op=LOAD
Apr 12 18:48:04.289653 kernel: audit: type=1334 audit(1712947683.578:86): prog-id=3 op=UNLOAD
Apr 12 18:48:04.289665 kernel: audit: type=1334 audit(1712947683.587:87): prog-id=13 op=LOAD
Apr 12 18:48:04.289678 kernel: audit: type=1334 audit(1712947683.589:88): prog-id=14 op=LOAD
Apr 12 18:48:04.289689 kernel: audit: type=1334 audit(1712947683.589:89): prog-id=4 op=UNLOAD
Apr 12 18:48:04.289701 kernel: audit: type=1334 audit(1712947683.589:90): prog-id=5 op=UNLOAD
Apr 12 18:48:04.289712 kernel: audit: type=1334 audit(1712947683.595:91): prog-id=15 op=LOAD
Apr 12 18:48:04.289724 kernel: audit: type=1334 audit(1712947683.595:92): prog-id=12 op=UNLOAD
Apr 12 18:48:04.289736 kernel: audit: type=1334 audit(1712947683.603:93): prog-id=16 op=LOAD
Apr 12 18:48:04.289748 kernel: audit: type=1334 audit(1712947683.605:94): prog-id=17 op=LOAD
Apr 12 18:48:04.289760 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 12 18:48:04.289772 systemd[1]: Stopped initrd-switch-root.service.
Apr 12 18:48:04.289786 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:48:04.289799 systemd[1]: Created slice system-addon\x2dconfig.slice.
Apr 12 18:48:04.289812 systemd[1]: Created slice system-addon\x2drun.slice.
Apr 12 18:48:04.289825 systemd[1]: Created slice system-getty.slice.
Apr 12 18:48:04.289838 systemd[1]: Created slice system-modprobe.slice.
Apr 12 18:48:04.289851 systemd[1]: Created slice system-serial\x2dgetty.slice.
Apr 12 18:48:04.289863 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Apr 12 18:48:04.289878 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Apr 12 18:48:04.289890 systemd[1]: Created slice user.slice.
Apr 12 18:48:04.289915 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:48:04.289929 systemd[1]: Started systemd-ask-password-wall.path.
Apr 12 18:48:04.289941 systemd[1]: Set up automount boot.automount.
Apr 12 18:48:04.289954 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Apr 12 18:48:04.289966 systemd[1]: Stopped target initrd-switch-root.target.
Apr 12 18:48:04.289978 systemd[1]: Stopped target initrd-fs.target.
Apr 12 18:48:04.289991 systemd[1]: Stopped target initrd-root-fs.target.
Apr 12 18:48:04.290003 systemd[1]: Reached target integritysetup.target.
Apr 12 18:48:04.290018 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:48:04.290030 systemd[1]: Reached target remote-fs.target.
Apr 12 18:48:04.290043 systemd[1]: Reached target slices.target.
Apr 12 18:48:04.290056 systemd[1]: Reached target swap.target.
Apr 12 18:48:04.290068 systemd[1]: Reached target torcx.target.
Apr 12 18:48:04.290081 systemd[1]: Reached target veritysetup.target.
Apr 12 18:48:04.290094 systemd[1]: Listening on systemd-coredump.socket.
Apr 12 18:48:04.290106 systemd[1]: Listening on systemd-initctl.socket.
Apr 12 18:48:04.290118 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:48:04.290132 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:48:04.290145 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:48:04.290158 systemd[1]: Listening on systemd-userdbd.socket.
Apr 12 18:48:04.290170 systemd[1]: Mounting dev-hugepages.mount...
Apr 12 18:48:04.290183 systemd[1]: Mounting dev-mqueue.mount...
Apr 12 18:48:04.290199 systemd[1]: Mounting media.mount...
Apr 12 18:48:04.290212 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:48:04.290224 systemd[1]: Mounting sys-kernel-debug.mount...
Apr 12 18:48:04.290237 systemd[1]: Mounting sys-kernel-tracing.mount...
Apr 12 18:48:04.290252 systemd[1]: Mounting tmp.mount...
Apr 12 18:48:04.290264 systemd[1]: Starting flatcar-tmpfiles.service...
Apr 12 18:48:04.290277 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Apr 12 18:48:04.290301 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:48:04.290314 systemd[1]: Starting modprobe@configfs.service...
Apr 12 18:48:04.290327 systemd[1]: Starting modprobe@dm_mod.service...
Apr 12 18:48:04.290340 systemd[1]: Starting modprobe@drm.service...
Apr 12 18:48:04.290353 systemd[1]: Starting modprobe@efi_pstore.service...
Apr 12 18:48:04.290365 systemd[1]: Starting modprobe@fuse.service...
Apr 12 18:48:04.290380 systemd[1]: Starting modprobe@loop.service...
Apr 12 18:48:04.290393 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 12 18:48:04.290406 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 12 18:48:04.290418 systemd[1]: Stopped systemd-fsck-root.service.
Apr 12 18:48:04.290431 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 12 18:48:04.290444 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 12 18:48:04.290457 systemd[1]: Stopped systemd-journald.service.
Apr 12 18:48:04.290469 systemd[1]: systemd-journald.service: Consumed 1.015s CPU time.
Apr 12 18:48:04.290481 kernel: loop: module loaded
Apr 12 18:48:04.290494 kernel: fuse: init (API version 7.34)
Apr 12 18:48:04.290506 systemd[1]: Starting systemd-journald.service...
Apr 12 18:48:04.290518 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:48:04.290531 systemd[1]: Starting systemd-network-generator.service...
Apr 12 18:48:04.290543 systemd[1]: Starting systemd-remount-fs.service...
Apr 12 18:48:04.290557 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:48:04.290569 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 12 18:48:04.290582 systemd[1]: Stopped verity-setup.service.
Apr 12 18:48:04.290595 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:48:04.290609 systemd[1]: Mounted dev-hugepages.mount.
Apr 12 18:48:04.290621 systemd[1]: Mounted dev-mqueue.mount.
Apr 12 18:48:04.290634 systemd[1]: Mounted media.mount.
Apr 12 18:48:04.290648 systemd[1]: Mounted sys-kernel-debug.mount.
Apr 12 18:48:04.290660 systemd[1]: Mounted sys-kernel-tracing.mount.
Apr 12 18:48:04.290673 systemd[1]: Mounted tmp.mount.
Apr 12 18:48:04.290692 systemd[1]: Finished flatcar-tmpfiles.service.
Apr 12 18:48:04.290713 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:48:04.290728 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 12 18:48:04.290742 systemd[1]: Finished modprobe@configfs.service.
Apr 12 18:48:04.290761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:48:04.290775 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:48:04.290790 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:48:04.290804 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:48:04.290821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:48:04.290836 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:48:04.290856 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:48:04.290873 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:48:04.290891 systemd-journald[1046]: Journal started
Apr 12 18:48:04.290965 systemd-journald[1046]: Runtime Journal (/run/log/journal/79af1e19303f4ec292a5d7e386b8172f) is 6.0M, max 48.5M, 42.5M free.
Apr 12 18:47:58.638000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 12 18:47:59.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:47:59.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:47:59.217000 audit: BPF prog-id=10 op=LOAD
Apr 12 18:47:59.217000 audit: BPF prog-id=10 op=UNLOAD
Apr 12 18:47:59.217000 audit: BPF prog-id=11 op=LOAD
Apr 12 18:47:59.217000 audit: BPF prog-id=11 op=UNLOAD
Apr 12 18:47:59.367000 audit[957]: AVC avc: denied { associate } for pid=957 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:47:59.367000 audit[957]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=940 pid=957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:47:59.367000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:47:59.370000 audit[957]: AVC avc: denied { associate } for pid=957 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:47:59.370000 audit[957]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=940 pid=957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:47:59.370000 audit: CWD cwd="/"
Apr 12 18:47:59.370000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:47:59.370000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:47:59.370000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:48:03.578000 audit: BPF prog-id=12 op=LOAD
Apr 12 18:48:03.578000 audit: BPF prog-id=3 op=UNLOAD
Apr 12 18:48:03.587000 audit: BPF prog-id=13 op=LOAD
Apr 12 18:48:03.589000 audit: BPF prog-id=14 op=LOAD
Apr 12 18:48:03.589000 audit: BPF prog-id=4 op=UNLOAD
Apr 12 18:48:03.589000 audit: BPF prog-id=5 op=UNLOAD
Apr 12 18:48:03.595000 audit: BPF prog-id=15 op=LOAD
Apr 12 18:48:03.595000 audit: BPF prog-id=12 op=UNLOAD
Apr 12 18:48:03.603000 audit: BPF prog-id=16 op=LOAD
Apr 12 18:48:03.605000 audit: BPF prog-id=17 op=LOAD
Apr 12 18:48:03.612000 audit: BPF prog-id=13 op=UNLOAD
Apr 12 18:48:03.612000 audit: BPF prog-id=14 op=UNLOAD
Apr 12 18:48:03.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:03.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:03.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:03.693000 audit: BPF prog-id=15 op=UNLOAD
Apr 12 18:48:04.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.137000 audit: BPF prog-id=18 op=LOAD
Apr 12 18:48:04.142000 audit: BPF prog-id=19 op=LOAD
Apr 12 18:48:04.157000 audit: BPF prog-id=20 op=LOAD
Apr 12 18:48:04.158000 audit: BPF prog-id=16 op=UNLOAD
Apr 12 18:48:04.158000 audit: BPF prog-id=17 op=UNLOAD
Apr 12 18:48:04.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.287000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:48:04.287000 audit[1046]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe5c4a37b0 a2=4000 a3=7ffe5c4a384c items=0 ppid=1 pid=1046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:48:04.287000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:48:03.566357 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:47:59.361449 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:48:03.566375 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Apr 12 18:47:59.362418 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:48:03.613804 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 12 18:47:59.362448 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:48:03.614300 systemd[1]: systemd-journald.service: Consumed 1.015s CPU time.
Apr 12 18:47:59.362497 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Apr 12 18:47:59.362512 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="skipped missing lower profile" missing profile=oem
Apr 12 18:47:59.362562 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Apr 12 18:47:59.362581 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Apr 12 18:47:59.362889 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Apr 12 18:47:59.362947 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:47:59.362968 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:47:59.367380 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Apr 12 18:47:59.367440 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Apr 12 18:48:04.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:47:59.367473 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3
Apr 12 18:47:59.367495 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Apr 12 18:47:59.367526 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3
Apr 12 18:47:59.367546 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:47:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Apr 12 18:48:02.884185 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:48:02Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:48:02.889394 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:48:02Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:48:02.889594 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:48:02Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:48:02.889843 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:48:02Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:48:02.889924 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:48:02Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Apr 12 18:48:02.890021 /usr/lib/systemd/system-generators/torcx-generator[957]: time="2024-04-12T18:48:02Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Apr 12 18:48:04.296477 systemd[1]: Started systemd-journald.service.
Apr 12 18:48:04.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.297997 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:48:04.298187 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:48:04.299921 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:48:04.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.301620 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:48:04.303176 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:48:04.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.304573 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:48:04.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.306641 systemd[1]: Reached target network-pre.target.
Apr 12 18:48:04.308929 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:48:04.315744 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:48:04.317650 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:48:04.321530 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:48:04.327765 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 18:48:04.330036 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:48:04.332127 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:48:04.343150 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:48:04.344727 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:48:04.356348 systemd-journald[1046]: Time spent on flushing to /var/log/journal/79af1e19303f4ec292a5d7e386b8172f is 36.091ms for 1165 entries.
Apr 12 18:48:04.356348 systemd-journald[1046]: System Journal (/var/log/journal/79af1e19303f4ec292a5d7e386b8172f) is 8.0M, max 195.6M, 187.6M free.
Apr 12 18:48:04.443536 systemd-journald[1046]: Received client request to flush runtime journal.
Apr 12 18:48:04.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.350047 systemd[1]: Starting systemd-sysusers.service...
Apr 12 18:48:04.354558 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 18:48:04.452494 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 12 18:48:04.366997 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 18:48:04.368612 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 18:48:04.383322 systemd[1]: Finished systemd-random-seed.service.
Apr 12 18:48:04.387042 systemd[1]: Reached target first-boot-complete.target.
Apr 12 18:48:04.406770 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:48:04.419740 systemd[1]: Finished systemd-sysusers.service.
Apr 12 18:48:04.431524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:48:04.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:04.455593 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 18:48:04.526249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:48:04.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:05.280386 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 18:48:05.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:05.284000 audit: BPF prog-id=21 op=LOAD
Apr 12 18:48:05.288000 audit: BPF prog-id=22 op=LOAD
Apr 12 18:48:05.289000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 18:48:05.290000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 18:48:05.300073 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:48:05.360127 systemd-udevd[1065]: Using default interface naming scheme 'v252'.
Apr 12 18:48:05.449398 systemd[1]: Started systemd-udevd.service.
Apr 12 18:48:05.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:05.471000 audit: BPF prog-id=23 op=LOAD
Apr 12 18:48:05.473193 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:48:05.489000 audit: BPF prog-id=24 op=LOAD
Apr 12 18:48:05.496000 audit: BPF prog-id=25 op=LOAD
Apr 12 18:48:05.497000 audit: BPF prog-id=26 op=LOAD
Apr 12 18:48:05.509198 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:48:05.596031 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Apr 12 18:48:05.655323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 12 18:48:05.679031 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:48:05.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:48:05.682821 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:48:05.697321 kernel: ACPI: button: Power Button [PWRF]
Apr 12 18:48:05.733000 audit[1068]: AVC avc: denied { confidentiality } for pid=1068 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:48:05.733000 audit[1068]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a0eed02410 a1=32194 a2=7f04755fbbc5 a3=5 items=108 ppid=1065 pid=1068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:48:05.733000 audit: CWD cwd="/"
Apr 12 18:48:05.733000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:48:05.733000 audit: PATH item=1 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:48:05.733000 audit: PATH item=2 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:48:05.733000 audit: PATH item=3 name=(null) inode=15494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:48:05.733000 audit: PATH item=4 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:48:05.733000 audit: PATH item=5 name=(null) inode=15495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:48:05.733000 audit: PATH item=6 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=7 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=8 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=9 name=(null) inode=15497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=10 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=11 name=(null) inode=15498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=12 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=13 name=(null) inode=15499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=14 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=15 
name=(null) inode=15500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=16 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=17 name=(null) inode=15501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=18 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=19 name=(null) inode=15502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=20 name=(null) inode=15502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=21 name=(null) inode=15503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=22 name=(null) inode=15502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=23 name=(null) inode=15504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=24 name=(null) inode=15502 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=25 name=(null) inode=15505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=26 name=(null) inode=15502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=27 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=28 name=(null) inode=15502 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=29 name=(null) inode=15507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=30 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=31 name=(null) inode=15508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=32 name=(null) inode=15508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=33 name=(null) inode=15509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=34 name=(null) inode=15508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=35 name=(null) inode=15510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=36 name=(null) inode=15508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=37 name=(null) inode=15511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=38 name=(null) inode=15508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=39 name=(null) inode=15512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=40 name=(null) inode=15508 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=41 name=(null) inode=15513 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=42 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=43 name=(null) inode=15514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=44 name=(null) inode=15514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=45 name=(null) inode=15515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=46 name=(null) inode=15514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=47 name=(null) inode=15516 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=48 name=(null) inode=15514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=49 name=(null) inode=15517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=50 name=(null) inode=15514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=51 name=(null) inode=15518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=52 name=(null) inode=15514 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=53 name=(null) inode=15519 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=55 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=56 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=57 name=(null) inode=15521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=58 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=59 name=(null) inode=15522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=60 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:48:05.733000 audit: PATH item=61 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=62 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=63 name=(null) inode=15524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=64 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=65 name=(null) inode=15525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=66 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=67 name=(null) inode=15526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=68 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=69 name=(null) inode=15527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=70 
name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=71 name=(null) inode=15528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=72 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=73 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=74 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=75 name=(null) inode=15530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=76 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=77 name=(null) inode=15531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=78 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=79 name=(null) inode=15532 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=80 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=81 name=(null) inode=15533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=82 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=83 name=(null) inode=15534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=84 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=85 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=86 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=87 name=(null) inode=15536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=88 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=89 name=(null) inode=15537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=90 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=91 name=(null) inode=15538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=92 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=93 name=(null) inode=15539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=94 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=95 name=(null) inode=15540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=96 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=97 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=98 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=99 name=(null) inode=15542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=100 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=101 name=(null) inode=15543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=102 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=103 name=(null) inode=15544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=104 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=105 name=(null) inode=15545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=106 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PATH item=107 name=(null) inode=15546 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:48:05.733000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:48:05.807776 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 12 18:48:05.811319 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Apr 12 18:48:05.834328 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:48:05.836859 systemd-networkd[1083]: lo: Link UP Apr 12 18:48:05.836885 systemd-networkd[1083]: lo: Gained carrier Apr 12 18:48:05.837416 systemd-networkd[1083]: Enumeration completed Apr 12 18:48:05.837545 systemd[1]: Started systemd-networkd.service. Apr 12 18:48:05.837711 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:48:05.841455 systemd-networkd[1083]: eth0: Link UP Apr 12 18:48:05.841473 systemd-networkd[1083]: eth0: Gained carrier Apr 12 18:48:05.890503 systemd-networkd[1083]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:48:05.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:06.017004 kernel: kvm: Nested Virtualization enabled Apr 12 18:48:06.017160 kernel: SVM: kvm: Nested Paging enabled Apr 12 18:48:06.017187 kernel: SVM: Virtual VMLOAD VMSAVE supported Apr 12 18:48:06.017905 kernel: SVM: Virtual GIF supported Apr 12 18:48:06.157969 kernel: EDAC MC: Ver: 3.0.0 Apr 12 18:48:06.199972 systemd[1]: Finished systemd-udev-settle.service. 
Apr 12 18:48:06.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:06.203162 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:48:06.230611 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:48:06.268972 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:48:06.270603 systemd[1]: Reached target cryptsetup.target. Apr 12 18:48:06.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:06.278114 systemd[1]: Starting lvm2-activation.service... Apr 12 18:48:06.294917 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:48:06.332846 systemd[1]: Finished lvm2-activation.service. Apr 12 18:48:06.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:06.334390 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:48:06.335671 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:48:06.335703 systemd[1]: Reached target local-fs.target. Apr 12 18:48:06.338140 systemd[1]: Reached target machines.target. Apr 12 18:48:06.344261 systemd[1]: Starting ldconfig.service... Apr 12 18:48:06.347844 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Apr 12 18:48:06.347942 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:48:06.349727 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:48:06.358462 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:48:06.368504 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:48:06.371303 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:48:06.371370 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:48:06.374312 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:48:06.376246 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1103 (bootctl) Apr 12 18:48:06.379861 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:48:06.401079 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:48:06.404767 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:48:06.407630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:48:06.407956 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:48:06.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:06.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:48:06.609278 systemd-fsck[1112]: fsck.fat 4.2 (2021-01-31) Apr 12 18:48:06.609278 systemd-fsck[1112]: /dev/vda1: 789 files, 119240/258078 clusters Apr 12 18:48:06.601962 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 18:48:06.609975 systemd[1]: Mounting boot.mount... Apr 12 18:48:07.582609 systemd[1]: Mounted boot.mount. Apr 12 18:48:07.615193 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:48:07.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:07.754176 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 18:48:07.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:07.761231 systemd[1]: Starting audit-rules.service... Apr 12 18:48:07.769750 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:48:07.777314 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:48:07.794000 audit: BPF prog-id=27 op=LOAD Apr 12 18:48:07.796259 systemd[1]: Starting systemd-resolved.service... Apr 12 18:48:07.798131 systemd-networkd[1083]: eth0: Gained IPv6LL Apr 12 18:48:07.804000 audit: BPF prog-id=28 op=LOAD Apr 12 18:48:07.807278 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:48:07.828572 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:48:07.830478 systemd[1]: Finished clean-ca-certificates.service. Apr 12 18:48:07.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:48:07.832225 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:48:07.835000 audit[1129]: SYSTEM_BOOT pid=1129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:48:07.850636 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:48:07.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:48:07.880000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:48:07.880000 audit[1135]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0ec5c380 a2=420 a3=0 items=0 ppid=1115 pid=1135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:48:07.880000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:48:07.881986 augenrules[1135]: No rules Apr 12 18:48:07.883235 systemd[1]: Finished audit-rules.service. Apr 12 18:48:07.884772 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 18:48:07.953230 systemd-resolved[1124]: Positive Trust Anchors: Apr 12 18:48:07.953250 systemd-resolved[1124]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:48:07.953285 systemd-resolved[1124]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:48:07.955221 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:48:07.970797 systemd[1]: Reached target time-set.target. Apr 12 18:48:09.059313 systemd-timesyncd[1127]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 12 18:48:09.059424 systemd-timesyncd[1127]: Initial clock synchronization to Fri 2024-04-12 18:48:09.059091 UTC. Apr 12 18:48:09.107277 systemd-resolved[1124]: Defaulting to hostname 'linux'. Apr 12 18:48:09.112650 systemd[1]: Started systemd-resolved.service. Apr 12 18:48:09.116187 systemd[1]: Reached target network.target. Apr 12 18:48:09.121625 systemd[1]: Reached target nss-lookup.target. Apr 12 18:48:09.258561 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:48:09.276361 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:48:09.305752 ldconfig[1102]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:48:09.341163 systemd[1]: Finished ldconfig.service. Apr 12 18:48:09.363695 systemd[1]: Starting systemd-update-done.service... Apr 12 18:48:09.385387 systemd[1]: Finished systemd-update-done.service. Apr 12 18:48:09.386945 systemd[1]: Reached target sysinit.target. Apr 12 18:48:09.388183 systemd[1]: Started motdgen.path. Apr 12 18:48:09.389480 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Apr 12 18:48:09.391475 systemd[1]: Started logrotate.timer. Apr 12 18:48:09.392771 systemd[1]: Started mdadm.timer. Apr 12 18:48:09.393807 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:48:09.395153 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:48:09.395203 systemd[1]: Reached target paths.target. Apr 12 18:48:09.398163 systemd[1]: Reached target timers.target. Apr 12 18:48:09.408071 systemd[1]: Listening on dbus.socket. Apr 12 18:48:09.410957 systemd[1]: Starting docker.socket... Apr 12 18:48:09.422913 systemd[1]: Listening on sshd.socket. Apr 12 18:48:09.427306 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:48:09.430673 systemd[1]: Listening on docker.socket. Apr 12 18:48:09.439350 systemd[1]: Reached target sockets.target. Apr 12 18:48:09.441993 systemd[1]: Reached target basic.target. Apr 12 18:48:09.445124 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:48:09.445168 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:48:09.455064 systemd[1]: Starting containerd.service... Apr 12 18:48:09.458586 systemd[1]: Starting dbus.service... Apr 12 18:48:09.478402 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:48:09.502741 systemd[1]: Starting extend-filesystems.service... Apr 12 18:48:09.505991 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:48:09.517487 jq[1147]: false Apr 12 18:48:09.521110 systemd[1]: Starting motdgen.service... Apr 12 18:48:09.543570 systemd[1]: Starting prepare-cni-plugins.service... 
Apr 12 18:48:09.552633 systemd[1]: Starting prepare-critools.service... Apr 12 18:48:09.555283 systemd[1]: Starting prepare-helm.service... Apr 12 18:48:09.574327 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:48:09.577685 extend-filesystems[1148]: Found sr0 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda1 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda2 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda3 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found usr Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda4 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda6 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda7 Apr 12 18:48:09.578889 extend-filesystems[1148]: Found vda9 Apr 12 18:48:09.578889 extend-filesystems[1148]: Checking size of /dev/vda9 Apr 12 18:48:09.624564 extend-filesystems[1148]: Resized partition /dev/vda9 Apr 12 18:48:09.612153 dbus-daemon[1146]: [system] SELinux support is enabled Apr 12 18:48:09.604823 systemd[1]: Starting sshd-keygen.service... Apr 12 18:48:09.643800 extend-filesystems[1170]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:48:09.705441 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:48:09.769054 systemd[1]: Starting systemd-logind.service... Apr 12 18:48:09.778435 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:48:09.778539 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:48:09.779249 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 18:48:09.790065 jq[1174]: true Apr 12 18:48:09.780450 systemd[1]: Starting update-engine.service... 
Apr 12 18:48:09.783608 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:48:09.796032 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:48:09.796976 systemd[1]: Started dbus.service. Apr 12 18:48:09.807983 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:48:09.808225 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:48:09.808667 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:48:09.808910 systemd[1]: Finished motdgen.service. Apr 12 18:48:09.826479 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:48:09.875668 jq[1179]: true Apr 12 18:48:09.826702 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:48:09.842287 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:48:09.842327 systemd[1]: Reached target system-config.target. Apr 12 18:48:09.843676 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:48:09.843700 systemd[1]: Reached target user-config.target. Apr 12 18:48:09.884088 tar[1176]: ./ Apr 12 18:48:09.884088 tar[1176]: ./loopback Apr 12 18:48:09.886188 extend-filesystems[1170]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 12 18:48:09.886188 extend-filesystems[1170]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:48:09.886188 extend-filesystems[1170]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 12 18:48:09.945523 extend-filesystems[1148]: Resized filesystem in /dev/vda9 Apr 12 18:48:09.904120 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:48:09.946775 tar[1177]: crictl Apr 12 18:48:09.905680 systemd[1]: Finished extend-filesystems.service. 
Apr 12 18:48:09.947273 tar[1178]: linux-amd64/helm Apr 12 18:48:10.002544 tar[1176]: ./bandwidth Apr 12 18:48:10.016691 systemd-logind[1172]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:48:10.016728 systemd-logind[1172]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:48:10.021207 systemd-logind[1172]: New seat seat0. Apr 12 18:48:10.022634 update_engine[1173]: I0412 18:48:10.022305 1173 main.cc:92] Flatcar Update Engine starting Apr 12 18:48:10.031207 update_engine[1173]: I0412 18:48:10.031142 1173 update_check_scheduler.cc:74] Next update check in 11m44s Apr 12 18:48:10.031239 systemd[1]: Started systemd-logind.service. Apr 12 18:48:10.032961 systemd[1]: Started update-engine.service. Apr 12 18:48:10.041715 systemd[1]: Started locksmithd.service. Apr 12 18:48:10.054517 env[1180]: time="2024-04-12T18:48:10.054171060Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:48:10.062791 bash[1205]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:48:10.066325 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:48:10.068825 tar[1176]: ./ptp Apr 12 18:48:10.100486 env[1180]: time="2024-04-12T18:48:10.100393330Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:48:10.100725 env[1180]: time="2024-04-12T18:48:10.100641696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:48:10.105794 env[1180]: time="2024-04-12T18:48:10.105707909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:48:10.105794 env[1180]: time="2024-04-12T18:48:10.105775305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106148 env[1180]: time="2024-04-12T18:48:10.106112287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106148 env[1180]: time="2024-04-12T18:48:10.106141362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106272 env[1180]: time="2024-04-12T18:48:10.106158484Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:48:10.106272 env[1180]: time="2024-04-12T18:48:10.106171839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106272 env[1180]: time="2024-04-12T18:48:10.106261677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106573 env[1180]: time="2024-04-12T18:48:10.106540771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106724 env[1180]: time="2024-04-12T18:48:10.106692425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:48:10.106724 env[1180]: time="2024-04-12T18:48:10.106720127Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 18:48:10.106801 env[1180]: time="2024-04-12T18:48:10.106777986Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:48:10.106801 env[1180]: time="2024-04-12T18:48:10.106793986Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:48:10.158440 tar[1176]: ./vlan Apr 12 18:48:10.176734 env[1180]: time="2024-04-12T18:48:10.176648984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:48:10.176734 env[1180]: time="2024-04-12T18:48:10.176718294Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:48:10.176734 env[1180]: time="2024-04-12T18:48:10.176734454Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176774349Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176793064Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176812511Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176827018Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176842737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176879186Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176895526Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176921405Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.177005 env[1180]: time="2024-04-12T18:48:10.176937836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177123243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177203414Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177481275Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177509077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177526470Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177575492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177590670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177607301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177621228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177636596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177651554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177665972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177679998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177697731Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:48:10.188422 env[1180]: time="2024-04-12T18:48:10.177819790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.182747 systemd[1]: Started containerd.service. Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177840008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177881115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177894901Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177912013Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177924526Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177946448Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:48:10.189633 env[1180]: time="2024-04-12T18:48:10.177984840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.178202678Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.178263342Z" level=info msg="Connect containerd service" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.178302275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.178941714Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.182501502Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.182564029Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.182718228Z" level=info msg="Start subscribing containerd event" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.182829747Z" level=info msg="Start recovering state" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.182961404Z" level=info msg="Start event monitor" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.182986862Z" level=info msg="Start snapshots syncer" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.183002571Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:48:10.190173 env[1180]: time="2024-04-12T18:48:10.183014574Z" level=info msg="Start streaming server" Apr 12 18:48:10.221036 env[1180]: time="2024-04-12T18:48:10.217479300Z" level=info msg="containerd successfully booted in 0.164357s" Apr 12 18:48:10.231252 tar[1176]: ./host-device Apr 12 18:48:10.286528 tar[1176]: ./tuning Apr 12 18:48:10.342590 tar[1176]: ./vrf Apr 12 18:48:10.435747 tar[1176]: ./sbr Apr 12 18:48:10.512479 tar[1176]: ./tap Apr 12 18:48:10.621035 tar[1176]: ./dhcp Apr 12 18:48:11.313327 tar[1176]: ./static Apr 12 18:48:11.450566 tar[1176]: ./firewall Apr 12 18:48:11.497200 sshd_keygen[1169]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:48:11.515214 tar[1176]: ./macvlan Apr 12 18:48:11.585140 systemd[1]: Finished prepare-critools.service. Apr 12 18:48:11.624392 tar[1176]: ./dummy Apr 12 18:48:11.632118 locksmithd[1206]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:48:11.639540 systemd[1]: Finished sshd-keygen.service. Apr 12 18:48:11.642895 systemd[1]: Starting issuegen.service... Apr 12 18:48:11.651682 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:48:11.651936 systemd[1]: Finished issuegen.service. Apr 12 18:48:11.655423 systemd[1]: Starting systemd-user-sessions.service... 
Apr 12 18:48:11.672956 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:48:11.673167 tar[1176]: ./bridge Apr 12 18:48:11.676535 systemd[1]: Started getty@tty1.service. Apr 12 18:48:11.683067 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 18:48:11.684578 systemd[1]: Reached target getty.target. Apr 12 18:48:11.722977 tar[1176]: ./ipvlan Apr 12 18:48:11.732875 tar[1178]: linux-amd64/LICENSE Apr 12 18:48:11.732875 tar[1178]: linux-amd64/README.md Apr 12 18:48:11.744657 systemd[1]: Finished prepare-helm.service. Apr 12 18:48:11.781896 tar[1176]: ./portmap Apr 12 18:48:12.075659 tar[1176]: ./host-local Apr 12 18:48:12.268634 systemd[1]: Finished prepare-cni-plugins.service. Apr 12 18:48:12.270386 systemd[1]: Reached target multi-user.target. Apr 12 18:48:12.277735 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:48:12.291103 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:48:12.291337 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:48:12.294682 systemd[1]: Startup finished in 1.498s (kernel) + 10.184s (initrd) + 12.679s (userspace) = 24.362s. Apr 12 18:48:17.057826 systemd[1]: Created slice system-sshd.slice. Apr 12 18:48:17.066829 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:55144.service. Apr 12 18:48:17.150653 sshd[1235]: Accepted publickey for core from 10.0.0.1 port 55144 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:48:17.157106 sshd[1235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:48:17.194699 systemd[1]: Created slice user-500.slice. Apr 12 18:48:17.199750 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:48:17.201518 systemd-logind[1172]: New session 1 of user core. Apr 12 18:48:17.239432 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:48:17.257246 systemd[1]: Starting user@500.service... 
Apr 12 18:48:17.265261 (systemd)[1238]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:48:17.407972 systemd[1238]: Queued start job for default target default.target. Apr 12 18:48:17.408695 systemd[1238]: Reached target paths.target. Apr 12 18:48:17.408720 systemd[1238]: Reached target sockets.target. Apr 12 18:48:17.408737 systemd[1238]: Reached target timers.target. Apr 12 18:48:17.408752 systemd[1238]: Reached target basic.target. Apr 12 18:48:17.408818 systemd[1238]: Reached target default.target. Apr 12 18:48:17.408849 systemd[1238]: Startup finished in 135ms. Apr 12 18:48:17.409089 systemd[1]: Started user@500.service. Apr 12 18:48:17.410532 systemd[1]: Started session-1.scope. Apr 12 18:48:17.488117 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:55158.service. Apr 12 18:48:17.592601 sshd[1247]: Accepted publickey for core from 10.0.0.1 port 55158 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:48:17.595811 sshd[1247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:48:17.614193 systemd[1]: Started session-2.scope. Apr 12 18:48:17.618610 systemd-logind[1172]: New session 2 of user core. Apr 12 18:48:17.701138 sshd[1247]: pam_unix(sshd:session): session closed for user core Apr 12 18:48:17.706035 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:55170.service. Apr 12 18:48:17.706740 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:55158.service: Deactivated successfully. Apr 12 18:48:17.707619 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:48:17.708798 systemd-logind[1172]: Session 2 logged out. Waiting for processes to exit. Apr 12 18:48:17.710082 systemd-logind[1172]: Removed session 2. 
Apr 12 18:48:17.749087 sshd[1252]: Accepted publickey for core from 10.0.0.1 port 55170 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:48:17.751018 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:48:17.756059 systemd-logind[1172]: New session 3 of user core. Apr 12 18:48:17.757261 systemd[1]: Started session-3.scope. Apr 12 18:48:17.822901 sshd[1252]: pam_unix(sshd:session): session closed for user core Apr 12 18:48:17.836678 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:55178.service. Apr 12 18:48:17.837445 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:55170.service: Deactivated successfully. Apr 12 18:48:17.838677 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:48:17.844274 systemd-logind[1172]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:48:17.845731 systemd-logind[1172]: Removed session 3. Apr 12 18:48:17.890694 sshd[1258]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:48:17.892706 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:48:17.908386 systemd-logind[1172]: New session 4 of user core. Apr 12 18:48:17.911392 systemd[1]: Started session-4.scope. Apr 12 18:48:17.976982 sshd[1258]: pam_unix(sshd:session): session closed for user core Apr 12 18:48:17.986660 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:55184.service. Apr 12 18:48:17.987570 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:55178.service: Deactivated successfully. Apr 12 18:48:17.993227 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:48:17.994302 systemd-logind[1172]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:48:17.998487 systemd-logind[1172]: Removed session 4. 
Apr 12 18:48:18.061990 sshd[1264]: Accepted publickey for core from 10.0.0.1 port 55184 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:48:18.065576 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:48:18.081354 systemd-logind[1172]: New session 5 of user core. Apr 12 18:48:18.082953 systemd[1]: Started session-5.scope. Apr 12 18:48:18.172564 sudo[1268]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:48:18.172890 sudo[1268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:48:18.853273 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:48:18.862397 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:48:18.862824 systemd[1]: Reached target network-online.target. Apr 12 18:48:18.864730 systemd[1]: Starting docker.service... Apr 12 18:48:18.934144 env[1286]: time="2024-04-12T18:48:18.934012395Z" level=info msg="Starting up" Apr 12 18:48:18.936837 env[1286]: time="2024-04-12T18:48:18.936751082Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:48:18.936837 env[1286]: time="2024-04-12T18:48:18.936796407Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:48:18.936837 env[1286]: time="2024-04-12T18:48:18.936826934Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:48:18.937076 env[1286]: time="2024-04-12T18:48:18.936844246Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:48:18.942252 env[1286]: time="2024-04-12T18:48:18.939000631Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:48:18.942252 env[1286]: time="2024-04-12T18:48:18.939034404Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:48:18.942252 env[1286]: 
time="2024-04-12T18:48:18.939060133Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:48:18.942252 env[1286]: time="2024-04-12T18:48:18.939074630Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:48:19.019079 env[1286]: time="2024-04-12T18:48:19.018999777Z" level=info msg="Loading containers: start." Apr 12 18:48:19.307961 kernel: Initializing XFRM netlink socket Apr 12 18:48:19.383640 env[1286]: time="2024-04-12T18:48:19.383521074Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:48:19.566054 systemd-networkd[1083]: docker0: Link UP Apr 12 18:48:19.603609 env[1286]: time="2024-04-12T18:48:19.603531551Z" level=info msg="Loading containers: done." Apr 12 18:48:19.627522 env[1286]: time="2024-04-12T18:48:19.626763588Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:48:19.627522 env[1286]: time="2024-04-12T18:48:19.627059633Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:48:19.627522 env[1286]: time="2024-04-12T18:48:19.627199125Z" level=info msg="Daemon has completed initialization" Apr 12 18:48:19.688450 systemd[1]: Started docker.service. Apr 12 18:48:19.694844 env[1286]: time="2024-04-12T18:48:19.694760701Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:48:19.723218 systemd[1]: Reloading. 
Apr 12 18:48:19.870910 /usr/lib/systemd/system-generators/torcx-generator[1429]: time="2024-04-12T18:48:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:48:19.870969 /usr/lib/systemd/system-generators/torcx-generator[1429]: time="2024-04-12T18:48:19Z" level=info msg="torcx already run" Apr 12 18:48:20.003634 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:48:20.003667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:48:20.035308 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:48:20.230359 systemd[1]: Started kubelet.service. Apr 12 18:48:20.379622 kubelet[1470]: E0412 18:48:20.378704 1470 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:48:20.382409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:48:20.382585 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:48:20.703769 env[1180]: time="2024-04-12T18:48:20.703592982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\"" Apr 12 18:48:21.687665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908888291.mount: Deactivated successfully. Apr 12 18:48:24.839020 env[1180]: time="2024-04-12T18:48:24.837720844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:24.841783 env[1180]: time="2024-04-12T18:48:24.841726748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:24.855609 env[1180]: time="2024-04-12T18:48:24.855513368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:24.859991 env[1180]: time="2024-04-12T18:48:24.859829213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:24.867182 env[1180]: time="2024-04-12T18:48:24.867079613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\" returns image reference \"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533\"" Apr 12 18:48:24.914931 env[1180]: time="2024-04-12T18:48:24.914878840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\"" Apr 12 18:48:29.822771 env[1180]: time="2024-04-12T18:48:29.822568814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Apr 12 18:48:29.826565 env[1180]: time="2024-04-12T18:48:29.826482144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:29.835360 env[1180]: time="2024-04-12T18:48:29.835254789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:29.840791 env[1180]: time="2024-04-12T18:48:29.840701596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:29.841778 env[1180]: time="2024-04-12T18:48:29.841721579Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\" returns image reference \"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3\"" Apr 12 18:48:29.968635 env[1180]: time="2024-04-12T18:48:29.968556034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\"" Apr 12 18:48:30.633741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:48:30.634073 systemd[1]: Stopped kubelet.service. Apr 12 18:48:30.638256 systemd[1]: Started kubelet.service. 
Apr 12 18:48:31.018784 kubelet[1509]: E0412 18:48:31.018410 1509 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:48:31.030265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:48:31.030497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:48:33.180661 env[1180]: time="2024-04-12T18:48:33.180518047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:33.186097 env[1180]: time="2024-04-12T18:48:33.185997455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:33.197981 env[1180]: time="2024-04-12T18:48:33.197408479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:33.208457 env[1180]: time="2024-04-12T18:48:33.208272947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:33.212243 env[1180]: time="2024-04-12T18:48:33.212160239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\" returns image reference \"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b\"" Apr 12 18:48:33.284590 env[1180]: time="2024-04-12T18:48:33.284528331Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.29.3\"" Apr 12 18:48:35.668207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294608479.mount: Deactivated successfully. Apr 12 18:48:37.547976 env[1180]: time="2024-04-12T18:48:37.543599230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:37.547976 env[1180]: time="2024-04-12T18:48:37.547268533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:37.552806 env[1180]: time="2024-04-12T18:48:37.551968729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:37.558812 env[1180]: time="2024-04-12T18:48:37.556122210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:37.558812 env[1180]: time="2024-04-12T18:48:37.556610606Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\" returns image reference \"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392\"" Apr 12 18:48:37.599171 env[1180]: time="2024-04-12T18:48:37.599118699Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 12 18:48:38.405521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398057783.mount: Deactivated successfully. Apr 12 18:48:41.281648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 12 18:48:41.281938 systemd[1]: Stopped kubelet.service. Apr 12 18:48:41.283868 systemd[1]: Started kubelet.service. 
Apr 12 18:48:41.407987 kubelet[1530]: E0412 18:48:41.407907 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:48:41.414613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:48:41.414814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:48:41.858892 env[1180]: time="2024-04-12T18:48:41.857597528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:41.876576 env[1180]: time="2024-04-12T18:48:41.875656562Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:41.878567 env[1180]: time="2024-04-12T18:48:41.878469618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:41.885216 env[1180]: time="2024-04-12T18:48:41.885117317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:41.892105 env[1180]: time="2024-04-12T18:48:41.887780523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 12 18:48:41.908491 env[1180]: time="2024-04-12T18:48:41.908435996Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.9\"" Apr 12 18:48:42.470771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454183158.mount: Deactivated successfully. Apr 12 18:48:42.496966 env[1180]: time="2024-04-12T18:48:42.495922232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:42.502316 env[1180]: time="2024-04-12T18:48:42.502210153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:42.505047 env[1180]: time="2024-04-12T18:48:42.504913123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:42.508378 env[1180]: time="2024-04-12T18:48:42.508193303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:42.508610 env[1180]: time="2024-04-12T18:48:42.508482428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 18:48:42.535259 env[1180]: time="2024-04-12T18:48:42.535205322Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Apr 12 18:48:43.380195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167222941.mount: Deactivated successfully. 
Apr 12 18:48:49.299913 env[1180]: time="2024-04-12T18:48:49.297987258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:49.306268 env[1180]: time="2024-04-12T18:48:49.301951542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:49.320497 env[1180]: time="2024-04-12T18:48:49.318078725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:49.321970 env[1180]: time="2024-04-12T18:48:49.321929517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Apr 12 18:48:49.325834 env[1180]: time="2024-04-12T18:48:49.323361167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:51.571263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 12 18:48:51.571573 systemd[1]: Stopped kubelet.service. Apr 12 18:48:51.573998 systemd[1]: Started kubelet.service. 
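The "Scheduled restart job, restart counter is at N" entries above trace systemd's Restart= handling of the crashing unit. A self-contained sketch of tallying that loop from a saved journal dump (the sample lines are copied from this log; the file path is hypothetical):

```shell
# count_restarts counts how many times systemd rescheduled a unit,
# by matching the "Scheduled restart job" message in a journal dump.
count_restarts() {
  grep -c 'Scheduled restart job' "$1"
}

# Sample taken verbatim from the log above.
cat > /tmp/kubelet-journal-sample.txt <<'EOF'
Apr 12 18:48:30.633741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:48:41.281648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 12 18:48:51.571263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
EOF

count_restarts /tmp/kubelet-journal-sample.txt
```

For this sample the count is 3, matching the highest restart counter logged before the unit is stopped for the systemd reload.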
Apr 12 18:48:51.653989 kubelet[1621]: E0412 18:48:51.653892 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:48:51.665248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:48:51.665463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:48:53.099977 systemd[1]: Stopped kubelet.service. Apr 12 18:48:53.149285 systemd[1]: Reloading. Apr 12 18:48:53.240009 /usr/lib/systemd/system-generators/torcx-generator[1653]: time="2024-04-12T18:48:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:48:53.240432 /usr/lib/systemd/system-generators/torcx-generator[1653]: time="2024-04-12T18:48:53Z" level=info msg="torcx already run" Apr 12 18:48:53.363477 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:48:53.363503 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:48:53.396113 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:48:53.611025 systemd[1]: Started kubelet.service. 
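The CPUShares=/MemoryLimit= warnings that systemd logs for locksmithd.service (both at boot and again during the reload above) map directly onto the cgroup-v2 directives it suggests. A hypothetical drop-in that would silence them — the values shown are illustrative placeholders, not taken from the shipped unit:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
# Clear the deprecated directives flagged in the log, then set their
# modern equivalents (CPUWeight= in place of CPUShares=, MemoryMax=
# in place of MemoryLimit=).
[Service]
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=infinity
```

An empty assignment resets a list-style or previously set directive in a drop-in, so the override takes effect without editing the read-only unit under /usr/lib/systemd/system/.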
Apr 12 18:48:53.780506 kubelet[1695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:48:53.780506 kubelet[1695]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:48:53.780506 kubelet[1695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:48:53.781138 kubelet[1695]: I0412 18:48:53.780523 1695 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:48:54.091198 kubelet[1695]: I0412 18:48:54.090905 1695 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:48:54.091198 kubelet[1695]: I0412 18:48:54.090959 1695 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:48:54.091470 kubelet[1695]: I0412 18:48:54.091302 1695 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:48:54.099112 kubelet[1695]: E0412 18:48:54.099062 1695 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.102812 kubelet[1695]: I0412 18:48:54.102705 1695 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:48:54.151614 kubelet[1695]: I0412 18:48:54.149679 1695 
server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:48:54.151614 kubelet[1695]: I0412 18:48:54.150900 1695 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:48:54.154737 kubelet[1695]: I0412 18:48:54.153347 1695 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.157912 1695 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.162663 1695 
container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.162935 1695 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.163111 1695 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.163133 1695 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.163171 1695 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:48:54.164489 kubelet[1695]: I0412 18:48:54.163193 1695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:48:54.164913 kubelet[1695]: I0412 18:48:54.164900 1695 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:48:54.165260 kubelet[1695]: I0412 18:48:54.165225 1695 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:48:54.165540 kubelet[1695]: W0412 18:48:54.165524 1695 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 12 18:48:54.166805 kubelet[1695]: W0412 18:48:54.166748 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.166967 kubelet[1695]: I0412 18:48:54.166939 1695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:48:54.167149 kubelet[1695]: W0412 18:48:54.167109 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.167259 kubelet[1695]: E0412 18:48:54.167240 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.167355 kubelet[1695]: E0412 18:48:54.166941 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.167447 kubelet[1695]: I0412 18:48:54.167277 1695 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:48:54.167566 kubelet[1695]: I0412 18:48:54.166892 1695 server.go:1256] "Started kubelet" Apr 12 18:48:54.168027 kubelet[1695]: E0412 18:48:54.167986 1695 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.17c59ce37dd387e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:48:54.166865895 +0000 UTC m=+0.584020980,LastTimestamp:2024-04-12 18:48:54.166865895 +0000 UTC m=+0.584020980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:48:54.169227 kubelet[1695]: I0412 18:48:54.169161 1695 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:48:54.178092 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:48:54.178245 kubelet[1695]: I0412 18:48:54.170177 1695 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:48:54.178245 kubelet[1695]: I0412 18:48:54.173755 1695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:48:54.190218 kubelet[1695]: I0412 18:48:54.190162 1695 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:48:54.194414 kubelet[1695]: I0412 18:48:54.194384 1695 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:48:54.194843 kubelet[1695]: I0412 18:48:54.194812 1695 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:48:54.196098 kubelet[1695]: I0412 18:48:54.196054 1695 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:48:54.196468 kubelet[1695]: W0412 18:48:54.196391 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.197149 kubelet[1695]: E0412 18:48:54.194953 1695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="200ms" Apr 12 18:48:54.197517 kubelet[1695]: I0412 18:48:54.194982 1695 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:48:54.197618 kubelet[1695]: E0412 18:48:54.197187 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.202081 kubelet[1695]: E0412 18:48:54.201717 1695 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:48:54.202261 kubelet[1695]: I0412 18:48:54.202151 1695 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:48:54.234779 kubelet[1695]: I0412 18:48:54.234691 1695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:48:54.244952 kubelet[1695]: I0412 18:48:54.243120 1695 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 18:48:54.244952 kubelet[1695]: I0412 18:48:54.243450 1695 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:48:54.244952 kubelet[1695]: I0412 18:48:54.243489 1695 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:48:54.244952 kubelet[1695]: E0412 18:48:54.243646 1695 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:48:54.245242 kubelet[1695]: W0412 18:48:54.244918 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.245242 kubelet[1695]: E0412 18:48:54.245087 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:54.255646 kubelet[1695]: I0412 18:48:54.255603 1695 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:48:54.255936 kubelet[1695]: I0412 18:48:54.255916 1695 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:48:54.256050 kubelet[1695]: I0412 18:48:54.256032 1695 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:48:54.268148 kubelet[1695]: I0412 18:48:54.268073 1695 policy_none.go:49] "None policy: Start" Apr 12 18:48:54.272304 kubelet[1695]: I0412 18:48:54.272239 1695 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:48:54.272304 kubelet[1695]: I0412 18:48:54.272300 1695 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:48:54.295774 kubelet[1695]: I0412 18:48:54.295735 1695 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Apr 12 18:48:54.296524 kubelet[1695]: E0412 18:48:54.296503 1695 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Apr 12 18:48:54.298778 systemd[1]: Created slice kubepods.slice. Apr 12 18:48:54.306549 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:48:54.314837 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 18:48:54.327470 kubelet[1695]: I0412 18:48:54.327409 1695 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:48:54.328141 kubelet[1695]: I0412 18:48:54.328122 1695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:48:54.336124 kubelet[1695]: E0412 18:48:54.333037 1695 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 12 18:48:54.344636 kubelet[1695]: I0412 18:48:54.344443 1695 topology_manager.go:215] "Topology Admit Handler" podUID="bc6258e9cc399f25cc9b7122b4e6b7d3" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 12 18:48:54.346095 kubelet[1695]: I0412 18:48:54.346052 1695 topology_manager.go:215] "Topology Admit Handler" podUID="f4e8212a5db7e0401319814fa9ad65c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 12 18:48:54.347152 kubelet[1695]: I0412 18:48:54.347121 1695 topology_manager.go:215] "Topology Admit Handler" podUID="5d5c5aff921df216fcba2c51c322ceb1" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 12 18:48:54.359972 systemd[1]: Created slice kubepods-burstable-podbc6258e9cc399f25cc9b7122b4e6b7d3.slice. Apr 12 18:48:54.390573 systemd[1]: Created slice kubepods-burstable-podf4e8212a5db7e0401319814fa9ad65c9.slice. 
Apr 12 18:48:54.399663 kubelet[1695]: E0412 18:48:54.399592 1695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="400ms" Apr 12 18:48:54.405325 systemd[1]: Created slice kubepods-burstable-pod5d5c5aff921df216fcba2c51c322ceb1.slice. Apr 12 18:48:54.498880 kubelet[1695]: I0412 18:48:54.498776 1695 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:48:54.499346 kubelet[1695]: I0412 18:48:54.499312 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc6258e9cc399f25cc9b7122b4e6b7d3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6258e9cc399f25cc9b7122b4e6b7d3\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:48:54.499416 kubelet[1695]: I0412 18:48:54.499367 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc6258e9cc399f25cc9b7122b4e6b7d3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc6258e9cc399f25cc9b7122b4e6b7d3\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:48:54.499416 kubelet[1695]: I0412 18:48:54.499402 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:48:54.499497 kubelet[1695]: I0412 18:48:54.499433 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:48:54.499497 kubelet[1695]: I0412 18:48:54.499465 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:48:54.499497 kubelet[1695]: I0412 18:48:54.499495 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc6258e9cc399f25cc9b7122b4e6b7d3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6258e9cc399f25cc9b7122b4e6b7d3\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:48:54.499601 kubelet[1695]: I0412 18:48:54.499528 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:48:54.499601 kubelet[1695]: I0412 18:48:54.499560 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5c5aff921df216fcba2c51c322ceb1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5d5c5aff921df216fcba2c51c322ceb1\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:48:54.499601 kubelet[1695]: I0412 18:48:54.499589 1695 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:48:54.499778 kubelet[1695]: E0412 18:48:54.499742 1695 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Apr 12 18:48:54.686334 kubelet[1695]: E0412 18:48:54.686129 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:54.687109 env[1180]: time="2024-04-12T18:48:54.687053760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc6258e9cc399f25cc9b7122b4e6b7d3,Namespace:kube-system,Attempt:0,}" Apr 12 18:48:54.702716 kubelet[1695]: E0412 18:48:54.702638 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:54.703500 env[1180]: time="2024-04-12T18:48:54.703425807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f4e8212a5db7e0401319814fa9ad65c9,Namespace:kube-system,Attempt:0,}" Apr 12 18:48:54.709180 kubelet[1695]: E0412 18:48:54.709107 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:54.710148 env[1180]: time="2024-04-12T18:48:54.709747910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5d5c5aff921df216fcba2c51c322ceb1,Namespace:kube-system,Attempt:0,}" Apr 12 18:48:54.800482 kubelet[1695]: E0412 18:48:54.800394 1695 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="800ms" Apr 12 18:48:54.903799 kubelet[1695]: I0412 18:48:54.902238 1695 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:48:54.909940 kubelet[1695]: E0412 18:48:54.909843 1695 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Apr 12 18:48:54.932898 update_engine[1173]: I0412 18:48:54.932708 1173 update_attempter.cc:509] Updating boot flags... Apr 12 18:48:55.146116 kubelet[1695]: W0412 18:48:55.146043 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.146116 kubelet[1695]: E0412 18:48:55.146098 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.228788 kubelet[1695]: W0412 18:48:55.225257 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.228788 kubelet[1695]: E0412 18:48:55.225364 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: 
connect: connection refused Apr 12 18:48:55.323692 kubelet[1695]: W0412 18:48:55.323576 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.323692 kubelet[1695]: E0412 18:48:55.323673 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.492946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395228843.mount: Deactivated successfully. Apr 12 18:48:55.530653 env[1180]: time="2024-04-12T18:48:55.530528380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.541964 env[1180]: time="2024-04-12T18:48:55.541842012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.549531 env[1180]: time="2024-04-12T18:48:55.549378329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.552501 env[1180]: time="2024-04-12T18:48:55.551539013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.557765 env[1180]: time="2024-04-12T18:48:55.554016790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.564719 env[1180]: time="2024-04-12T18:48:55.564611592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.568360 env[1180]: time="2024-04-12T18:48:55.568239778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.574486 env[1180]: time="2024-04-12T18:48:55.574342704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.575496 env[1180]: time="2024-04-12T18:48:55.575288924Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.576993 env[1180]: time="2024-04-12T18:48:55.576054197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.580729 env[1180]: time="2024-04-12T18:48:55.579896309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.595624 env[1180]: time="2024-04-12T18:48:55.595466732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:48:55.602396 kubelet[1695]: E0412 18:48:55.602337 1695 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="1.6s" Apr 12 18:48:55.626977 kubelet[1695]: W0412 18:48:55.626723 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.626977 kubelet[1695]: E0412 18:48:55.626890 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:55.712599 kubelet[1695]: I0412 18:48:55.712210 1695 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:48:55.713613 kubelet[1695]: E0412 18:48:55.713572 1695 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Apr 12 18:48:55.739396 env[1180]: time="2024-04-12T18:48:55.737087702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:48:55.739396 env[1180]: time="2024-04-12T18:48:55.737262158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:48:55.739396 env[1180]: time="2024-04-12T18:48:55.737281423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:48:55.745405 env[1180]: time="2024-04-12T18:48:55.740531064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66d37526e949d36d71b87ebf9bb50323eb8c93174bfeac61ca9d36d2d907fb4c pid=1750 runtime=io.containerd.runc.v2 Apr 12 18:48:55.946635 systemd[1]: Started cri-containerd-66d37526e949d36d71b87ebf9bb50323eb8c93174bfeac61ca9d36d2d907fb4c.scope. Apr 12 18:48:55.992728 env[1180]: time="2024-04-12T18:48:55.992218192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:48:55.992728 env[1180]: time="2024-04-12T18:48:55.992373253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:48:55.992728 env[1180]: time="2024-04-12T18:48:55.992401915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:48:55.998512 env[1180]: time="2024-04-12T18:48:55.993963977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:48:55.998512 env[1180]: time="2024-04-12T18:48:55.994020609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:48:55.998512 env[1180]: time="2024-04-12T18:48:55.994036258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:48:55.998512 env[1180]: time="2024-04-12T18:48:55.994260222Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f4589bfc36897e98acd86e923e26f5433efe775f5bc7e1e4f6171c940142ae6 pid=1777 runtime=io.containerd.runc.v2 Apr 12 18:48:56.018917 env[1180]: time="2024-04-12T18:48:55.998228742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d024fd545377e42d08541f367de4471c4834a78c1e0059529c17cb7810287e pid=1785 runtime=io.containerd.runc.v2 Apr 12 18:48:56.039326 systemd[1]: Started cri-containerd-2f4589bfc36897e98acd86e923e26f5433efe775f5bc7e1e4f6171c940142ae6.scope. Apr 12 18:48:56.076054 systemd[1]: Started cri-containerd-b7d024fd545377e42d08541f367de4471c4834a78c1e0059529c17cb7810287e.scope. Apr 12 18:48:56.177736 kubelet[1695]: E0412 18:48:56.177650 1695 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:56.248802 env[1180]: time="2024-04-12T18:48:56.246885116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc6258e9cc399f25cc9b7122b4e6b7d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"66d37526e949d36d71b87ebf9bb50323eb8c93174bfeac61ca9d36d2d907fb4c\"" Apr 12 18:48:56.248993 kubelet[1695]: E0412 18:48:56.248030 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:56.258237 env[1180]: time="2024-04-12T18:48:56.258161228Z" level=info msg="CreateContainer within sandbox 
\"66d37526e949d36d71b87ebf9bb50323eb8c93174bfeac61ca9d36d2d907fb4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:48:56.276730 env[1180]: time="2024-04-12T18:48:56.274454937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5d5c5aff921df216fcba2c51c322ceb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f4589bfc36897e98acd86e923e26f5433efe775f5bc7e1e4f6171c940142ae6\"" Apr 12 18:48:56.277629 kubelet[1695]: E0412 18:48:56.276194 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:56.281175 env[1180]: time="2024-04-12T18:48:56.281124166Z" level=info msg="CreateContainer within sandbox \"2f4589bfc36897e98acd86e923e26f5433efe775f5bc7e1e4f6171c940142ae6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:48:56.489569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031874648.mount: Deactivated successfully. Apr 12 18:48:56.490642 env[1180]: time="2024-04-12T18:48:56.489567510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f4e8212a5db7e0401319814fa9ad65c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d024fd545377e42d08541f367de4471c4834a78c1e0059529c17cb7810287e\"" Apr 12 18:48:56.489703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878742465.mount: Deactivated successfully. 
Apr 12 18:48:56.492158 kubelet[1695]: E0412 18:48:56.492114 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:56.494822 env[1180]: time="2024-04-12T18:48:56.494771956Z" level=info msg="CreateContainer within sandbox \"b7d024fd545377e42d08541f367de4471c4834a78c1e0059529c17cb7810287e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:48:56.745622 env[1180]: time="2024-04-12T18:48:56.745499761Z" level=info msg="CreateContainer within sandbox \"2f4589bfc36897e98acd86e923e26f5433efe775f5bc7e1e4f6171c940142ae6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38d54f5da901dd21dc347da7009b5a8f3919c3ee341c9450f0118c4c6f6aef26\"" Apr 12 18:48:56.763815 env[1180]: time="2024-04-12T18:48:56.761791918Z" level=info msg="StartContainer for \"38d54f5da901dd21dc347da7009b5a8f3919c3ee341c9450f0118c4c6f6aef26\"" Apr 12 18:48:56.821675 systemd[1]: Started cri-containerd-38d54f5da901dd21dc347da7009b5a8f3919c3ee341c9450f0118c4c6f6aef26.scope. 
Apr 12 18:48:57.027031 env[1180]: time="2024-04-12T18:48:57.026891607Z" level=info msg="StartContainer for \"38d54f5da901dd21dc347da7009b5a8f3919c3ee341c9450f0118c4c6f6aef26\" returns successfully" Apr 12 18:48:57.203438 kubelet[1695]: E0412 18:48:57.203375 1695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="3.2s" Apr 12 18:48:57.243580 kubelet[1695]: W0412 18:48:57.243108 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:57.243580 kubelet[1695]: E0412 18:48:57.243170 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:57.284675 kubelet[1695]: E0412 18:48:57.283612 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:57.288414 kubelet[1695]: W0412 18:48:57.288350 1695 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:57.288414 kubelet[1695]: E0412 18:48:57.288412 1695 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused Apr 12 18:48:57.316125 kubelet[1695]: I0412 18:48:57.315656 1695 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:48:57.316125 kubelet[1695]: E0412 18:48:57.316079 1695 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" Apr 12 18:48:57.347285 env[1180]: time="2024-04-12T18:48:57.347047157Z" level=info msg="CreateContainer within sandbox \"66d37526e949d36d71b87ebf9bb50323eb8c93174bfeac61ca9d36d2d907fb4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a645451651424b6e364340b9f4af36ad5bf184e9758f2fef66c11063b1b25fb\"" Apr 12 18:48:57.354360 env[1180]: time="2024-04-12T18:48:57.351463934Z" level=info msg="StartContainer for \"1a645451651424b6e364340b9f4af36ad5bf184e9758f2fef66c11063b1b25fb\"" Apr 12 18:48:57.405950 systemd[1]: Started cri-containerd-1a645451651424b6e364340b9f4af36ad5bf184e9758f2fef66c11063b1b25fb.scope. 
Apr 12 18:48:57.562948 env[1180]: time="2024-04-12T18:48:57.562755569Z" level=info msg="CreateContainer within sandbox \"b7d024fd545377e42d08541f367de4471c4834a78c1e0059529c17cb7810287e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8a66c0a2f038993b3cb8f58c8b43cad834b64534ede0cf418e0283dc37fe17b\"" Apr 12 18:48:57.564027 env[1180]: time="2024-04-12T18:48:57.563992815Z" level=info msg="StartContainer for \"c8a66c0a2f038993b3cb8f58c8b43cad834b64534ede0cf418e0283dc37fe17b\"" Apr 12 18:48:57.580145 env[1180]: time="2024-04-12T18:48:57.580068695Z" level=info msg="StartContainer for \"1a645451651424b6e364340b9f4af36ad5bf184e9758f2fef66c11063b1b25fb\" returns successfully" Apr 12 18:48:57.616632 systemd[1]: Started cri-containerd-c8a66c0a2f038993b3cb8f58c8b43cad834b64534ede0cf418e0283dc37fe17b.scope. Apr 12 18:48:57.978345 env[1180]: time="2024-04-12T18:48:57.978238422Z" level=info msg="StartContainer for \"c8a66c0a2f038993b3cb8f58c8b43cad834b64534ede0cf418e0283dc37fe17b\" returns successfully" Apr 12 18:48:58.293029 kubelet[1695]: E0412 18:48:58.290501 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:58.293437 kubelet[1695]: E0412 18:48:58.293291 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:59.296194 kubelet[1695]: E0412 18:48:59.295110 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:48:59.296194 kubelet[1695]: E0412 18:48:59.295786 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 
12 18:49:00.297322 kubelet[1695]: E0412 18:49:00.297016 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:00.518498 kubelet[1695]: I0412 18:49:00.518461 1695 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 12 18:49:01.084268 kubelet[1695]: E0412 18:49:01.084208 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:01.427488 kubelet[1695]: E0412 18:49:01.427336 1695 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 12 18:49:01.684403 kubelet[1695]: I0412 18:49:01.674679 1695 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 12 18:49:01.692421 kubelet[1695]: E0412 18:49:01.692383 1695 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17c59ce37dd387e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:48:54.166865895 +0000 UTC m=+0.584020980,LastTimestamp:2024-04-12 18:48:54.166865895 +0000 UTC m=+0.584020980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:49:01.904214 kubelet[1695]: E0412 18:49:01.903741 1695 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17c59ce37fe6be1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:48:54.201679386 +0000 UTC m=+0.618834481,LastTimestamp:2024-04-12 18:48:54.201679386 +0000 UTC m=+0.618834481,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:49:02.279177 kubelet[1695]: I0412 18:49:02.278905 1695 apiserver.go:52] "Watching apiserver" Apr 12 18:49:02.297905 kubelet[1695]: I0412 18:49:02.297842 1695 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:49:02.346730 kubelet[1695]: E0412 18:49:02.346277 1695 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17c59ce3830a839e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-04-12 18:48:54.254355358 +0000 UTC m=+0.671510443,LastTimestamp:2024-04-12 18:48:54.254355358 +0000 UTC m=+0.671510443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 12 18:49:05.965980 kubelet[1695]: E0412 18:49:05.965936 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:06.183990 kubelet[1695]: E0412 18:49:06.183915 1695 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:06.314475 kubelet[1695]: E0412 18:49:06.314317 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:06.315560 kubelet[1695]: E0412 18:49:06.315094 1695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:07.356392 systemd[1]: Reloading. Apr 12 18:49:07.462691 /usr/lib/systemd/system-generators/torcx-generator[2011]: time="2024-04-12T18:49:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:49:07.462725 /usr/lib/systemd/system-generators/torcx-generator[2011]: time="2024-04-12T18:49:07Z" level=info msg="torcx already run" Apr 12 18:49:07.952234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:49:07.954534 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:49:07.988906 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:49:08.151869 systemd[1]: Stopping kubelet.service... Apr 12 18:49:08.174995 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:49:08.175256 systemd[1]: Stopped kubelet.service. 
Apr 12 18:49:08.175327 systemd[1]: kubelet.service: Consumed 1.535s CPU time. Apr 12 18:49:08.183682 systemd[1]: Started kubelet.service. Apr 12 18:49:08.402189 sudo[2063]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:49:08.410346 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:49:08.462484 kubelet[2051]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:49:08.462484 kubelet[2051]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:49:08.462484 kubelet[2051]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:49:08.463037 kubelet[2051]: I0412 18:49:08.462555 2051 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:49:08.478499 kubelet[2051]: I0412 18:49:08.478431 2051 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:49:08.478499 kubelet[2051]: I0412 18:49:08.478487 2051 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:49:08.478744 kubelet[2051]: I0412 18:49:08.478731 2051 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:49:08.482357 kubelet[2051]: I0412 18:49:08.482298 2051 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 12 18:49:08.485568 kubelet[2051]: I0412 18:49:08.485497 2051 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:49:08.514448 kubelet[2051]: I0412 18:49:08.514400 2051 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:49:08.519788 kubelet[2051]: I0412 18:49:08.519736 2051 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:49:08.520517 kubelet[2051]: I0412 18:49:08.520478 2051 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Apr 12 18:49:08.520977 kubelet[2051]: I0412 18:49:08.520955 2051 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:49:08.521083 kubelet[2051]: I0412 18:49:08.521065 2051 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:49:08.521323 kubelet[2051]: I0412 18:49:08.521303 2051 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:49:08.521562 kubelet[2051]: I0412 18:49:08.521543 2051 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:49:08.525422 kubelet[2051]: I0412 18:49:08.525373 2051 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:49:08.525578 kubelet[2051]: I0412 18:49:08.525438 2051 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:49:08.525578 kubelet[2051]: I0412 18:49:08.525547 2051 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:49:08.527441 kubelet[2051]: I0412 18:49:08.527412 2051 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:49:08.527891 kubelet[2051]: I0412 18:49:08.527846 2051 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:49:08.562657 kubelet[2051]: I0412 18:49:08.541351 2051 server.go:1256] "Started kubelet" Apr 12 18:49:08.562657 kubelet[2051]: I0412 18:49:08.543478 2051 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:49:08.562657 kubelet[2051]: I0412 18:49:08.562516 2051 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:49:08.563056 kubelet[2051]: I0412 18:49:08.556154 2051 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:49:08.565977 kubelet[2051]: I0412 18:49:08.564281 2051 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:49:08.565977 kubelet[2051]: I0412 18:49:08.564552 2051 reconciler_new.go:29] "Reconciler: start to sync state" Apr 
12 18:49:08.565977 kubelet[2051]: I0412 18:49:08.557598 2051 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:49:08.565977 kubelet[2051]: E0412 18:49:08.564791 2051 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:49:08.565977 kubelet[2051]: I0412 18:49:08.565081 2051 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:49:08.567273 kubelet[2051]: I0412 18:49:08.567229 2051 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:49:08.575170 kubelet[2051]: I0412 18:49:08.573355 2051 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:49:08.575170 kubelet[2051]: I0412 18:49:08.573505 2051 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:49:08.575170 kubelet[2051]: E0412 18:49:08.573635 2051 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:49:08.576847 kubelet[2051]: I0412 18:49:08.576821 2051 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:49:08.609463 kubelet[2051]: I0412 18:49:08.609379 2051 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:49:08.612163 kubelet[2051]: I0412 18:49:08.612070 2051 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 18:49:08.612163 kubelet[2051]: I0412 18:49:08.612154 2051 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:49:08.612434 kubelet[2051]: I0412 18:49:08.612408 2051 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:49:08.623064 kubelet[2051]: E0412 18:49:08.615621 2051 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:49:08.656005 kubelet[2051]: I0412 18:49:08.654594 2051 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:49:08.656005 kubelet[2051]: I0412 18:49:08.654658 2051 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:49:08.656005 kubelet[2051]: I0412 18:49:08.654711 2051 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:49:08.656005 kubelet[2051]: I0412 18:49:08.655066 2051 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:49:08.656005 kubelet[2051]: I0412 18:49:08.655141 2051 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:49:08.656005 kubelet[2051]: I0412 18:49:08.655154 2051 policy_none.go:49] "None policy: Start" Apr 12 18:49:08.658575 kubelet[2051]: I0412 18:49:08.658259 2051 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:49:08.658575 kubelet[2051]: I0412 18:49:08.658325 2051 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:49:08.658575 kubelet[2051]: I0412 18:49:08.658646 2051 state_mem.go:75] "Updated machine memory state" Apr 12 18:49:08.674307 kubelet[2051]: I0412 18:49:08.672487 2051 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:49:08.674307 kubelet[2051]: I0412 18:49:08.672884 2051 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:49:08.685998 kubelet[2051]: I0412 18:49:08.684567 2051 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Apr 12 18:49:08.719970 kubelet[2051]: I0412 18:49:08.718079 2051 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Apr 12 18:49:08.719970 kubelet[2051]: I0412 18:49:08.719744 2051 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 12 18:49:08.724099 kubelet[2051]: I0412 18:49:08.724023 2051 topology_manager.go:215] "Topology Admit Handler" podUID="f4e8212a5db7e0401319814fa9ad65c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 12 18:49:08.724301 kubelet[2051]: I0412 18:49:08.724238 2051 topology_manager.go:215] "Topology Admit Handler" podUID="5d5c5aff921df216fcba2c51c322ceb1" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 12 18:49:08.725228 kubelet[2051]: I0412 18:49:08.725202 2051 topology_manager.go:215] "Topology Admit Handler" podUID="bc6258e9cc399f25cc9b7122b4e6b7d3" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 12 18:49:08.750987 kubelet[2051]: E0412 18:49:08.750932 2051 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 12 18:49:08.753493 kubelet[2051]: E0412 18:49:08.753452 2051 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:08.770291 kubelet[2051]: I0412 18:49:08.770230 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:08.770655 kubelet[2051]: I0412 18:49:08.770638 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:08.772865 kubelet[2051]: I0412 18:49:08.772825 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:08.773031 kubelet[2051]: I0412 18:49:08.773002 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:08.773150 kubelet[2051]: I0412 18:49:08.773133 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc6258e9cc399f25cc9b7122b4e6b7d3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc6258e9cc399f25cc9b7122b4e6b7d3\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:49:08.773275 kubelet[2051]: I0412 18:49:08.773257 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e8212a5db7e0401319814fa9ad65c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f4e8212a5db7e0401319814fa9ad65c9\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:08.773386 kubelet[2051]: I0412 18:49:08.773368 2051 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d5c5aff921df216fcba2c51c322ceb1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5d5c5aff921df216fcba2c51c322ceb1\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:49:08.773505 kubelet[2051]: I0412 18:49:08.773487 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc6258e9cc399f25cc9b7122b4e6b7d3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6258e9cc399f25cc9b7122b4e6b7d3\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:49:08.773617 kubelet[2051]: I0412 18:49:08.773600 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc6258e9cc399f25cc9b7122b4e6b7d3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6258e9cc399f25cc9b7122b4e6b7d3\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:49:09.048426 kubelet[2051]: E0412 18:49:09.048288 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:09.054146 kubelet[2051]: E0412 18:49:09.053145 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:09.056535 kubelet[2051]: E0412 18:49:09.056353 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:09.526136 kubelet[2051]: I0412 18:49:09.525976 2051 apiserver.go:52] "Watching apiserver" Apr 12 18:49:09.565009 kubelet[2051]: I0412 18:49:09.564921 2051 desired_state_of_world_populator.go:159] "Finished populating 
initial desired state of world" Apr 12 18:49:09.640573 kubelet[2051]: E0412 18:49:09.640521 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:09.689778 kubelet[2051]: E0412 18:49:09.688305 2051 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 12 18:49:09.689778 kubelet[2051]: E0412 18:49:09.688702 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:09.689778 kubelet[2051]: E0412 18:49:09.689127 2051 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 12 18:49:09.689778 kubelet[2051]: E0412 18:49:09.689504 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:09.839009 kubelet[2051]: I0412 18:49:09.838792 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.838731296 podStartE2EDuration="4.838731296s" podCreationTimestamp="2024-04-12 18:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:49:09.742897395 +0000 UTC m=+1.428742927" watchObservedRunningTime="2024-04-12 18:49:09.838731296 +0000 UTC m=+1.524576838" Apr 12 18:49:10.080053 kubelet[2051]: I0412 18:49:10.078740 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.078689687 
podStartE2EDuration="4.078689687s" podCreationTimestamp="2024-04-12 18:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:49:10.078473667 +0000 UTC m=+1.764319199" watchObservedRunningTime="2024-04-12 18:49:10.078689687 +0000 UTC m=+1.764535219" Apr 12 18:49:10.080053 kubelet[2051]: I0412 18:49:10.078923 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.078898533 podStartE2EDuration="2.078898533s" podCreationTimestamp="2024-04-12 18:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:49:09.861929852 +0000 UTC m=+1.547775404" watchObservedRunningTime="2024-04-12 18:49:10.078898533 +0000 UTC m=+1.764744065" Apr 12 18:49:10.658647 kubelet[2051]: E0412 18:49:10.656352 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:10.658647 kubelet[2051]: E0412 18:49:10.656837 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:10.658647 kubelet[2051]: E0412 18:49:10.658130 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:11.612094 sudo[2063]: pam_unix(sudo:session): session closed for user root Apr 12 18:49:13.355590 sudo[1268]: pam_unix(sudo:session): session closed for user root Apr 12 18:49:13.365927 sshd[1264]: pam_unix(sshd:session): session closed for user core Apr 12 18:49:13.369929 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:55184.service: Deactivated successfully. 
Apr 12 18:49:13.370924 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:49:13.371101 systemd[1]: session-5.scope: Consumed 8.750s CPU time. Apr 12 18:49:13.378321 systemd-logind[1172]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:49:13.383865 systemd-logind[1172]: Removed session 5. Apr 12 18:49:16.232962 kubelet[2051]: E0412 18:49:16.228943 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:16.775508 kubelet[2051]: E0412 18:49:16.772514 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:18.023162 kubelet[2051]: E0412 18:49:18.023118 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:18.481975 kubelet[2051]: E0412 18:49:18.481890 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:18.786505 kubelet[2051]: E0412 18:49:18.780359 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:18.786505 kubelet[2051]: E0412 18:49:18.782725 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:19.782439 kubelet[2051]: E0412 18:49:19.782203 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:21.573232 
kubelet[2051]: I0412 18:49:21.571834 2051 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:49:21.573232 kubelet[2051]: I0412 18:49:21.572648 2051 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:49:21.573800 env[1180]: time="2024-04-12T18:49:21.572333149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:49:21.673058 kubelet[2051]: I0412 18:49:21.673011 2051 topology_manager.go:215] "Topology Admit Handler" podUID="ac1652b2-42cb-4a9f-86d5-63544c1a77f8" podNamespace="kube-system" podName="kube-proxy-lhhtv" Apr 12 18:49:21.693574 systemd[1]: Created slice kubepods-besteffort-podac1652b2_42cb_4a9f_86d5_63544c1a77f8.slice. Apr 12 18:49:21.844929 kubelet[2051]: I0412 18:49:21.844435 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac1652b2-42cb-4a9f-86d5-63544c1a77f8-lib-modules\") pod \"kube-proxy-lhhtv\" (UID: \"ac1652b2-42cb-4a9f-86d5-63544c1a77f8\") " pod="kube-system/kube-proxy-lhhtv" Apr 12 18:49:21.844929 kubelet[2051]: I0412 18:49:21.844511 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac1652b2-42cb-4a9f-86d5-63544c1a77f8-kube-proxy\") pod \"kube-proxy-lhhtv\" (UID: \"ac1652b2-42cb-4a9f-86d5-63544c1a77f8\") " pod="kube-system/kube-proxy-lhhtv" Apr 12 18:49:21.844929 kubelet[2051]: I0412 18:49:21.844539 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6m4j\" (UniqueName: \"kubernetes.io/projected/ac1652b2-42cb-4a9f-86d5-63544c1a77f8-kube-api-access-x6m4j\") pod \"kube-proxy-lhhtv\" (UID: \"ac1652b2-42cb-4a9f-86d5-63544c1a77f8\") " pod="kube-system/kube-proxy-lhhtv" Apr 12 18:49:21.844929 kubelet[2051]: 
I0412 18:49:21.844566 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac1652b2-42cb-4a9f-86d5-63544c1a77f8-xtables-lock\") pod \"kube-proxy-lhhtv\" (UID: \"ac1652b2-42cb-4a9f-86d5-63544c1a77f8\") " pod="kube-system/kube-proxy-lhhtv" Apr 12 18:49:21.931673 kubelet[2051]: I0412 18:49:21.931613 2051 topology_manager.go:215] "Topology Admit Handler" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" podNamespace="kube-system" podName="cilium-29m6t" Apr 12 18:49:21.947387 systemd[1]: Created slice kubepods-burstable-pod4e4e70a9_9ae8_4c42_8c0c_d95a8d6d38f0.slice. Apr 12 18:49:22.031422 kubelet[2051]: E0412 18:49:22.029341 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:22.031422 kubelet[2051]: I0412 18:49:22.030816 2051 topology_manager.go:215] "Topology Admit Handler" podUID="d0fb4859-8eba-4fcf-8555-87a9dbbba1f8" podNamespace="kube-system" podName="cilium-operator-5cc964979-rffv9" Apr 12 18:49:22.032809 env[1180]: time="2024-04-12T18:49:22.032055656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhhtv,Uid:ac1652b2-42cb-4a9f-86d5-63544c1a77f8,Namespace:kube-system,Attempt:0,}" Apr 12 18:49:22.045267 kubelet[2051]: I0412 18:49:22.045180 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cni-path\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045267 kubelet[2051]: I0412 18:49:22.045247 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-config-path\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045267 kubelet[2051]: I0412 18:49:22.045277 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-xtables-lock\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045586 kubelet[2051]: I0412 18:49:22.045304 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-net\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045586 kubelet[2051]: I0412 18:49:22.045332 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-clustermesh-secrets\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045586 kubelet[2051]: I0412 18:49:22.045363 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-bpf-maps\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045586 kubelet[2051]: I0412 18:49:22.045389 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hostproc\") pod \"cilium-29m6t\" (UID: 
\"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045586 kubelet[2051]: I0412 18:49:22.045414 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-etc-cni-netd\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045586 kubelet[2051]: I0412 18:49:22.045440 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-kernel\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045880 kubelet[2051]: I0412 18:49:22.045465 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-run\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045880 kubelet[2051]: I0412 18:49:22.045491 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-cgroup\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045880 kubelet[2051]: I0412 18:49:22.045522 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbz4\" (UniqueName: \"kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-kube-api-access-hqbz4\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045880 kubelet[2051]: 
I0412 18:49:22.045547 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-lib-modules\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.045880 kubelet[2051]: I0412 18:49:22.045574 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hubble-tls\") pod \"cilium-29m6t\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") " pod="kube-system/cilium-29m6t" Apr 12 18:49:22.054305 systemd[1]: Created slice kubepods-besteffort-podd0fb4859_8eba_4fcf_8555_87a9dbbba1f8.slice. Apr 12 18:49:22.098440 env[1180]: time="2024-04-12T18:49:22.096730648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:49:22.098440 env[1180]: time="2024-04-12T18:49:22.096793265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:49:22.098440 env[1180]: time="2024-04-12T18:49:22.096807351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:49:22.098440 env[1180]: time="2024-04-12T18:49:22.097142987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/180ba30958313f5546355731fad3536e49d2db6b7dde10610fc78174e2442132 pid=2146 runtime=io.containerd.runc.v2 Apr 12 18:49:22.147233 kubelet[2051]: I0412 18:49:22.146361 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-cilium-config-path\") pod \"cilium-operator-5cc964979-rffv9\" (UID: \"d0fb4859-8eba-4fcf-8555-87a9dbbba1f8\") " pod="kube-system/cilium-operator-5cc964979-rffv9" Apr 12 18:49:22.147233 kubelet[2051]: I0412 18:49:22.146485 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxrmt\" (UniqueName: \"kubernetes.io/projected/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-kube-api-access-zxrmt\") pod \"cilium-operator-5cc964979-rffv9\" (UID: \"d0fb4859-8eba-4fcf-8555-87a9dbbba1f8\") " pod="kube-system/cilium-operator-5cc964979-rffv9" Apr 12 18:49:22.151467 systemd[1]: Started cri-containerd-180ba30958313f5546355731fad3536e49d2db6b7dde10610fc78174e2442132.scope. 
Apr 12 18:49:22.657739 kubelet[2051]: E0412 18:49:22.657138 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:22.660248 env[1180]: time="2024-04-12T18:49:22.658647023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rffv9,Uid:d0fb4859-8eba-4fcf-8555-87a9dbbba1f8,Namespace:kube-system,Attempt:0,}" Apr 12 18:49:22.671224 env[1180]: time="2024-04-12T18:49:22.671131149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhhtv,Uid:ac1652b2-42cb-4a9f-86d5-63544c1a77f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"180ba30958313f5546355731fad3536e49d2db6b7dde10610fc78174e2442132\"" Apr 12 18:49:22.672392 kubelet[2051]: E0412 18:49:22.672329 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:22.693426 env[1180]: time="2024-04-12T18:49:22.693348065Z" level=info msg="CreateContainer within sandbox \"180ba30958313f5546355731fad3536e49d2db6b7dde10610fc78174e2442132\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:49:22.741549 env[1180]: time="2024-04-12T18:49:22.738630125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:49:22.741549 env[1180]: time="2024-04-12T18:49:22.741470408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:49:22.741808 env[1180]: time="2024-04-12T18:49:22.741553984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:49:22.743683 env[1180]: time="2024-04-12T18:49:22.741897443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c pid=2190 runtime=io.containerd.runc.v2 Apr 12 18:49:22.769762 env[1180]: time="2024-04-12T18:49:22.769665043Z" level=info msg="CreateContainer within sandbox \"180ba30958313f5546355731fad3536e49d2db6b7dde10610fc78174e2442132\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"56cf58d4c9047a683af007456376fc1ece52376a41578aba8dae336e94a42e74\"" Apr 12 18:49:22.771282 env[1180]: time="2024-04-12T18:49:22.771228877Z" level=info msg="StartContainer for \"56cf58d4c9047a683af007456376fc1ece52376a41578aba8dae336e94a42e74\"" Apr 12 18:49:22.783271 systemd[1]: Started cri-containerd-5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c.scope. Apr 12 18:49:22.870565 kubelet[2051]: E0412 18:49:22.870492 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:22.872968 env[1180]: time="2024-04-12T18:49:22.871539442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29m6t,Uid:4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0,Namespace:kube-system,Attempt:0,}" Apr 12 18:49:22.970133 systemd[1]: Started cri-containerd-56cf58d4c9047a683af007456376fc1ece52376a41578aba8dae336e94a42e74.scope. Apr 12 18:49:23.227000 env[1180]: time="2024-04-12T18:49:23.226550157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:49:23.227000 env[1180]: time="2024-04-12T18:49:23.226617613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:49:23.227000 env[1180]: time="2024-04-12T18:49:23.226635376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:49:23.227902 env[1180]: time="2024-04-12T18:49:23.227585788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28 pid=2244 runtime=io.containerd.runc.v2 Apr 12 18:49:23.264594 systemd[1]: Started cri-containerd-638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28.scope. Apr 12 18:49:23.318669 env[1180]: time="2024-04-12T18:49:23.318611146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-rffv9,Uid:d0fb4859-8eba-4fcf-8555-87a9dbbba1f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c\"" Apr 12 18:49:23.319762 kubelet[2051]: E0412 18:49:23.319720 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:23.321748 env[1180]: time="2024-04-12T18:49:23.321413961Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:49:23.335055 env[1180]: time="2024-04-12T18:49:23.334987899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29m6t,Uid:4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\"" Apr 12 18:49:23.341273 kubelet[2051]: E0412 18:49:23.336653 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 12 18:49:23.377714 env[1180]: time="2024-04-12T18:49:23.377542107Z" level=info msg="StartContainer for \"56cf58d4c9047a683af007456376fc1ece52376a41578aba8dae336e94a42e74\" returns successfully" Apr 12 18:49:23.802902 kubelet[2051]: E0412 18:49:23.801939 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:23.874917 kubelet[2051]: I0412 18:49:23.866921 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lhhtv" podStartSLOduration=2.866871545 podStartE2EDuration="2.866871545s" podCreationTimestamp="2024-04-12 18:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:49:23.8643534 +0000 UTC m=+15.550198932" watchObservedRunningTime="2024-04-12 18:49:23.866871545 +0000 UTC m=+15.552717077" Apr 12 18:49:24.642394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408966751.mount: Deactivated successfully. 
Apr 12 18:49:24.827906 kubelet[2051]: E0412 18:49:24.825150 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:25.972254 env[1180]: time="2024-04-12T18:49:25.972159227Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:49:25.975526 env[1180]: time="2024-04-12T18:49:25.975447694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:49:25.978718 env[1180]: time="2024-04-12T18:49:25.978634209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:49:25.979065 env[1180]: time="2024-04-12T18:49:25.979013596Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 18:49:25.985586 env[1180]: time="2024-04-12T18:49:25.985499358Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:49:25.987379 env[1180]: time="2024-04-12T18:49:25.987307391Z" level=info msg="CreateContainer within sandbox \"5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:49:26.039837 env[1180]: time="2024-04-12T18:49:26.037362063Z" level=info 
msg="CreateContainer within sandbox \"5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\"" Apr 12 18:49:26.039837 env[1180]: time="2024-04-12T18:49:26.039299640Z" level=info msg="StartContainer for \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\"" Apr 12 18:49:26.095958 systemd[1]: Started cri-containerd-bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672.scope. Apr 12 18:49:26.157703 env[1180]: time="2024-04-12T18:49:26.157501902Z" level=info msg="StartContainer for \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\" returns successfully" Apr 12 18:49:26.878660 kubelet[2051]: E0412 18:49:26.878619 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:27.882191 kubelet[2051]: E0412 18:49:27.882102 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:28.651082 kubelet[2051]: I0412 18:49:28.650693 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-rffv9" podStartSLOduration=4.990926681 podStartE2EDuration="7.650634598s" podCreationTimestamp="2024-04-12 18:49:21 +0000 UTC" firstStartedPulling="2024-04-12 18:49:23.320622977 +0000 UTC m=+15.006468509" lastFinishedPulling="2024-04-12 18:49:25.980330894 +0000 UTC m=+17.666176426" observedRunningTime="2024-04-12 18:49:26.961882363 +0000 UTC m=+18.647727895" watchObservedRunningTime="2024-04-12 18:49:28.650634598 +0000 UTC m=+20.336480130" Apr 12 18:49:36.190895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730294674.mount: Deactivated successfully. 
Apr 12 18:49:42.604982 env[1180]: time="2024-04-12T18:49:42.604891723Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:49:42.615564 env[1180]: time="2024-04-12T18:49:42.615451555Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:49:42.624358 env[1180]: time="2024-04-12T18:49:42.624290475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:49:42.624956 env[1180]: time="2024-04-12T18:49:42.624923670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 18:49:42.635215 env[1180]: time="2024-04-12T18:49:42.635148807Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:49:42.749066 env[1180]: time="2024-04-12T18:49:42.740783115Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\"" Apr 12 18:49:42.749066 env[1180]: time="2024-04-12T18:49:42.741738835Z" level=info msg="StartContainer for \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\"" Apr 12 18:49:42.822937 systemd[1]: Started 
cri-containerd-b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba.scope. Apr 12 18:49:42.927041 env[1180]: time="2024-04-12T18:49:42.926817566Z" level=info msg="StartContainer for \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\" returns successfully" Apr 12 18:49:42.928948 systemd[1]: cri-containerd-b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba.scope: Deactivated successfully. Apr 12 18:49:42.995478 kubelet[2051]: E0412 18:49:42.993003 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:43.670081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba-rootfs.mount: Deactivated successfully. Apr 12 18:49:44.002252 kubelet[2051]: E0412 18:49:43.996501 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:44.262937 env[1180]: time="2024-04-12T18:49:44.262738791Z" level=info msg="shim disconnected" id=b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba Apr 12 18:49:44.262937 env[1180]: time="2024-04-12T18:49:44.262803411Z" level=warning msg="cleaning up after shim disconnected" id=b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba namespace=k8s.io Apr 12 18:49:44.262937 env[1180]: time="2024-04-12T18:49:44.262816927Z" level=info msg="cleaning up dead shim" Apr 12 18:49:44.322638 env[1180]: time="2024-04-12T18:49:44.309487241Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2515 runtime=io.containerd.runc.v2\n" Apr 12 18:49:45.007250 kubelet[2051]: E0412 18:49:45.006990 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:45.012807 env[1180]: time="2024-04-12T18:49:45.012428010Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:49:45.572813 env[1180]: time="2024-04-12T18:49:45.572729764Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\"" Apr 12 18:49:45.573594 env[1180]: time="2024-04-12T18:49:45.573554953Z" level=info msg="StartContainer for \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\"" Apr 12 18:49:45.618321 systemd[1]: Started cri-containerd-b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8.scope. Apr 12 18:49:45.732214 env[1180]: time="2024-04-12T18:49:45.731824275Z" level=info msg="StartContainer for \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\" returns successfully" Apr 12 18:49:45.753636 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:49:45.753994 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:49:45.755318 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:49:45.757699 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:49:45.768892 systemd[1]: cri-containerd-b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8.scope: Deactivated successfully. 
Apr 12 18:49:45.846715 env[1180]: time="2024-04-12T18:49:45.846535816Z" level=info msg="shim disconnected" id=b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8 Apr 12 18:49:45.846715 env[1180]: time="2024-04-12T18:49:45.846603893Z" level=warning msg="cleaning up after shim disconnected" id=b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8 namespace=k8s.io Apr 12 18:49:45.846715 env[1180]: time="2024-04-12T18:49:45.846615656Z" level=info msg="cleaning up dead shim" Apr 12 18:49:45.853009 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:49:45.864475 env[1180]: time="2024-04-12T18:49:45.863270612Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:49:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2576 runtime=io.containerd.runc.v2\n" Apr 12 18:49:46.017803 kubelet[2051]: E0412 18:49:46.017576 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:46.029224 env[1180]: time="2024-04-12T18:49:46.028735174Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:49:46.114293 env[1180]: time="2024-04-12T18:49:46.114069747Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\"" Apr 12 18:49:46.115134 env[1180]: time="2024-04-12T18:49:46.115089566Z" level=info msg="StartContainer for \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\"" Apr 12 18:49:46.147028 systemd[1]: Started cri-containerd-558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee.scope. 
Apr 12 18:49:46.197740 systemd[1]: cri-containerd-558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee.scope: Deactivated successfully. Apr 12 18:49:46.208763 env[1180]: time="2024-04-12T18:49:46.208658993Z" level=info msg="StartContainer for \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\" returns successfully" Apr 12 18:49:46.319572 env[1180]: time="2024-04-12T18:49:46.319087939Z" level=info msg="shim disconnected" id=558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee Apr 12 18:49:46.319572 env[1180]: time="2024-04-12T18:49:46.319158375Z" level=warning msg="cleaning up after shim disconnected" id=558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee namespace=k8s.io Apr 12 18:49:46.319572 env[1180]: time="2024-04-12T18:49:46.319172082Z" level=info msg="cleaning up dead shim" Apr 12 18:49:46.350181 env[1180]: time="2024-04-12T18:49:46.350106274Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:49:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2630 runtime=io.containerd.runc.v2\n" Apr 12 18:49:46.368513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8-rootfs.mount: Deactivated successfully. Apr 12 18:49:47.040450 kubelet[2051]: E0412 18:49:47.040218 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:47.047100 env[1180]: time="2024-04-12T18:49:47.047018910Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:49:47.111118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311398075.mount: Deactivated successfully. 
Apr 12 18:49:47.172152 env[1180]: time="2024-04-12T18:49:47.172051857Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\"" Apr 12 18:49:47.173023 env[1180]: time="2024-04-12T18:49:47.172967605Z" level=info msg="StartContainer for \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\"" Apr 12 18:49:47.206547 systemd[1]: Started cri-containerd-e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65.scope. Apr 12 18:49:47.303812 systemd[1]: cri-containerd-e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65.scope: Deactivated successfully. Apr 12 18:49:47.346897 env[1180]: time="2024-04-12T18:49:47.346760645Z" level=info msg="StartContainer for \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\" returns successfully" Apr 12 18:49:47.432129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65-rootfs.mount: Deactivated successfully. 
Apr 12 18:49:47.454952 env[1180]: time="2024-04-12T18:49:47.452251792Z" level=info msg="shim disconnected" id=e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65 Apr 12 18:49:47.454952 env[1180]: time="2024-04-12T18:49:47.453772116Z" level=warning msg="cleaning up after shim disconnected" id=e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65 namespace=k8s.io Apr 12 18:49:47.457690 env[1180]: time="2024-04-12T18:49:47.455346284Z" level=info msg="cleaning up dead shim" Apr 12 18:49:47.504683 env[1180]: time="2024-04-12T18:49:47.504567477Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:49:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2685 runtime=io.containerd.runc.v2\n" Apr 12 18:49:48.055796 kubelet[2051]: E0412 18:49:48.051673 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:48.056905 env[1180]: time="2024-04-12T18:49:48.056710892Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:49:48.124503 env[1180]: time="2024-04-12T18:49:48.122181133Z" level=info msg="CreateContainer within sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\"" Apr 12 18:49:48.124503 env[1180]: time="2024-04-12T18:49:48.123165622Z" level=info msg="StartContainer for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\"" Apr 12 18:49:48.165368 systemd[1]: Started cri-containerd-07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8.scope. 
Apr 12 18:49:48.368873 env[1180]: time="2024-04-12T18:49:48.368791912Z" level=info msg="StartContainer for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" returns successfully" Apr 12 18:49:48.419991 systemd[1]: run-containerd-runc-k8s.io-07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8-runc.hq2YDp.mount: Deactivated successfully. Apr 12 18:49:48.596468 kubelet[2051]: I0412 18:49:48.594983 2051 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 12 18:49:49.060026 kubelet[2051]: E0412 18:49:49.059988 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:49.090155 kubelet[2051]: I0412 18:49:49.090112 2051 topology_manager.go:215] "Topology Admit Handler" podUID="0330692a-ed83-4bca-8f95-8f7034e452b0" podNamespace="kube-system" podName="coredns-76f75df574-57n7x" Apr 12 18:49:49.098489 systemd[1]: Created slice kubepods-burstable-pod0330692a_ed83_4bca_8f95_8f7034e452b0.slice. Apr 12 18:49:49.105869 kubelet[2051]: I0412 18:49:49.105807 2051 topology_manager.go:215] "Topology Admit Handler" podUID="5fe382df-cc45-4271-a2ae-0519410aea46" podNamespace="kube-system" podName="coredns-76f75df574-kbs7q" Apr 12 18:49:49.113008 systemd[1]: Created slice kubepods-burstable-pod5fe382df_cc45_4271_a2ae_0519410aea46.slice. 
Apr 12 18:49:49.235730 kubelet[2051]: I0412 18:49:49.235673 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fe382df-cc45-4271-a2ae-0519410aea46-config-volume\") pod \"coredns-76f75df574-kbs7q\" (UID: \"5fe382df-cc45-4271-a2ae-0519410aea46\") " pod="kube-system/coredns-76f75df574-kbs7q" Apr 12 18:49:49.236114 kubelet[2051]: I0412 18:49:49.236097 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqr8\" (UniqueName: \"kubernetes.io/projected/0330692a-ed83-4bca-8f95-8f7034e452b0-kube-api-access-9nqr8\") pod \"coredns-76f75df574-57n7x\" (UID: \"0330692a-ed83-4bca-8f95-8f7034e452b0\") " pod="kube-system/coredns-76f75df574-57n7x" Apr 12 18:49:49.236273 kubelet[2051]: I0412 18:49:49.236254 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0330692a-ed83-4bca-8f95-8f7034e452b0-config-volume\") pod \"coredns-76f75df574-57n7x\" (UID: \"0330692a-ed83-4bca-8f95-8f7034e452b0\") " pod="kube-system/coredns-76f75df574-57n7x" Apr 12 18:49:49.237691 kubelet[2051]: I0412 18:49:49.237671 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx925\" (UniqueName: \"kubernetes.io/projected/5fe382df-cc45-4271-a2ae-0519410aea46-kube-api-access-qx925\") pod \"coredns-76f75df574-kbs7q\" (UID: \"5fe382df-cc45-4271-a2ae-0519410aea46\") " pod="kube-system/coredns-76f75df574-kbs7q" Apr 12 18:49:49.467979 kubelet[2051]: I0412 18:49:49.466420 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-29m6t" podStartSLOduration=9.180519534 podStartE2EDuration="28.466364355s" podCreationTimestamp="2024-04-12 18:49:21 +0000 UTC" firstStartedPulling="2024-04-12 18:49:23.342219541 +0000 UTC m=+15.028065073" 
lastFinishedPulling="2024-04-12 18:49:42.628064371 +0000 UTC m=+34.313909894" observedRunningTime="2024-04-12 18:49:49.265080347 +0000 UTC m=+40.950925879" watchObservedRunningTime="2024-04-12 18:49:49.466364355 +0000 UTC m=+41.152209897" Apr 12 18:49:49.704109 kubelet[2051]: E0412 18:49:49.703650 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:49.717528 kubelet[2051]: E0412 18:49:49.717472 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:49.719797 env[1180]: time="2024-04-12T18:49:49.719655970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kbs7q,Uid:5fe382df-cc45-4271-a2ae-0519410aea46,Namespace:kube-system,Attempt:0,}" Apr 12 18:49:49.720406 env[1180]: time="2024-04-12T18:49:49.720299179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-57n7x,Uid:0330692a-ed83-4bca-8f95-8f7034e452b0,Namespace:kube-system,Attempt:0,}" Apr 12 18:49:50.875112 kubelet[2051]: E0412 18:49:50.873464 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:51.552590 systemd-networkd[1083]: cilium_host: Link UP Apr 12 18:49:51.559610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:49:51.559828 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:49:51.556048 systemd-networkd[1083]: cilium_net: Link UP Apr 12 18:49:51.556333 systemd-networkd[1083]: cilium_net: Gained carrier Apr 12 18:49:51.560341 systemd-networkd[1083]: cilium_host: Gained carrier Apr 12 18:49:51.790543 systemd-networkd[1083]: cilium_vxlan: Link UP Apr 12 18:49:51.790553 
systemd-networkd[1083]: cilium_vxlan: Gained carrier Apr 12 18:49:51.818133 systemd-networkd[1083]: cilium_net: Gained IPv6LL Apr 12 18:49:52.151912 kernel: NET: Registered PF_ALG protocol family Apr 12 18:49:52.497011 systemd-networkd[1083]: cilium_host: Gained IPv6LL Apr 12 18:49:53.140017 systemd-networkd[1083]: cilium_vxlan: Gained IPv6LL Apr 12 18:49:53.675309 systemd-networkd[1083]: lxc_health: Link UP Apr 12 18:49:53.699903 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:49:53.700109 systemd-networkd[1083]: lxc_health: Gained carrier Apr 12 18:49:54.261865 systemd-networkd[1083]: lxcf970a975fda6: Link UP Apr 12 18:49:54.285912 kernel: eth0: renamed from tmp77a03 Apr 12 18:49:54.297341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf970a975fda6: link becomes ready Apr 12 18:49:54.292722 systemd-networkd[1083]: lxcf970a975fda6: Gained carrier Apr 12 18:49:54.367008 systemd-networkd[1083]: lxc5b7cfb379f36: Link UP Apr 12 18:49:54.394096 kernel: eth0: renamed from tmpb4699 Apr 12 18:49:54.400944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5b7cfb379f36: link becomes ready Apr 12 18:49:54.401503 systemd-networkd[1083]: lxc5b7cfb379f36: Gained carrier Apr 12 18:49:54.873615 kubelet[2051]: E0412 18:49:54.873543 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:55.099612 kubelet[2051]: E0412 18:49:55.099557 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:49:55.440267 systemd-networkd[1083]: lxcf970a975fda6: Gained IPv6LL Apr 12 18:49:55.569190 systemd-networkd[1083]: lxc_health: Gained IPv6LL Apr 12 18:49:55.760125 systemd-networkd[1083]: lxc5b7cfb379f36: Gained IPv6LL Apr 12 18:49:56.106522 kubelet[2051]: E0412 18:49:56.105629 2051 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:00.663753 env[1180]: time="2024-04-12T18:50:00.662746291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:50:00.663753 env[1180]: time="2024-04-12T18:50:00.662809952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:50:00.663753 env[1180]: time="2024-04-12T18:50:00.662822076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:50:00.666564 env[1180]: time="2024-04-12T18:50:00.665422121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77a03c5a875fb090de5bf3b4c00ac18ecfa78f63b56d39c84a26570f5950ef20 pid=3259 runtime=io.containerd.runc.v2 Apr 12 18:50:00.681819 env[1180]: time="2024-04-12T18:50:00.681629845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:50:00.681819 env[1180]: time="2024-04-12T18:50:00.681711802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:50:00.682433 env[1180]: time="2024-04-12T18:50:00.681728895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:50:00.682433 env[1180]: time="2024-04-12T18:50:00.681943645Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b46993ae1f2e4d4b294a790dbcf8313df7c6e31dbaedc918f894352abdf17742 pid=3271 runtime=io.containerd.runc.v2 Apr 12 18:50:00.722531 systemd[1]: run-containerd-runc-k8s.io-77a03c5a875fb090de5bf3b4c00ac18ecfa78f63b56d39c84a26570f5950ef20-runc.R9FCHV.mount: Deactivated successfully. Apr 12 18:50:00.737693 systemd[1]: Started cri-containerd-77a03c5a875fb090de5bf3b4c00ac18ecfa78f63b56d39c84a26570f5950ef20.scope. Apr 12 18:50:00.783261 systemd[1]: Started cri-containerd-b46993ae1f2e4d4b294a790dbcf8313df7c6e31dbaedc918f894352abdf17742.scope. Apr 12 18:50:00.863375 systemd-resolved[1124]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:50:00.880073 systemd-resolved[1124]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:50:00.950890 env[1180]: time="2024-04-12T18:50:00.950294596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kbs7q,Uid:5fe382df-cc45-4271-a2ae-0519410aea46,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a03c5a875fb090de5bf3b4c00ac18ecfa78f63b56d39c84a26570f5950ef20\"" Apr 12 18:50:00.952553 kubelet[2051]: E0412 18:50:00.952234 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:00.957314 env[1180]: time="2024-04-12T18:50:00.957260385Z" level=info msg="CreateContainer within sandbox \"77a03c5a875fb090de5bf3b4c00ac18ecfa78f63b56d39c84a26570f5950ef20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:50:00.976174 env[1180]: time="2024-04-12T18:50:00.976060199Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-57n7x,Uid:0330692a-ed83-4bca-8f95-8f7034e452b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b46993ae1f2e4d4b294a790dbcf8313df7c6e31dbaedc918f894352abdf17742\"" Apr 12 18:50:00.979783 kubelet[2051]: E0412 18:50:00.977622 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:00.984656 env[1180]: time="2024-04-12T18:50:00.984601743Z" level=info msg="CreateContainer within sandbox \"b46993ae1f2e4d4b294a790dbcf8313df7c6e31dbaedc918f894352abdf17742\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:50:01.037592 env[1180]: time="2024-04-12T18:50:01.037228701Z" level=info msg="CreateContainer within sandbox \"77a03c5a875fb090de5bf3b4c00ac18ecfa78f63b56d39c84a26570f5950ef20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95dd7512067bd037fe91e61352ab0d21367ac529defc77944d2234a34b39a3e9\"" Apr 12 18:50:01.042775 env[1180]: time="2024-04-12T18:50:01.038490494Z" level=info msg="StartContainer for \"95dd7512067bd037fe91e61352ab0d21367ac529defc77944d2234a34b39a3e9\"" Apr 12 18:50:01.092709 env[1180]: time="2024-04-12T18:50:01.091776968Z" level=info msg="CreateContainer within sandbox \"b46993ae1f2e4d4b294a790dbcf8313df7c6e31dbaedc918f894352abdf17742\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d1c0c3f86d3d7eb00bc39874fdf48467d1970126c3c26bb37bdf32523c1fcca\"" Apr 12 18:50:01.093209 env[1180]: time="2024-04-12T18:50:01.093177737Z" level=info msg="StartContainer for \"8d1c0c3f86d3d7eb00bc39874fdf48467d1970126c3c26bb37bdf32523c1fcca\"" Apr 12 18:50:01.109107 systemd[1]: Started cri-containerd-95dd7512067bd037fe91e61352ab0d21367ac529defc77944d2234a34b39a3e9.scope. Apr 12 18:50:01.165740 systemd[1]: Started cri-containerd-8d1c0c3f86d3d7eb00bc39874fdf48467d1970126c3c26bb37bdf32523c1fcca.scope. 
Apr 12 18:50:01.431904 env[1180]: time="2024-04-12T18:50:01.431603773Z" level=info msg="StartContainer for \"95dd7512067bd037fe91e61352ab0d21367ac529defc77944d2234a34b39a3e9\" returns successfully" Apr 12 18:50:01.512482 env[1180]: time="2024-04-12T18:50:01.512392455Z" level=info msg="StartContainer for \"8d1c0c3f86d3d7eb00bc39874fdf48467d1970126c3c26bb37bdf32523c1fcca\" returns successfully" Apr 12 18:50:02.134506 kubelet[2051]: E0412 18:50:02.134456 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:02.136231 kubelet[2051]: E0412 18:50:02.136182 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:02.157010 kubelet[2051]: I0412 18:50:02.156738 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-57n7x" podStartSLOduration=41.156668535 podStartE2EDuration="41.156668535s" podCreationTimestamp="2024-04-12 18:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:50:02.155986602 +0000 UTC m=+53.841832154" watchObservedRunningTime="2024-04-12 18:50:02.156668535 +0000 UTC m=+53.842514067" Apr 12 18:50:03.141568 kubelet[2051]: E0412 18:50:03.141517 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:03.142167 kubelet[2051]: E0412 18:50:03.141787 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:50:04.154486 kubelet[2051]: E0412 18:50:04.152652 2051 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:04.154486 kubelet[2051]: E0412 18:50:04.153738 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:06.238102 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:39770.service.
Apr 12 18:50:06.329964 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 39770 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:06.335877 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:06.366960 systemd-logind[1172]: New session 6 of user core.
Apr 12 18:50:06.367786 systemd[1]: Started session-6.scope.
Apr 12 18:50:06.706756 sshd[3422]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:06.716743 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:39770.service: Deactivated successfully.
Apr 12 18:50:06.718331 systemd[1]: session-6.scope: Deactivated successfully.
Apr 12 18:50:06.724154 systemd-logind[1172]: Session 6 logged out. Waiting for processes to exit.
Apr 12 18:50:06.729432 systemd-logind[1172]: Removed session 6.
Apr 12 18:50:11.715686 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:39824.service.
Apr 12 18:50:11.776662 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 39824 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:11.779490 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:11.799152 systemd-logind[1172]: New session 7 of user core.
Apr 12 18:50:11.800668 systemd[1]: Started session-7.scope.
Apr 12 18:50:12.006044 sshd[3441]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:12.012092 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:39824.service: Deactivated successfully.
Apr 12 18:50:12.013162 systemd[1]: session-7.scope: Deactivated successfully.
Apr 12 18:50:12.015248 systemd-logind[1172]: Session 7 logged out. Waiting for processes to exit.
Apr 12 18:50:12.016636 systemd-logind[1172]: Removed session 7.
Apr 12 18:50:17.010943 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:39838.service.
Apr 12 18:50:17.046703 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 39838 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:17.047756 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:17.051678 systemd-logind[1172]: New session 8 of user core.
Apr 12 18:50:17.052907 systemd[1]: Started session-8.scope.
Apr 12 18:50:17.166096 sshd[3455]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:17.169315 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:39838.service: Deactivated successfully.
Apr 12 18:50:17.170371 systemd[1]: session-8.scope: Deactivated successfully.
Apr 12 18:50:17.171500 systemd-logind[1172]: Session 8 logged out. Waiting for processes to exit.
Apr 12 18:50:17.172452 systemd-logind[1172]: Removed session 8.
Apr 12 18:50:22.172010 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:42950.service.
Apr 12 18:50:22.208216 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 42950 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:22.209617 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:22.213099 systemd-logind[1172]: New session 9 of user core.
Apr 12 18:50:22.213886 systemd[1]: Started session-9.scope.
Apr 12 18:50:22.314485 sshd[3471]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:22.316813 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:42950.service: Deactivated successfully.
Apr 12 18:50:22.317464 systemd[1]: session-9.scope: Deactivated successfully.
Apr 12 18:50:22.318143 systemd-logind[1172]: Session 9 logged out. Waiting for processes to exit.
Apr 12 18:50:22.318869 systemd-logind[1172]: Removed session 9.
Apr 12 18:50:26.613970 kubelet[2051]: E0412 18:50:26.613905 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:27.319520 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:42966.service.
Apr 12 18:50:27.354022 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 42966 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:27.355378 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:27.359164 systemd-logind[1172]: New session 10 of user core.
Apr 12 18:50:27.360038 systemd[1]: Started session-10.scope.
Apr 12 18:50:27.465918 sshd[3488]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:27.468134 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:42966.service: Deactivated successfully.
Apr 12 18:50:27.468796 systemd[1]: session-10.scope: Deactivated successfully.
Apr 12 18:50:27.469358 systemd-logind[1172]: Session 10 logged out. Waiting for processes to exit.
Apr 12 18:50:27.470067 systemd-logind[1172]: Removed session 10.
Apr 12 18:50:27.613228 kubelet[2051]: E0412 18:50:27.613183 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:32.470878 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:46164.service.
Apr 12 18:50:32.505256 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 46164 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:32.506447 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:32.509900 systemd-logind[1172]: New session 11 of user core.
Apr 12 18:50:32.510876 systemd[1]: Started session-11.scope.
Apr 12 18:50:32.618472 sshd[3503]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:32.622344 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:46164.service: Deactivated successfully.
Apr 12 18:50:32.623143 systemd[1]: session-11.scope: Deactivated successfully.
Apr 12 18:50:32.626585 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:46166.service.
Apr 12 18:50:32.627598 systemd-logind[1172]: Session 11 logged out. Waiting for processes to exit.
Apr 12 18:50:32.628745 systemd-logind[1172]: Removed session 11.
Apr 12 18:50:32.664194 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 46166 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:32.665776 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:32.670308 systemd-logind[1172]: New session 12 of user core.
Apr 12 18:50:32.671534 systemd[1]: Started session-12.scope.
Apr 12 18:50:32.829522 sshd[3517]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:32.833639 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:46174.service.
Apr 12 18:50:32.838793 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:46166.service: Deactivated successfully.
Apr 12 18:50:32.839421 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 18:50:32.844390 systemd-logind[1172]: Session 12 logged out. Waiting for processes to exit.
Apr 12 18:50:32.848000 systemd-logind[1172]: Removed session 12.
Apr 12 18:50:32.873491 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 46174 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:32.875017 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:32.878628 systemd-logind[1172]: New session 13 of user core.
Apr 12 18:50:32.879467 systemd[1]: Started session-13.scope.
Apr 12 18:50:32.984044 sshd[3527]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:32.986543 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:46174.service: Deactivated successfully.
Apr 12 18:50:32.987410 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 18:50:32.988169 systemd-logind[1172]: Session 13 logged out. Waiting for processes to exit.
Apr 12 18:50:32.989097 systemd-logind[1172]: Removed session 13.
Apr 12 18:50:37.613660 kubelet[2051]: E0412 18:50:37.613592 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:37.987982 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:46180.service.
Apr 12 18:50:38.021342 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 46180 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:38.022589 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:38.025742 systemd-logind[1172]: New session 14 of user core.
Apr 12 18:50:38.026490 systemd[1]: Started session-14.scope.
Apr 12 18:50:38.129208 sshd[3541]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:38.131241 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:46180.service: Deactivated successfully.
Apr 12 18:50:38.132070 systemd[1]: session-14.scope: Deactivated successfully.
Apr 12 18:50:38.132661 systemd-logind[1172]: Session 14 logged out. Waiting for processes to exit.
Apr 12 18:50:38.133438 systemd-logind[1172]: Removed session 14.
Apr 12 18:50:41.613967 kubelet[2051]: E0412 18:50:41.613918 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:43.133361 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:40488.service.
Apr 12 18:50:43.172711 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 40488 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:43.173678 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:43.177288 systemd-logind[1172]: New session 15 of user core.
Apr 12 18:50:43.178086 systemd[1]: Started session-15.scope.
Apr 12 18:50:43.281247 sshd[3554]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:43.284128 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:40488.service: Deactivated successfully.
Apr 12 18:50:43.284633 systemd[1]: session-15.scope: Deactivated successfully.
Apr 12 18:50:43.285088 systemd-logind[1172]: Session 15 logged out. Waiting for processes to exit.
Apr 12 18:50:43.286099 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:40496.service.
Apr 12 18:50:43.286744 systemd-logind[1172]: Removed session 15.
Apr 12 18:50:43.319326 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 40496 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:43.320486 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:43.324085 systemd-logind[1172]: New session 16 of user core.
Apr 12 18:50:43.324902 systemd[1]: Started session-16.scope.
Apr 12 18:50:43.553596 sshd[3567]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:43.557543 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:40498.service.
Apr 12 18:50:43.558495 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:40496.service: Deactivated successfully.
Apr 12 18:50:43.559095 systemd[1]: session-16.scope: Deactivated successfully.
Apr 12 18:50:43.559758 systemd-logind[1172]: Session 16 logged out. Waiting for processes to exit.
Apr 12 18:50:43.560755 systemd-logind[1172]: Removed session 16.
Apr 12 18:50:43.594729 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:43.596190 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:43.600256 systemd-logind[1172]: New session 17 of user core.
Apr 12 18:50:43.601211 systemd[1]: Started session-17.scope.
Apr 12 18:50:45.065553 sshd[3577]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:45.068985 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:40498.service: Deactivated successfully.
Apr 12 18:50:45.069621 systemd[1]: session-17.scope: Deactivated successfully.
Apr 12 18:50:45.070317 systemd-logind[1172]: Session 17 logged out. Waiting for processes to exit.
Apr 12 18:50:45.071941 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:40504.service.
Apr 12 18:50:45.073476 systemd-logind[1172]: Removed session 17.
Apr 12 18:50:45.110896 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 40504 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:45.112475 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:45.119175 systemd-logind[1172]: New session 18 of user core.
Apr 12 18:50:45.120127 systemd[1]: Started session-18.scope.
Apr 12 18:50:45.374395 sshd[3598]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:45.378603 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:40504.service: Deactivated successfully.
Apr 12 18:50:45.381417 systemd[1]: session-18.scope: Deactivated successfully.
Apr 12 18:50:45.382407 systemd-logind[1172]: Session 18 logged out. Waiting for processes to exit.
Apr 12 18:50:45.384576 systemd[1]: Started sshd@18-10.0.0.68:22-10.0.0.1:40506.service.
Apr 12 18:50:45.385825 systemd-logind[1172]: Removed session 18.
Apr 12 18:50:45.422479 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 40506 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:45.424083 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:45.428587 systemd-logind[1172]: New session 19 of user core.
Apr 12 18:50:45.429622 systemd[1]: Started session-19.scope.
Apr 12 18:50:45.540000 sshd[3610]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:45.543314 systemd[1]: sshd@18-10.0.0.68:22-10.0.0.1:40506.service: Deactivated successfully.
Apr 12 18:50:45.544426 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:50:45.545247 systemd-logind[1172]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:50:45.546347 systemd-logind[1172]: Removed session 19.
Apr 12 18:50:50.544296 systemd[1]: Started sshd@19-10.0.0.68:22-10.0.0.1:52968.service.
Apr 12 18:50:50.577931 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 52968 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:50.578897 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:50.582355 systemd-logind[1172]: New session 20 of user core.
Apr 12 18:50:50.583520 systemd[1]: Started session-20.scope.
Apr 12 18:50:50.734245 sshd[3624]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:50.736767 systemd[1]: sshd@19-10.0.0.68:22-10.0.0.1:52968.service: Deactivated successfully.
Apr 12 18:50:50.737580 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:50:50.738178 systemd-logind[1172]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:50:50.739003 systemd-logind[1172]: Removed session 20.
Apr 12 18:50:55.739323 systemd[1]: Started sshd@20-10.0.0.68:22-10.0.0.1:52982.service.
Apr 12 18:50:55.773131 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:50:55.774197 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:50:55.777348 systemd-logind[1172]: New session 21 of user core.
Apr 12 18:50:55.778165 systemd[1]: Started session-21.scope.
Apr 12 18:50:55.877971 sshd[3643]: pam_unix(sshd:session): session closed for user core
Apr 12 18:50:55.880636 systemd[1]: sshd@20-10.0.0.68:22-10.0.0.1:52982.service: Deactivated successfully.
Apr 12 18:50:55.881316 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 18:50:55.882120 systemd-logind[1172]: Session 21 logged out. Waiting for processes to exit.
Apr 12 18:50:55.883631 systemd-logind[1172]: Removed session 21.
Apr 12 18:50:57.613825 kubelet[2051]: E0412 18:50:57.613767 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:50:57.614223 kubelet[2051]: E0412 18:50:57.613986 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:00.883036 systemd[1]: Started sshd@21-10.0.0.68:22-10.0.0.1:53396.service.
Apr 12 18:51:00.916245 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 53396 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:51:00.917159 sshd[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:51:00.920197 systemd-logind[1172]: New session 22 of user core.
Apr 12 18:51:00.920978 systemd[1]: Started session-22.scope.
Apr 12 18:51:01.019513 sshd[3657]: pam_unix(sshd:session): session closed for user core
Apr 12 18:51:01.022045 systemd[1]: sshd@21-10.0.0.68:22-10.0.0.1:53396.service: Deactivated successfully.
Apr 12 18:51:01.022778 systemd[1]: session-22.scope: Deactivated successfully.
Apr 12 18:51:01.023436 systemd-logind[1172]: Session 22 logged out. Waiting for processes to exit.
Apr 12 18:51:01.024071 systemd-logind[1172]: Removed session 22.
Apr 12 18:51:04.613913 kubelet[2051]: E0412 18:51:04.613837 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:06.023899 systemd[1]: Started sshd@22-10.0.0.68:22-10.0.0.1:53412.service.
Apr 12 18:51:06.057069 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 53412 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:51:06.058277 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:51:06.061368 systemd-logind[1172]: New session 23 of user core.
Apr 12 18:51:06.062431 systemd[1]: Started session-23.scope.
Apr 12 18:51:06.158608 sshd[3670]: pam_unix(sshd:session): session closed for user core
Apr 12 18:51:06.161806 systemd[1]: sshd@22-10.0.0.68:22-10.0.0.1:53412.service: Deactivated successfully.
Apr 12 18:51:06.162562 systemd[1]: session-23.scope: Deactivated successfully.
Apr 12 18:51:06.163027 systemd-logind[1172]: Session 23 logged out. Waiting for processes to exit.
Apr 12 18:51:06.163722 systemd-logind[1172]: Removed session 23.
Apr 12 18:51:11.164878 systemd[1]: Started sshd@23-10.0.0.68:22-10.0.0.1:44106.service.
Apr 12 18:51:11.204845 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 44106 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:51:11.206366 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:51:11.212987 systemd-logind[1172]: New session 24 of user core.
Apr 12 18:51:11.214177 systemd[1]: Started session-24.scope.
Apr 12 18:51:11.341441 sshd[3685]: pam_unix(sshd:session): session closed for user core
Apr 12 18:51:11.345786 systemd[1]: sshd@23-10.0.0.68:22-10.0.0.1:44106.service: Deactivated successfully.
Apr 12 18:51:11.346765 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 18:51:11.347426 systemd-logind[1172]: Session 24 logged out. Waiting for processes to exit.
Apr 12 18:51:11.349116 systemd[1]: Started sshd@24-10.0.0.68:22-10.0.0.1:44112.service.
Apr 12 18:51:11.350273 systemd-logind[1172]: Removed session 24.
Apr 12 18:51:11.385820 sshd[3698]: Accepted publickey for core from 10.0.0.1 port 44112 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:51:11.387497 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:51:11.391411 systemd-logind[1172]: New session 25 of user core.
Apr 12 18:51:11.392242 systemd[1]: Started session-25.scope.
Apr 12 18:51:13.196110 kubelet[2051]: I0412 18:51:13.196059 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kbs7q" podStartSLOduration=112.195934993 podStartE2EDuration="1m52.195934993s" podCreationTimestamp="2024-04-12 18:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:50:02.226516847 +0000 UTC m=+53.912362389" watchObservedRunningTime="2024-04-12 18:51:13.195934993 +0000 UTC m=+124.881780545"
Apr 12 18:51:13.214781 systemd[1]: run-containerd-runc-k8s.io-07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8-runc.ETtouQ.mount: Deactivated successfully.
Apr 12 18:51:13.237174 env[1180]: time="2024-04-12T18:51:13.237111381Z" level=info msg="StopContainer for \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\" with timeout 30 (s)"
Apr 12 18:51:13.237967 env[1180]: time="2024-04-12T18:51:13.237940303Z" level=info msg="Stop container \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\" with signal terminated"
Apr 12 18:51:13.243809 env[1180]: time="2024-04-12T18:51:13.243731289Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:51:13.248836 systemd[1]: cri-containerd-bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672.scope: Deactivated successfully.
Apr 12 18:51:13.250669 env[1180]: time="2024-04-12T18:51:13.250626226Z" level=info msg="StopContainer for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" with timeout 2 (s)"
Apr 12 18:51:13.250944 env[1180]: time="2024-04-12T18:51:13.250906865Z" level=info msg="Stop container \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" with signal terminated"
Apr 12 18:51:13.260059 systemd-networkd[1083]: lxc_health: Link DOWN
Apr 12 18:51:13.260068 systemd-networkd[1083]: lxc_health: Lost carrier
Apr 12 18:51:13.268732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672-rootfs.mount: Deactivated successfully.
Apr 12 18:51:13.290421 env[1180]: time="2024-04-12T18:51:13.290370043Z" level=info msg="shim disconnected" id=bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672
Apr 12 18:51:13.290725 env[1180]: time="2024-04-12T18:51:13.290681580Z" level=warning msg="cleaning up after shim disconnected" id=bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672 namespace=k8s.io
Apr 12 18:51:13.290725 env[1180]: time="2024-04-12T18:51:13.290701408Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:13.298301 env[1180]: time="2024-04-12T18:51:13.298228707Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3755 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:13.304306 systemd[1]: cri-containerd-07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8.scope: Deactivated successfully.
Apr 12 18:51:13.304760 systemd[1]: cri-containerd-07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8.scope: Consumed 11.191s CPU time.
Apr 12 18:51:13.306535 env[1180]: time="2024-04-12T18:51:13.306482415Z" level=info msg="StopContainer for \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\" returns successfully"
Apr 12 18:51:13.307444 env[1180]: time="2024-04-12T18:51:13.307370580Z" level=info msg="StopPodSandbox for \"5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c\""
Apr 12 18:51:13.307578 env[1180]: time="2024-04-12T18:51:13.307445831Z" level=info msg="Container to stop \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:51:13.309167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c-shm.mount: Deactivated successfully.
Apr 12 18:51:13.316312 systemd[1]: cri-containerd-5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c.scope: Deactivated successfully.
Apr 12 18:51:13.394105 env[1180]: time="2024-04-12T18:51:13.394038068Z" level=info msg="shim disconnected" id=5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c
Apr 12 18:51:13.394105 env[1180]: time="2024-04-12T18:51:13.394091108Z" level=warning msg="cleaning up after shim disconnected" id=5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c namespace=k8s.io
Apr 12 18:51:13.394105 env[1180]: time="2024-04-12T18:51:13.394100065Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:13.394480 env[1180]: time="2024-04-12T18:51:13.394422473Z" level=info msg="shim disconnected" id=07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8
Apr 12 18:51:13.394480 env[1180]: time="2024-04-12T18:51:13.394446167Z" level=warning msg="cleaning up after shim disconnected" id=07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8 namespace=k8s.io
Apr 12 18:51:13.394480 env[1180]: time="2024-04-12T18:51:13.394453711Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:13.401712 env[1180]: time="2024-04-12T18:51:13.401649906Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:13.402045 env[1180]: time="2024-04-12T18:51:13.402008913Z" level=info msg="TearDown network for sandbox \"5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c\" successfully"
Apr 12 18:51:13.402045 env[1180]: time="2024-04-12T18:51:13.402036015Z" level=info msg="StopPodSandbox for \"5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c\" returns successfully"
Apr 12 18:51:13.406333 env[1180]: time="2024-04-12T18:51:13.406299461Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3798 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:13.516554 kubelet[2051]: I0412 18:51:13.516359 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-cilium-config-path\") pod \"d0fb4859-8eba-4fcf-8555-87a9dbbba1f8\" (UID: \"d0fb4859-8eba-4fcf-8555-87a9dbbba1f8\") "
Apr 12 18:51:13.516554 kubelet[2051]: I0412 18:51:13.516434 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxrmt\" (UniqueName: \"kubernetes.io/projected/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-kube-api-access-zxrmt\") pod \"d0fb4859-8eba-4fcf-8555-87a9dbbba1f8\" (UID: \"d0fb4859-8eba-4fcf-8555-87a9dbbba1f8\") "
Apr 12 18:51:13.519738 env[1180]: time="2024-04-12T18:51:13.519677258Z" level=info msg="StopContainer for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" returns successfully"
Apr 12 18:51:13.520172 kubelet[2051]: I0412 18:51:13.519958 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0fb4859-8eba-4fcf-8555-87a9dbbba1f8" (UID: "d0fb4859-8eba-4fcf-8555-87a9dbbba1f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:51:13.520351 env[1180]: time="2024-04-12T18:51:13.520313818Z" level=info msg="StopPodSandbox for \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\""
Apr 12 18:51:13.520426 env[1180]: time="2024-04-12T18:51:13.520387627Z" level=info msg="Container to stop \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:51:13.520426 env[1180]: time="2024-04-12T18:51:13.520408416Z" level=info msg="Container to stop \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:51:13.520514 env[1180]: time="2024-04-12T18:51:13.520423505Z" level=info msg="Container to stop \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:51:13.520514 env[1180]: time="2024-04-12T18:51:13.520440327Z" level=info msg="Container to stop \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:51:13.520514 env[1180]: time="2024-04-12T18:51:13.520455586Z" level=info msg="Container to stop \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:51:13.521200 kubelet[2051]: I0412 18:51:13.521137 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-kube-api-access-zxrmt" (OuterVolumeSpecName: "kube-api-access-zxrmt") pod "d0fb4859-8eba-4fcf-8555-87a9dbbba1f8" (UID: "d0fb4859-8eba-4fcf-8555-87a9dbbba1f8"). InnerVolumeSpecName "kube-api-access-zxrmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:51:13.529093 systemd[1]: cri-containerd-638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28.scope: Deactivated successfully.
Apr 12 18:51:13.560372 env[1180]: time="2024-04-12T18:51:13.560279754Z" level=info msg="shim disconnected" id=638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28
Apr 12 18:51:13.560372 env[1180]: time="2024-04-12T18:51:13.560353193Z" level=warning msg="cleaning up after shim disconnected" id=638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28 namespace=k8s.io
Apr 12 18:51:13.560372 env[1180]: time="2024-04-12T18:51:13.560366197Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:13.567799 env[1180]: time="2024-04-12T18:51:13.567739226Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3840 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:13.568166 env[1180]: time="2024-04-12T18:51:13.568141073Z" level=info msg="TearDown network for sandbox \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" successfully"
Apr 12 18:51:13.568218 env[1180]: time="2024-04-12T18:51:13.568166501Z" level=info msg="StopPodSandbox for \"638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28\" returns successfully"
Apr 12 18:51:13.617474 kubelet[2051]: I0412 18:51:13.617418 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:51:13.617474 kubelet[2051]: I0412 18:51:13.617453 2051 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zxrmt\" (UniqueName: \"kubernetes.io/projected/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8-kube-api-access-zxrmt\") on node \"localhost\" DevicePath \"\""
Apr 12 18:51:13.717992 kubelet[2051]: I0412 18:51:13.717930 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hubble-tls\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.717992 kubelet[2051]: I0412 18:51:13.717996 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cni-path\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.717992 kubelet[2051]: I0412 18:51:13.718020 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-kernel\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718253 kubelet[2051]: I0412 18:51:13.718039 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-lib-modules\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718253 kubelet[2051]: I0412 18:51:13.718057 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqbz4\" (UniqueName: \"kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-kube-api-access-hqbz4\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718253 kubelet[2051]: I0412 18:51:13.718080 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-config-path\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718253 kubelet[2051]: I0412 18:51:13.718080 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cni-path" (OuterVolumeSpecName: "cni-path") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718253 kubelet[2051]: I0412 18:51:13.718102 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-clustermesh-secrets\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718253 kubelet[2051]: I0412 18:51:13.718197 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-xtables-lock\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718402 kubelet[2051]: I0412 18:51:13.718218 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-bpf-maps\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718402 kubelet[2051]: I0412 18:51:13.718235 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-etc-cni-netd\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718402 kubelet[2051]: I0412 18:51:13.718250 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-cgroup\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718402 kubelet[2051]: I0412 18:51:13.718273 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hostproc\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718402 kubelet[2051]: I0412 18:51:13.718291 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-net\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718402 kubelet[2051]: I0412 18:51:13.718309 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-run\") pod \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\" (UID: \"4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0\") "
Apr 12 18:51:13.718543 kubelet[2051]: I0412 18:51:13.718357 2051 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 12 18:51:13.718543 kubelet[2051]: I0412 18:51:13.718378 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718543 kubelet[2051]: I0412 18:51:13.718393 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718543 kubelet[2051]: I0412 18:51:13.718408 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718543 kubelet[2051]: I0412 18:51:13.718423 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718706 kubelet[2051]: I0412 18:51:13.718435 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718706 kubelet[2051]: I0412 18:51:13.718449 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hostproc" (OuterVolumeSpecName: "hostproc") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718706 kubelet[2051]: I0412 18:51:13.718461 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718706 kubelet[2051]: I0412 18:51:13.718485 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:51:13.718706 kubelet[2051]: I0412 18:51:13.718501 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:13.720610 kubelet[2051]: I0412 18:51:13.720559 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:51:13.721288 kubelet[2051]: I0412 18:51:13.721246 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-kube-api-access-hqbz4" (OuterVolumeSpecName: "kube-api-access-hqbz4") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "kube-api-access-hqbz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:51:13.721384 kubelet[2051]: I0412 18:51:13.721333 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:51:13.723208 kubelet[2051]: I0412 18:51:13.723093 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" (UID: "4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:51:13.772373 kubelet[2051]: E0412 18:51:13.772248 2051 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:51:13.819135 kubelet[2051]: I0412 18:51:13.819031 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819135 kubelet[2051]: I0412 18:51:13.819087 2051 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819135 kubelet[2051]: I0412 18:51:13.819108 2051 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819135 kubelet[2051]: I0412 18:51:13.819143 2051 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819160 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819175 2051 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hqbz4\" (UniqueName: \"kubernetes.io/projected/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-kube-api-access-hqbz4\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819188 
2051 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819200 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819211 2051 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819249 2051 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819261 2051 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819485 kubelet[2051]: I0412 18:51:13.819272 2051 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:13.819781 kubelet[2051]: I0412 18:51:13.819286 2051 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:14.211029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8-rootfs.mount: Deactivated 
successfully. Apr 12 18:51:14.211179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28-rootfs.mount: Deactivated successfully. Apr 12 18:51:14.211260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-638b01a4c6b6374062c704e7f7e657fa333357ea48d4f19fd0310cc3d155eb28-shm.mount: Deactivated successfully. Apr 12 18:51:14.211338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f5ebaefa7740eca02b3e1d9fbff26093fab0e4a154d594a421fe13ea7dd748c-rootfs.mount: Deactivated successfully. Apr 12 18:51:14.211432 systemd[1]: var-lib-kubelet-pods-d0fb4859\x2d8eba\x2d4fcf\x2d8555\x2d87a9dbbba1f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzxrmt.mount: Deactivated successfully. Apr 12 18:51:14.211532 systemd[1]: var-lib-kubelet-pods-4e4e70a9\x2d9ae8\x2d4c42\x2d8c0c\x2dd95a8d6d38f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhqbz4.mount: Deactivated successfully. Apr 12 18:51:14.211607 systemd[1]: var-lib-kubelet-pods-4e4e70a9\x2d9ae8\x2d4c42\x2d8c0c\x2dd95a8d6d38f0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:51:14.211676 systemd[1]: var-lib-kubelet-pods-4e4e70a9\x2d9ae8\x2d4c42\x2d8c0c\x2dd95a8d6d38f0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:51:14.339210 kubelet[2051]: I0412 18:51:14.339155 2051 scope.go:117] "RemoveContainer" containerID="07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8" Apr 12 18:51:14.341558 env[1180]: time="2024-04-12T18:51:14.341506648Z" level=info msg="RemoveContainer for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\"" Apr 12 18:51:14.344338 systemd[1]: Removed slice kubepods-burstable-pod4e4e70a9_9ae8_4c42_8c0c_d95a8d6d38f0.slice. Apr 12 18:51:14.344445 systemd[1]: kubepods-burstable-pod4e4e70a9_9ae8_4c42_8c0c_d95a8d6d38f0.slice: Consumed 11.357s CPU time. 
Apr 12 18:51:14.348653 systemd[1]: Removed slice kubepods-besteffort-podd0fb4859_8eba_4fcf_8555_87a9dbbba1f8.slice. Apr 12 18:51:14.350245 env[1180]: time="2024-04-12T18:51:14.350181389Z" level=info msg="RemoveContainer for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" returns successfully" Apr 12 18:51:14.350613 kubelet[2051]: I0412 18:51:14.350556 2051 scope.go:117] "RemoveContainer" containerID="e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65" Apr 12 18:51:14.352168 env[1180]: time="2024-04-12T18:51:14.352122417Z" level=info msg="RemoveContainer for \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\"" Apr 12 18:51:14.356945 env[1180]: time="2024-04-12T18:51:14.356809071Z" level=info msg="RemoveContainer for \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\" returns successfully" Apr 12 18:51:14.357348 kubelet[2051]: I0412 18:51:14.357321 2051 scope.go:117] "RemoveContainer" containerID="558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee" Apr 12 18:51:14.359078 env[1180]: time="2024-04-12T18:51:14.358980885Z" level=info msg="RemoveContainer for \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\"" Apr 12 18:51:14.363498 env[1180]: time="2024-04-12T18:51:14.363429420Z" level=info msg="RemoveContainer for \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\" returns successfully" Apr 12 18:51:14.363762 kubelet[2051]: I0412 18:51:14.363714 2051 scope.go:117] "RemoveContainer" containerID="b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8" Apr 12 18:51:14.366546 env[1180]: time="2024-04-12T18:51:14.366503384Z" level=info msg="RemoveContainer for \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\"" Apr 12 18:51:14.371236 env[1180]: time="2024-04-12T18:51:14.371181502Z" level=info msg="RemoveContainer for \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\" returns successfully" Apr 12 18:51:14.372835 
kubelet[2051]: I0412 18:51:14.371798 2051 scope.go:117] "RemoveContainer" containerID="b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba" Apr 12 18:51:14.373805 env[1180]: time="2024-04-12T18:51:14.373760313Z" level=info msg="RemoveContainer for \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\"" Apr 12 18:51:14.378316 env[1180]: time="2024-04-12T18:51:14.378268089Z" level=info msg="RemoveContainer for \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\" returns successfully" Apr 12 18:51:14.378542 kubelet[2051]: I0412 18:51:14.378504 2051 scope.go:117] "RemoveContainer" containerID="07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8" Apr 12 18:51:14.378965 env[1180]: time="2024-04-12T18:51:14.378827323Z" level=error msg="ContainerStatus for \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\": not found" Apr 12 18:51:14.379140 kubelet[2051]: E0412 18:51:14.379122 2051 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\": not found" containerID="07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8" Apr 12 18:51:14.379242 kubelet[2051]: I0412 18:51:14.379229 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8"} err="failed to get container status \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"07a3c02efb1ffe3ec9fb459350820faefac11bad9969f72e61bd2b0b960b87e8\": not found" Apr 12 18:51:14.379272 kubelet[2051]: I0412 18:51:14.379244 
2051 scope.go:117] "RemoveContainer" containerID="e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65" Apr 12 18:51:14.379506 env[1180]: time="2024-04-12T18:51:14.379444607Z" level=error msg="ContainerStatus for \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\": not found" Apr 12 18:51:14.379631 kubelet[2051]: E0412 18:51:14.379604 2051 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\": not found" containerID="e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65" Apr 12 18:51:14.379713 kubelet[2051]: I0412 18:51:14.379639 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65"} err="failed to get container status \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4b621baac2925ae91f96b6af4385a4c3b3236a2ea505bd721fadc218a28cd65\": not found" Apr 12 18:51:14.379713 kubelet[2051]: I0412 18:51:14.379650 2051 scope.go:117] "RemoveContainer" containerID="558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee" Apr 12 18:51:14.379877 env[1180]: time="2024-04-12T18:51:14.379774829Z" level=error msg="ContainerStatus for \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\": not found" Apr 12 18:51:14.379943 kubelet[2051]: E0412 18:51:14.379892 2051 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\": not found" containerID="558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee" Apr 12 18:51:14.379943 kubelet[2051]: I0412 18:51:14.379916 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee"} err="failed to get container status \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"558354beefe11342c28f26a8ecf7810e1b20c78a9c4ca9fcfec7d4a0951742ee\": not found" Apr 12 18:51:14.379943 kubelet[2051]: I0412 18:51:14.379925 2051 scope.go:117] "RemoveContainer" containerID="b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8" Apr 12 18:51:14.380131 env[1180]: time="2024-04-12T18:51:14.380068513Z" level=error msg="ContainerStatus for \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\": not found" Apr 12 18:51:14.380279 kubelet[2051]: E0412 18:51:14.380258 2051 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\": not found" containerID="b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8" Apr 12 18:51:14.380279 kubelet[2051]: I0412 18:51:14.380280 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8"} err="failed to get container status 
\"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8fcc365b17a78f16787b37e39d7a2e30929bac95b3cb516fa471a4e532c0ac8\": not found" Apr 12 18:51:14.380393 kubelet[2051]: I0412 18:51:14.380288 2051 scope.go:117] "RemoveContainer" containerID="b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba" Apr 12 18:51:14.380472 env[1180]: time="2024-04-12T18:51:14.380424073Z" level=error msg="ContainerStatus for \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\": not found" Apr 12 18:51:14.380563 kubelet[2051]: E0412 18:51:14.380548 2051 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\": not found" containerID="b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba" Apr 12 18:51:14.380608 kubelet[2051]: I0412 18:51:14.380569 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba"} err="failed to get container status \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6529f1f7e4a805fbfe42e4bf1716264848238b1cdf747915daf5a9ca30931ba\": not found" Apr 12 18:51:14.380608 kubelet[2051]: I0412 18:51:14.380577 2051 scope.go:117] "RemoveContainer" containerID="bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672" Apr 12 18:51:14.381528 env[1180]: time="2024-04-12T18:51:14.381496394Z" level=info msg="RemoveContainer for \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\"" Apr 12 
18:51:14.385234 env[1180]: time="2024-04-12T18:51:14.385183775Z" level=info msg="RemoveContainer for \"bab1e8283f8a0d84a04928549115e899f4be70cc8ee7c3f8d5275da46b45d672\" returns successfully" Apr 12 18:51:14.618926 kubelet[2051]: I0412 18:51:14.618872 2051 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" path="/var/lib/kubelet/pods/4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0/volumes" Apr 12 18:51:14.620262 kubelet[2051]: I0412 18:51:14.620214 2051 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d0fb4859-8eba-4fcf-8555-87a9dbbba1f8" path="/var/lib/kubelet/pods/d0fb4859-8eba-4fcf-8555-87a9dbbba1f8/volumes" Apr 12 18:51:15.039360 sshd[3698]: pam_unix(sshd:session): session closed for user core Apr 12 18:51:15.043237 systemd[1]: sshd@24-10.0.0.68:22-10.0.0.1:44112.service: Deactivated successfully. Apr 12 18:51:15.043982 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:51:15.044636 systemd-logind[1172]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:51:15.046205 systemd[1]: Started sshd@25-10.0.0.68:22-10.0.0.1:44128.service. Apr 12 18:51:15.047185 systemd-logind[1172]: Removed session 25. Apr 12 18:51:15.085213 sshd[3857]: Accepted publickey for core from 10.0.0.1 port 44128 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:51:15.086846 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:51:15.091964 systemd-logind[1172]: New session 26 of user core. Apr 12 18:51:15.093359 systemd[1]: Started session-26.scope. 
Apr 12 18:51:15.613516 kubelet[2051]: E0412 18:51:15.613124 2051 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-57n7x" podUID="0330692a-ed83-4bca-8f95-8f7034e452b0" Apr 12 18:51:15.942099 sshd[3857]: pam_unix(sshd:session): session closed for user core Apr 12 18:51:15.945238 systemd[1]: Started sshd@26-10.0.0.68:22-10.0.0.1:44140.service. Apr 12 18:51:15.946579 systemd[1]: sshd@25-10.0.0.68:22-10.0.0.1:44128.service: Deactivated successfully. Apr 12 18:51:15.949226 systemd[1]: session-26.scope: Deactivated successfully. Apr 12 18:51:15.950836 systemd-logind[1172]: Session 26 logged out. Waiting for processes to exit. Apr 12 18:51:15.952322 systemd-logind[1172]: Removed session 26. Apr 12 18:51:15.970750 kubelet[2051]: I0412 18:51:15.970675 2051 topology_manager.go:215] "Topology Admit Handler" podUID="e3056373-91bd-43e2-80fd-9bb19e636075" podNamespace="kube-system" podName="cilium-5mncq" Apr 12 18:51:15.970750 kubelet[2051]: E0412 18:51:15.970752 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0fb4859-8eba-4fcf-8555-87a9dbbba1f8" containerName="cilium-operator" Apr 12 18:51:15.971053 kubelet[2051]: E0412 18:51:15.970798 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" containerName="cilium-agent" Apr 12 18:51:15.971053 kubelet[2051]: E0412 18:51:15.970810 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" containerName="mount-cgroup" Apr 12 18:51:15.971053 kubelet[2051]: E0412 18:51:15.970818 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" containerName="apply-sysctl-overwrites" Apr 12 18:51:15.971053 kubelet[2051]: E0412 18:51:15.970826 2051 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" containerName="mount-bpf-fs" Apr 12 18:51:15.971053 kubelet[2051]: E0412 18:51:15.970834 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" containerName="clean-cilium-state" Apr 12 18:51:15.971053 kubelet[2051]: I0412 18:51:15.970877 2051 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4e70a9-9ae8-4c42-8c0c-d95a8d6d38f0" containerName="cilium-agent" Apr 12 18:51:15.971053 kubelet[2051]: I0412 18:51:15.970891 2051 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fb4859-8eba-4fcf-8555-87a9dbbba1f8" containerName="cilium-operator" Apr 12 18:51:15.979983 systemd[1]: Created slice kubepods-burstable-pode3056373_91bd_43e2_80fd_9bb19e636075.slice. Apr 12 18:51:15.984001 kubelet[2051]: W0412 18:51:15.983962 2051 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:15.984001 kubelet[2051]: E0412 18:51:15.984000 2051 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:15.984001 kubelet[2051]: W0412 18:51:15.983962 2051 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 
18:51:15.984001 kubelet[2051]: E0412 18:51:15.984017 2051 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:15.984318 kubelet[2051]: W0412 18:51:15.984290 2051 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:15.984318 kubelet[2051]: E0412 18:51:15.984310 2051 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:15.990505 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 44140 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:51:15.994020 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:51:15.999686 systemd[1]: Started session-27.scope. Apr 12 18:51:16.000142 systemd-logind[1172]: New session 27 of user core. 
Apr 12 18:51:16.033695 kubelet[2051]: I0412 18:51:16.033630 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-lib-modules\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.033695 kubelet[2051]: I0412 18:51:16.033678 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-config-path\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.033695 kubelet[2051]: I0412 18:51:16.033708 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjzj\" (UniqueName: \"kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-kube-api-access-thjzj\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034109 kubelet[2051]: I0412 18:51:16.033777 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-bpf-maps\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034109 kubelet[2051]: I0412 18:51:16.033840 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-cgroup\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034109 kubelet[2051]: I0412 18:51:16.033895 2051 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-run\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034109 kubelet[2051]: I0412 18:51:16.033926 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-clustermesh-secrets\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034109 kubelet[2051]: I0412 18:51:16.033990 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-ipsec-secrets\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034109 kubelet[2051]: I0412 18:51:16.034035 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-etc-cni-netd\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034320 kubelet[2051]: I0412 18:51:16.034071 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-xtables-lock\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034320 kubelet[2051]: I0412 18:51:16.034169 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-hostproc\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034320 kubelet[2051]: I0412 18:51:16.034194 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cni-path\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034320 kubelet[2051]: I0412 18:51:16.034218 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-net\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034320 kubelet[2051]: I0412 18:51:16.034253 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-kernel\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.034320 kubelet[2051]: I0412 18:51:16.034271 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-hubble-tls\") pod \"cilium-5mncq\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " pod="kube-system/cilium-5mncq" Apr 12 18:51:16.133580 sshd[3867]: pam_unix(sshd:session): session closed for user core Apr 12 18:51:16.136289 systemd[1]: Started sshd@27-10.0.0.68:22-10.0.0.1:44146.service. 
Apr 12 18:51:16.140874 kubelet[2051]: E0412 18:51:16.140560 2051 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls kube-api-access-thjzj], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5mncq" podUID="e3056373-91bd-43e2-80fd-9bb19e636075" Apr 12 18:51:16.142390 systemd[1]: sshd@26-10.0.0.68:22-10.0.0.1:44140.service: Deactivated successfully. Apr 12 18:51:16.143487 systemd[1]: session-27.scope: Deactivated successfully. Apr 12 18:51:16.148463 systemd-logind[1172]: Session 27 logged out. Waiting for processes to exit. Apr 12 18:51:16.149947 systemd-logind[1172]: Removed session 27. Apr 12 18:51:16.177329 sshd[3880]: Accepted publickey for core from 10.0.0.1 port 44146 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:51:16.178906 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:51:16.183399 systemd-logind[1172]: New session 28 of user core. Apr 12 18:51:16.184869 systemd[1]: Started session-28.scope. 
Apr 12 18:51:16.437562 kubelet[2051]: I0412 18:51:16.437496 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thjzj\" (UniqueName: \"kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-kube-api-access-thjzj\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437562 kubelet[2051]: I0412 18:51:16.437550 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-kernel\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437562 kubelet[2051]: I0412 18:51:16.437569 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cni-path\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437796 kubelet[2051]: I0412 18:51:16.437591 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-net\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437796 kubelet[2051]: I0412 18:51:16.437615 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-config-path\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437796 kubelet[2051]: I0412 18:51:16.437632 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-etc-cni-netd\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437796 kubelet[2051]: I0412 18:51:16.437649 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-xtables-lock\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437796 kubelet[2051]: I0412 18:51:16.437667 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-run\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.437796 kubelet[2051]: I0412 18:51:16.437683 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-lib-modules\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.438012 kubelet[2051]: I0412 18:51:16.437674 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438012 kubelet[2051]: I0412 18:51:16.437698 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-bpf-maps\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.438012 kubelet[2051]: I0412 18:51:16.437717 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-cgroup\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.438012 kubelet[2051]: I0412 18:51:16.437733 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-hostproc\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.438012 kubelet[2051]: I0412 18:51:16.437728 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438142 kubelet[2051]: I0412 18:51:16.437754 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438142 kubelet[2051]: I0412 18:51:16.437781 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438142 kubelet[2051]: I0412 18:51:16.437801 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438142 kubelet[2051]: I0412 18:51:16.437818 2051 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.438142 kubelet[2051]: I0412 18:51:16.437830 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438142 kubelet[2051]: I0412 18:51:16.437839 2051 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.438274 kubelet[2051]: I0412 18:51:16.437845 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438274 kubelet[2051]: I0412 18:51:16.437884 2051 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.438274 kubelet[2051]: I0412 18:51:16.437816 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438274 kubelet[2051]: I0412 18:51:16.437918 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.438274 kubelet[2051]: I0412 18:51:16.437960 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:51:16.439556 kubelet[2051]: I0412 18:51:16.439526 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:51:16.440936 kubelet[2051]: I0412 18:51:16.440891 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-kube-api-access-thjzj" (OuterVolumeSpecName: "kube-api-access-thjzj") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "kube-api-access-thjzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:51:16.441981 systemd[1]: var-lib-kubelet-pods-e3056373\x2d91bd\x2d43e2\x2d80fd\x2d9bb19e636075-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthjzj.mount: Deactivated successfully. 
Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538559 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538605 2051 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538617 2051 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-thjzj\" (UniqueName: \"kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-kube-api-access-thjzj\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538626 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538637 2051 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538645 2051 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538654 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.538637 kubelet[2051]: I0412 18:51:16.538664 2051 reconciler_common.go:300] "Volume 
detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.539216 kubelet[2051]: I0412 18:51:16.538672 2051 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3056373-91bd-43e2-80fd-9bb19e636075-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:16.619557 systemd[1]: Removed slice kubepods-burstable-pode3056373_91bd_43e2_80fd_9bb19e636075.slice. Apr 12 18:51:16.941160 kubelet[2051]: I0412 18:51:16.941105 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-ipsec-secrets\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:16.944477 kubelet[2051]: I0412 18:51:16.944414 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:51:16.945324 systemd[1]: var-lib-kubelet-pods-e3056373\x2d91bd\x2d43e2\x2d80fd\x2d9bb19e636075-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Apr 12 18:51:17.041454 kubelet[2051]: I0412 18:51:17.041382 2051 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:17.243201 kubelet[2051]: I0412 18:51:17.242899 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-clustermesh-secrets\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:17.243201 kubelet[2051]: I0412 18:51:17.242980 2051 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-hubble-tls\") pod \"e3056373-91bd-43e2-80fd-9bb19e636075\" (UID: \"e3056373-91bd-43e2-80fd-9bb19e636075\") " Apr 12 18:51:17.246165 kubelet[2051]: I0412 18:51:17.246102 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:51:17.246995 kubelet[2051]: I0412 18:51:17.246957 2051 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3056373-91bd-43e2-80fd-9bb19e636075" (UID: "e3056373-91bd-43e2-80fd-9bb19e636075"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:51:17.247553 systemd[1]: var-lib-kubelet-pods-e3056373\x2d91bd\x2d43e2\x2d80fd\x2d9bb19e636075-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:51:17.247700 systemd[1]: var-lib-kubelet-pods-e3056373\x2d91bd\x2d43e2\x2d80fd\x2d9bb19e636075-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:51:17.343636 kubelet[2051]: I0412 18:51:17.343559 2051 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3056373-91bd-43e2-80fd-9bb19e636075-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:17.343636 kubelet[2051]: I0412 18:51:17.343618 2051 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3056373-91bd-43e2-80fd-9bb19e636075-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:51:17.399314 kubelet[2051]: I0412 18:51:17.399248 2051 topology_manager.go:215] "Topology Admit Handler" podUID="75332ef4-ab32-4a89-93e1-e36862dcae12" podNamespace="kube-system" podName="cilium-7mr5c" Apr 12 18:51:17.401656 kubelet[2051]: W0412 18:51:17.401614 2051 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:17.401937 kubelet[2051]: E0412 18:51:17.401917 2051 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:17.402362 kubelet[2051]: W0412 
18:51:17.402343 2051 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:17.402476 kubelet[2051]: E0412 18:51:17.402457 2051 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:17.403315 kubelet[2051]: W0412 18:51:17.403294 2051 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:17.403434 kubelet[2051]: E0412 18:51:17.403415 2051 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 12 18:51:17.406601 systemd[1]: Created slice kubepods-burstable-pod75332ef4_ab32_4a89_93e1_e36862dcae12.slice. 
Apr 12 18:51:17.444259 kubelet[2051]: I0412 18:51:17.444205 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-hostproc\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444493 kubelet[2051]: I0412 18:51:17.444467 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-host-proc-sys-kernel\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444493 kubelet[2051]: I0412 18:51:17.444499 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrwl\" (UniqueName: \"kubernetes.io/projected/75332ef4-ab32-4a89-93e1-e36862dcae12-kube-api-access-bqrwl\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444646 kubelet[2051]: I0412 18:51:17.444516 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-bpf-maps\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444646 kubelet[2051]: I0412 18:51:17.444532 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-xtables-lock\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444646 kubelet[2051]: I0412 18:51:17.444548 2051 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75332ef4-ab32-4a89-93e1-e36862dcae12-clustermesh-secrets\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444646 kubelet[2051]: I0412 18:51:17.444564 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-cni-path\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444646 kubelet[2051]: I0412 18:51:17.444579 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-lib-modules\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444646 kubelet[2051]: I0412 18:51:17.444595 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-cilium-cgroup\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444826 kubelet[2051]: I0412 18:51:17.444627 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-cilium-run\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444826 kubelet[2051]: I0412 18:51:17.444643 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/75332ef4-ab32-4a89-93e1-e36862dcae12-cilium-config-path\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444826 kubelet[2051]: I0412 18:51:17.444660 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75332ef4-ab32-4a89-93e1-e36862dcae12-cilium-ipsec-secrets\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444826 kubelet[2051]: I0412 18:51:17.444678 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75332ef4-ab32-4a89-93e1-e36862dcae12-hubble-tls\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444826 kubelet[2051]: I0412 18:51:17.444694 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-host-proc-sys-net\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.444826 kubelet[2051]: I0412 18:51:17.444709 2051 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75332ef4-ab32-4a89-93e1-e36862dcae12-etc-cni-netd\") pod \"cilium-7mr5c\" (UID: \"75332ef4-ab32-4a89-93e1-e36862dcae12\") " pod="kube-system/cilium-7mr5c" Apr 12 18:51:17.613329 kubelet[2051]: E0412 18:51:17.613240 2051 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="kube-system/coredns-76f75df574-57n7x" podUID="0330692a-ed83-4bca-8f95-8f7034e452b0"
Apr 12 18:51:18.548140 kubelet[2051]: E0412 18:51:18.548073 2051 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Apr 12 18:51:18.548592 kubelet[2051]: E0412 18:51:18.548201 2051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75332ef4-ab32-4a89-93e1-e36862dcae12-cilium-ipsec-secrets podName:75332ef4-ab32-4a89-93e1-e36862dcae12 nodeName:}" failed. No retries permitted until 2024-04-12 18:51:19.048175132 +0000 UTC m=+130.734020694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/75332ef4-ab32-4a89-93e1-e36862dcae12-cilium-ipsec-secrets") pod "cilium-7mr5c" (UID: "75332ef4-ab32-4a89-93e1-e36862dcae12") : failed to sync secret cache: timed out waiting for the condition
Apr 12 18:51:18.616056 kubelet[2051]: I0412 18:51:18.615989 2051 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e3056373-91bd-43e2-80fd-9bb19e636075" path="/var/lib/kubelet/pods/e3056373-91bd-43e2-80fd-9bb19e636075/volumes"
Apr 12 18:51:18.773458 kubelet[2051]: E0412 18:51:18.773407 2051 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:51:19.210692 kubelet[2051]: E0412 18:51:19.210616 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:19.211338 env[1180]: time="2024-04-12T18:51:19.211276561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mr5c,Uid:75332ef4-ab32-4a89-93e1-e36862dcae12,Namespace:kube-system,Attempt:0,}"
Apr 12 18:51:19.481603 env[1180]: time="2024-04-12T18:51:19.481379579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:51:19.481603 env[1180]: time="2024-04-12T18:51:19.481432179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:51:19.481603 env[1180]: time="2024-04-12T18:51:19.481444192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:51:19.482225 env[1180]: time="2024-04-12T18:51:19.482103313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a pid=3914 runtime=io.containerd.runc.v2
Apr 12 18:51:19.501631 systemd[1]: Started cri-containerd-f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a.scope.
Apr 12 18:51:19.525988 env[1180]: time="2024-04-12T18:51:19.525928150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mr5c,Uid:75332ef4-ab32-4a89-93e1-e36862dcae12,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\""
Apr 12 18:51:19.526717 kubelet[2051]: E0412 18:51:19.526685 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:19.529403 env[1180]: time="2024-04-12T18:51:19.529314982Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:51:19.613568 kubelet[2051]: E0412 18:51:19.613494 2051 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-57n7x" podUID="0330692a-ed83-4bca-8f95-8f7034e452b0"
Apr 12 18:51:19.836318 env[1180]: time="2024-04-12T18:51:19.836216606Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892\""
Apr 12 18:51:19.837594 env[1180]: time="2024-04-12T18:51:19.836971027Z" level=info msg="StartContainer for \"a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892\""
Apr 12 18:51:19.852835 systemd[1]: Started cri-containerd-a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892.scope.
Apr 12 18:51:19.878938 env[1180]: time="2024-04-12T18:51:19.878823197Z" level=info msg="StartContainer for \"a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892\" returns successfully"
Apr 12 18:51:19.887245 systemd[1]: cri-containerd-a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892.scope: Deactivated successfully.
Apr 12 18:51:19.914250 env[1180]: time="2024-04-12T18:51:19.914190114Z" level=info msg="shim disconnected" id=a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892
Apr 12 18:51:19.914250 env[1180]: time="2024-04-12T18:51:19.914246481Z" level=warning msg="cleaning up after shim disconnected" id=a8d3b1eb776ccca885123ef0a7be555271083a305f80829bc0fcd24c8b8fc892 namespace=k8s.io
Apr 12 18:51:19.914250 env[1180]: time="2024-04-12T18:51:19.914255037Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:19.920316 env[1180]: time="2024-04-12T18:51:19.920266275Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4002 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:20.359800 kubelet[2051]: E0412 18:51:20.359770 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:20.361280 env[1180]: time="2024-04-12T18:51:20.361244552Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:51:20.378756 env[1180]: time="2024-04-12T18:51:20.378686871Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52\""
Apr 12 18:51:20.379897 env[1180]: time="2024-04-12T18:51:20.379376440Z" level=info msg="StartContainer for \"8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52\""
Apr 12 18:51:20.396252 systemd[1]: Started cri-containerd-8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52.scope.
Apr 12 18:51:20.423123 env[1180]: time="2024-04-12T18:51:20.423044157Z" level=info msg="StartContainer for \"8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52\" returns successfully"
Apr 12 18:51:20.427777 systemd[1]: cri-containerd-8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52.scope: Deactivated successfully.
Apr 12 18:51:20.447976 env[1180]: time="2024-04-12T18:51:20.447916015Z" level=info msg="shim disconnected" id=8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52
Apr 12 18:51:20.447976 env[1180]: time="2024-04-12T18:51:20.447970308Z" level=warning msg="cleaning up after shim disconnected" id=8c1f95f42d9a11817fab23c3b368cb1e5a9d6b4a475facf3470695467bb42b52 namespace=k8s.io
Apr 12 18:51:20.447976 env[1180]: time="2024-04-12T18:51:20.447984254Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:20.454971 env[1180]: time="2024-04-12T18:51:20.454915114Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:21.363795 kubelet[2051]: E0412 18:51:21.363763 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:21.366311 env[1180]: time="2024-04-12T18:51:21.366262392Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:51:21.386227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113448614.mount: Deactivated successfully.
Apr 12 18:51:21.393233 env[1180]: time="2024-04-12T18:51:21.393164123Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed\""
Apr 12 18:51:21.395239 env[1180]: time="2024-04-12T18:51:21.393848362Z" level=info msg="StartContainer for \"3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed\""
Apr 12 18:51:21.411956 systemd[1]: Started cri-containerd-3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed.scope.
Apr 12 18:51:21.423126 kubelet[2051]: I0412 18:51:21.423089 2051 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T18:51:21Z","lastTransitionTime":"2024-04-12T18:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 12 18:51:21.448751 env[1180]: time="2024-04-12T18:51:21.447259896Z" level=info msg="StartContainer for \"3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed\" returns successfully"
Apr 12 18:51:21.449477 systemd[1]: cri-containerd-3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed.scope: Deactivated successfully.
Apr 12 18:51:21.475368 env[1180]: time="2024-04-12T18:51:21.475320440Z" level=info msg="shim disconnected" id=3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed
Apr 12 18:51:21.475368 env[1180]: time="2024-04-12T18:51:21.475371316Z" level=warning msg="cleaning up after shim disconnected" id=3e35aa84734c7410723f0e76484084bb063a1a84ea696c0ee3da048154c0b6ed namespace=k8s.io
Apr 12 18:51:21.475609 env[1180]: time="2024-04-12T18:51:21.475380133Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:21.482692 env[1180]: time="2024-04-12T18:51:21.482629603Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4122 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:21.613220 kubelet[2051]: E0412 18:51:21.613168 2051 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-57n7x" podUID="0330692a-ed83-4bca-8f95-8f7034e452b0"
Apr 12 18:51:22.366509 kubelet[2051]: E0412 18:51:22.366480 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:22.368195 env[1180]: time="2024-04-12T18:51:22.368162658Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:51:22.380354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411482593.mount: Deactivated successfully.
Apr 12 18:51:22.383038 env[1180]: time="2024-04-12T18:51:22.382959639Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa\""
Apr 12 18:51:22.383463 env[1180]: time="2024-04-12T18:51:22.383436458Z" level=info msg="StartContainer for \"13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa\""
Apr 12 18:51:22.403432 systemd[1]: Started cri-containerd-13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa.scope.
Apr 12 18:51:22.426883 systemd[1]: cri-containerd-13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa.scope: Deactivated successfully.
Apr 12 18:51:22.428394 env[1180]: time="2024-04-12T18:51:22.428356919Z" level=info msg="StartContainer for \"13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa\" returns successfully"
Apr 12 18:51:22.449320 env[1180]: time="2024-04-12T18:51:22.449262740Z" level=info msg="shim disconnected" id=13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa
Apr 12 18:51:22.449320 env[1180]: time="2024-04-12T18:51:22.449310990Z" level=warning msg="cleaning up after shim disconnected" id=13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa namespace=k8s.io
Apr 12 18:51:22.449320 env[1180]: time="2024-04-12T18:51:22.449319636Z" level=info msg="cleaning up dead shim"
Apr 12 18:51:22.455482 env[1180]: time="2024-04-12T18:51:22.455430912Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:51:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\n"
Apr 12 18:51:22.476227 systemd[1]: run-containerd-runc-k8s.io-13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa-runc.uZvWdG.mount: Deactivated successfully.
Apr 12 18:51:22.476338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13b006bf7c47e6a3a1001113faca353f710553bf8df0b3c1822f74b90f9106aa-rootfs.mount: Deactivated successfully.
Apr 12 18:51:23.371285 kubelet[2051]: E0412 18:51:23.371245 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:23.374473 env[1180]: time="2024-04-12T18:51:23.374022481Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:51:23.394994 env[1180]: time="2024-04-12T18:51:23.394924674Z" level=info msg="CreateContainer within sandbox \"f6a68c652f77766319b8912323502f7a92dc63044615b1690b3c9f1d4de1ea8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84b1dc70983f45eecb5935bfe665d0e953f8b1f38e5e05b9183dfa33db817407\""
Apr 12 18:51:23.395647 env[1180]: time="2024-04-12T18:51:23.395607130Z" level=info msg="StartContainer for \"84b1dc70983f45eecb5935bfe665d0e953f8b1f38e5e05b9183dfa33db817407\""
Apr 12 18:51:23.413202 systemd[1]: Started cri-containerd-84b1dc70983f45eecb5935bfe665d0e953f8b1f38e5e05b9183dfa33db817407.scope.
Apr 12 18:51:23.455145 env[1180]: time="2024-04-12T18:51:23.455074095Z" level=info msg="StartContainer for \"84b1dc70983f45eecb5935bfe665d0e953f8b1f38e5e05b9183dfa33db817407\" returns successfully"
Apr 12 18:51:23.613186 kubelet[2051]: E0412 18:51:23.613145 2051 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-57n7x" podUID="0330692a-ed83-4bca-8f95-8f7034e452b0"
Apr 12 18:51:23.758893 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 12 18:51:24.376680 kubelet[2051]: E0412 18:51:24.376643 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:25.377934 kubelet[2051]: E0412 18:51:25.377886 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:25.614116 kubelet[2051]: E0412 18:51:25.614077 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:26.502945 systemd-networkd[1083]: lxc_health: Link UP
Apr 12 18:51:26.519376 systemd-networkd[1083]: lxc_health: Gained carrier
Apr 12 18:51:26.519942 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:51:27.213355 kubelet[2051]: E0412 18:51:27.213310 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:27.233623 kubelet[2051]: I0412 18:51:27.233356 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7mr5c" podStartSLOduration=10.233304029 podStartE2EDuration="10.233304029s" podCreationTimestamp="2024-04-12 18:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:51:24.392459107 +0000 UTC m=+136.078304639" watchObservedRunningTime="2024-04-12 18:51:27.233304029 +0000 UTC m=+138.919149571"
Apr 12 18:51:27.383075 kubelet[2051]: E0412 18:51:27.383035 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:28.113135 systemd-networkd[1083]: lxc_health: Gained IPv6LL
Apr 12 18:51:28.384695 kubelet[2051]: E0412 18:51:28.384496 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:51:28.760089 systemd[1]: run-containerd-runc-k8s.io-84b1dc70983f45eecb5935bfe665d0e953f8b1f38e5e05b9183dfa33db817407-runc.qt1RTV.mount: Deactivated successfully.
Apr 12 18:51:30.883678 systemd[1]: run-containerd-runc-k8s.io-84b1dc70983f45eecb5935bfe665d0e953f8b1f38e5e05b9183dfa33db817407-runc.BSI2QW.mount: Deactivated successfully.
Apr 12 18:51:33.051897 sshd[3880]: pam_unix(sshd:session): session closed for user core
Apr 12 18:51:33.054544 systemd[1]: sshd@27-10.0.0.68:22-10.0.0.1:44146.service: Deactivated successfully.
Apr 12 18:51:33.055247 systemd[1]: session-28.scope: Deactivated successfully.
Apr 12 18:51:33.055933 systemd-logind[1172]: Session 28 logged out. Waiting for processes to exit.
Apr 12 18:51:33.056659 systemd-logind[1172]: Removed session 28.
Apr 12 18:51:34.613843 kubelet[2051]: E0412 18:51:34.613793 2051 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"