Apr 12 18:57:11.797567 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024
Apr 12 18:57:11.797585 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:57:11.797593 kernel: BIOS-provided physical RAM map:
Apr 12 18:57:11.797599 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 12 18:57:11.797604 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 12 18:57:11.797610 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 12 18:57:11.797616 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Apr 12 18:57:11.797622 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Apr 12 18:57:11.797628 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 12 18:57:11.797634 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 12 18:57:11.797639 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 12 18:57:11.797645 kernel: NX (Execute Disable) protection: active
Apr 12 18:57:11.797650 kernel: SMBIOS 2.8 present.
Apr 12 18:57:11.797656 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 12 18:57:11.797664 kernel: Hypervisor detected: KVM
Apr 12 18:57:11.797670 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 12 18:57:11.797675 kernel: kvm-clock: cpu 0, msr 41191001, primary cpu clock
Apr 12 18:57:11.797681 kernel: kvm-clock: using sched offset of 2353987227 cycles
Apr 12 18:57:11.797688 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 12 18:57:11.797694 kernel: tsc: Detected 2794.748 MHz processor
Apr 12 18:57:11.797700 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 12 18:57:11.797706 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 12 18:57:11.797712 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Apr 12 18:57:11.797720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 12 18:57:11.797726 kernel: Using GB pages for direct mapping
Apr 12 18:57:11.797732 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:57:11.797738 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Apr 12 18:57:11.797744 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:57:11.797750 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:57:11.797756 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:57:11.797762 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 12 18:57:11.797767 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:57:11.797775 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:57:11.797781 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 12 18:57:11.797787 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Apr 12 18:57:11.797793 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Apr 12 18:57:11.797799 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 12 18:57:11.797804 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Apr 12 18:57:11.797810 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Apr 12 18:57:11.797816 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Apr 12 18:57:11.797826 kernel: No NUMA configuration found
Apr 12 18:57:11.797832 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Apr 12 18:57:11.797839 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Apr 12 18:57:11.797845 kernel: Zone ranges:
Apr 12 18:57:11.797852 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 12 18:57:11.797858 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Apr 12 18:57:11.797866 kernel: Normal empty
Apr 12 18:57:11.797872 kernel: Movable zone start for each node
Apr 12 18:57:11.797886 kernel: Early memory node ranges
Apr 12 18:57:11.797892 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 12 18:57:11.797899 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Apr 12 18:57:11.797905 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Apr 12 18:57:11.797911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 18:57:11.797918 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 12 18:57:11.797924 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Apr 12 18:57:11.797932 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 12 18:57:11.797938 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 12 18:57:11.797944 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 12 18:57:11.797951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 12 18:57:11.797957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 12 18:57:11.797964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 12 18:57:11.797970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 12 18:57:11.797976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 12 18:57:11.797983 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 12 18:57:11.797990 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 12 18:57:11.797996 kernel: TSC deadline timer available
Apr 12 18:57:11.798003 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 12 18:57:11.798009 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 12 18:57:11.798016 kernel: kvm-guest: setup PV sched yield
Apr 12 18:57:11.798023 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Apr 12 18:57:11.798029 kernel: Booting paravirtualized kernel on KVM
Apr 12 18:57:11.798036 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 12 18:57:11.798042 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Apr 12 18:57:11.798049 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Apr 12 18:57:11.798056 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Apr 12 18:57:11.798062 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 12 18:57:11.798077 kernel: kvm-guest: setup async PF for cpu 0
Apr 12 18:57:11.798084 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Apr 12 18:57:11.798090 kernel: kvm-guest: PV spinlocks enabled
Apr 12 18:57:11.798097 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 12 18:57:11.798103 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Apr 12 18:57:11.798109 kernel: Policy zone: DMA32
Apr 12 18:57:11.798117 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:57:11.798125 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:57:11.798131 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:57:11.798138 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:57:11.798145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:57:11.798151 kernel: Memory: 2436704K/2571756K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 134792K reserved, 0K cma-reserved)
Apr 12 18:57:11.798158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 12 18:57:11.798164 kernel: ftrace: allocating 34508 entries in 135 pages
Apr 12 18:57:11.798171 kernel: ftrace: allocated 135 pages with 4 groups
Apr 12 18:57:11.798178 kernel: rcu: Hierarchical RCU implementation.
Apr 12 18:57:11.798200 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:57:11.798210 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 12 18:57:11.798217 kernel: Rude variant of Tasks RCU enabled.
Apr 12 18:57:11.798223 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:57:11.798230 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:57:11.798237 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 12 18:57:11.798245 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 12 18:57:11.798252 kernel: random: crng init done
Apr 12 18:57:11.798262 kernel: Console: colour VGA+ 80x25
Apr 12 18:57:11.798269 kernel: printk: console [ttyS0] enabled
Apr 12 18:57:11.798275 kernel: ACPI: Core revision 20210730
Apr 12 18:57:11.798282 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 12 18:57:11.798288 kernel: APIC: Switch to symmetric I/O mode setup
Apr 12 18:57:11.798295 kernel: x2apic enabled
Apr 12 18:57:11.798301 kernel: Switched APIC routing to physical x2apic.
Apr 12 18:57:11.798307 kernel: kvm-guest: setup PV IPIs
Apr 12 18:57:11.798313 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 12 18:57:11.798321 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 12 18:57:11.798327 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 12 18:57:11.798334 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 12 18:57:11.798340 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 12 18:57:11.798346 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 12 18:57:11.798353 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 12 18:57:11.798359 kernel: Spectre V2 : Mitigation: Retpolines
Apr 12 18:57:11.798370 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 12 18:57:11.798376 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 12 18:57:11.798389 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 12 18:57:11.798396 kernel: RETBleed: Mitigation: untrained return thunk
Apr 12 18:57:11.798402 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 12 18:57:11.798410 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Apr 12 18:57:11.798417 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 12 18:57:11.798424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 12 18:57:11.798431 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 12 18:57:11.798437 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 12 18:57:11.798444 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 12 18:57:11.798452 kernel: Freeing SMP alternatives memory: 32K
Apr 12 18:57:11.798459 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:57:11.798466 kernel: LSM: Security Framework initializing
Apr 12 18:57:11.798472 kernel: SELinux: Initializing.
Apr 12 18:57:11.798479 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:57:11.798486 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:57:11.798493 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 12 18:57:11.798501 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 12 18:57:11.798507 kernel: ... version: 0
Apr 12 18:57:11.798514 kernel: ... bit width: 48
Apr 12 18:57:11.798521 kernel: ... generic registers: 6
Apr 12 18:57:11.798527 kernel: ... value mask: 0000ffffffffffff
Apr 12 18:57:11.798534 kernel: ... max period: 00007fffffffffff
Apr 12 18:57:11.798540 kernel: ... fixed-purpose events: 0
Apr 12 18:57:11.798547 kernel: ... event mask: 000000000000003f
Apr 12 18:57:11.798554 kernel: signal: max sigframe size: 1776
Apr 12 18:57:11.798560 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:57:11.798569 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:57:11.798575 kernel: x86: Booting SMP configuration:
Apr 12 18:57:11.798582 kernel: .... node #0, CPUs: #1
Apr 12 18:57:11.798589 kernel: kvm-clock: cpu 1, msr 41191041, secondary cpu clock
Apr 12 18:57:11.798595 kernel: kvm-guest: setup async PF for cpu 1
Apr 12 18:57:11.798602 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Apr 12 18:57:11.798609 kernel: #2
Apr 12 18:57:11.798615 kernel: kvm-clock: cpu 2, msr 41191081, secondary cpu clock
Apr 12 18:57:11.798622 kernel: kvm-guest: setup async PF for cpu 2
Apr 12 18:57:11.798630 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Apr 12 18:57:11.798637 kernel: #3
Apr 12 18:57:11.798643 kernel: kvm-clock: cpu 3, msr 411910c1, secondary cpu clock
Apr 12 18:57:11.798650 kernel: kvm-guest: setup async PF for cpu 3
Apr 12 18:57:11.798656 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Apr 12 18:57:11.798663 kernel: smp: Brought up 1 node, 4 CPUs
Apr 12 18:57:11.798670 kernel: smpboot: Max logical packages: 1
Apr 12 18:57:11.798676 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 12 18:57:11.798683 kernel: devtmpfs: initialized
Apr 12 18:57:11.798691 kernel: x86/mm: Memory block size: 128MB
Apr 12 18:57:11.798698 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:57:11.798705 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 12 18:57:11.798711 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:57:11.798718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:57:11.798725 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:57:11.798732 kernel: audit: type=2000 audit(1712948231.993:1): state=initialized audit_enabled=0 res=1
Apr 12 18:57:11.798738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:57:11.798745 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 12 18:57:11.798753 kernel: cpuidle: using governor menu
Apr 12 18:57:11.798759 kernel: ACPI: bus type PCI registered
Apr 12 18:57:11.798766 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:57:11.798773 kernel: dca service started, version 1.12.1
Apr 12 18:57:11.798779 kernel: PCI: Using configuration type 1 for base access
Apr 12 18:57:11.798786 kernel: PCI: Using configuration type 1 for extended access
Apr 12 18:57:11.798793 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 12 18:57:11.798800 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:57:11.798807 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:57:11.798815 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:57:11.798821 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:57:11.798829 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:57:11.798835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:57:11.798842 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:57:11.798848 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:57:11.798855 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:57:11.798862 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:57:11.798868 kernel: ACPI: Interpreter enabled
Apr 12 18:57:11.798875 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 12 18:57:11.798890 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 12 18:57:11.798897 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 12 18:57:11.798904 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 12 18:57:11.798911 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:57:11.799029 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:57:11.799041 kernel: acpiphp: Slot [3] registered
Apr 12 18:57:11.799048 kernel: acpiphp: Slot [4] registered
Apr 12 18:57:11.799055 kernel: acpiphp: Slot [5] registered
Apr 12 18:57:11.799063 kernel: acpiphp: Slot [6] registered
Apr 12 18:57:11.799080 kernel: acpiphp: Slot [7] registered
Apr 12 18:57:11.799100 kernel: acpiphp: Slot [8] registered
Apr 12 18:57:11.799107 kernel: acpiphp: Slot [9] registered
Apr 12 18:57:11.799114 kernel: acpiphp: Slot [10] registered
Apr 12 18:57:11.799121 kernel: acpiphp: Slot [11] registered
Apr 12 18:57:11.799127 kernel: acpiphp: Slot [12] registered
Apr 12 18:57:11.799134 kernel: acpiphp: Slot [13] registered
Apr 12 18:57:11.799140 kernel: acpiphp: Slot [14] registered
Apr 12 18:57:11.799149 kernel: acpiphp: Slot [15] registered
Apr 12 18:57:11.799156 kernel: acpiphp: Slot [16] registered
Apr 12 18:57:11.799162 kernel: acpiphp: Slot [17] registered
Apr 12 18:57:11.799169 kernel: acpiphp: Slot [18] registered
Apr 12 18:57:11.799175 kernel: acpiphp: Slot [19] registered
Apr 12 18:57:11.799182 kernel: acpiphp: Slot [20] registered
Apr 12 18:57:11.799189 kernel: acpiphp: Slot [21] registered
Apr 12 18:57:11.799195 kernel: acpiphp: Slot [22] registered
Apr 12 18:57:11.799202 kernel: acpiphp: Slot [23] registered
Apr 12 18:57:11.799208 kernel: acpiphp: Slot [24] registered
Apr 12 18:57:11.799216 kernel: acpiphp: Slot [25] registered
Apr 12 18:57:11.799223 kernel: acpiphp: Slot [26] registered
Apr 12 18:57:11.799229 kernel: acpiphp: Slot [27] registered
Apr 12 18:57:11.799236 kernel: acpiphp: Slot [28] registered
Apr 12 18:57:11.799243 kernel: acpiphp: Slot [29] registered
Apr 12 18:57:11.799249 kernel: acpiphp: Slot [30] registered
Apr 12 18:57:11.799256 kernel: acpiphp: Slot [31] registered
Apr 12 18:57:11.799262 kernel: PCI host bridge to bus 0000:00
Apr 12 18:57:11.799344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 12 18:57:11.799410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 12 18:57:11.799472 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 12 18:57:11.799532 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Apr 12 18:57:11.799593 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Apr 12 18:57:11.799653 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:57:11.799735 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 12 18:57:11.799822 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 12 18:57:11.799920 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Apr 12 18:57:11.799990 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Apr 12 18:57:11.800058 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Apr 12 18:57:11.800169 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Apr 12 18:57:11.800238 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Apr 12 18:57:11.800304 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Apr 12 18:57:11.800383 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 12 18:57:11.800450 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Apr 12 18:57:11.800517 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Apr 12 18:57:11.800589 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Apr 12 18:57:11.800659 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 12 18:57:11.800725 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 12 18:57:11.800796 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 12 18:57:11.800863 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 12 18:57:11.800947 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:57:11.801018 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Apr 12 18:57:11.801113 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 12 18:57:11.801185 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 12 18:57:11.801263 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 12 18:57:11.801337 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 12 18:57:11.801404 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 12 18:57:11.801472 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 12 18:57:11.801546 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Apr 12 18:57:11.801615 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Apr 12 18:57:11.801683 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 12 18:57:11.801751 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 12 18:57:11.801823 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 12 18:57:11.801832 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 12 18:57:11.801840 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 12 18:57:11.801846 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 12 18:57:11.801853 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 12 18:57:11.801860 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 12 18:57:11.801867 kernel: iommu: Default domain type: Translated
Apr 12 18:57:11.801873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 12 18:57:11.801949 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Apr 12 18:57:11.802019 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 12 18:57:11.802114 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Apr 12 18:57:11.802124 kernel: vgaarb: loaded
Apr 12 18:57:11.802131 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:57:11.802138 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:57:11.802145 kernel: PTP clock support registered
Apr 12 18:57:11.802151 kernel: PCI: Using ACPI for IRQ routing
Apr 12 18:57:11.802158 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 12 18:57:11.802165 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 12 18:57:11.802175 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Apr 12 18:57:11.802181 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 12 18:57:11.802188 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 12 18:57:11.802195 kernel: clocksource: Switched to clocksource kvm-clock
Apr 12 18:57:11.802202 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:57:11.802208 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:57:11.802215 kernel: pnp: PnP ACPI init
Apr 12 18:57:11.802299 kernel: pnp 00:02: [dma 2]
Apr 12 18:57:11.802311 kernel: pnp: PnP ACPI: found 6 devices
Apr 12 18:57:11.802318 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 12 18:57:11.802325 kernel: NET: Registered PF_INET protocol family
Apr 12 18:57:11.802332 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:57:11.802339 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:57:11.802345 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:57:11.802352 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:57:11.802359 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:57:11.802368 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:57:11.802374 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:57:11.802381 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:57:11.802388 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:57:11.802395 kernel: NET: Registered PF_XDP protocol family
Apr 12 18:57:11.802456 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 12 18:57:11.802515 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 12 18:57:11.802574 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 12 18:57:11.802633 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Apr 12 18:57:11.802695 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Apr 12 18:57:11.802765 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Apr 12 18:57:11.802835 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 12 18:57:11.802913 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Apr 12 18:57:11.802923 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:57:11.802930 kernel: Initialise system trusted keyrings
Apr 12 18:57:11.802937 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:57:11.802943 kernel: Key type asymmetric registered
Apr 12 18:57:11.802952 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:57:11.802959 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:57:11.802966 kernel: io scheduler mq-deadline registered
Apr 12 18:57:11.802973 kernel: io scheduler kyber registered
Apr 12 18:57:11.802980 kernel: io scheduler bfq registered
Apr 12 18:57:11.802986 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 12 18:57:11.802993 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 12 18:57:11.803000 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 12 18:57:11.803007 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 12 18:57:11.803015 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:57:11.803022 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 12 18:57:11.803029 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 12 18:57:11.803036 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 12 18:57:11.803042 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 12 18:57:11.803128 kernel: rtc_cmos 00:05: RTC can wake from S4
Apr 12 18:57:11.803138 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 12 18:57:11.803200 kernel: rtc_cmos 00:05: registered as rtc0
Apr 12 18:57:11.803268 kernel: rtc_cmos 00:05: setting system clock to 2024-04-12T18:57:11 UTC (1712948231)
Apr 12 18:57:11.803331 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 12 18:57:11.803340 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:57:11.803347 kernel: Segment Routing with IPv6
Apr 12 18:57:11.803354 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:57:11.803361 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:57:11.803368 kernel: Key type dns_resolver registered
Apr 12 18:57:11.803374 kernel: IPI shorthand broadcast: enabled
Apr 12 18:57:11.803381 kernel: sched_clock: Marking stable (398598975, 96981803)->(536904800, -41324022)
Apr 12 18:57:11.803390 kernel: registered taskstats version 1
Apr 12 18:57:11.803396 kernel: Loading compiled-in X.509 certificates
Apr 12 18:57:11.803404 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4'
Apr 12 18:57:11.803410 kernel: Key type .fscrypt registered
Apr 12 18:57:11.803417 kernel: Key type fscrypt-provisioning registered
Apr 12 18:57:11.803424 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:57:11.803431 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:57:11.803438 kernel: ima: No architecture policies found
Apr 12 18:57:11.803445 kernel: Freeing unused kernel image (initmem) memory: 47440K
Apr 12 18:57:11.803453 kernel: Write protecting the kernel read-only data: 28672k
Apr 12 18:57:11.803460 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Apr 12 18:57:11.803466 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K
Apr 12 18:57:11.803473 kernel: Run /init as init process
Apr 12 18:57:11.803480 kernel: with arguments:
Apr 12 18:57:11.803487 kernel: /init
Apr 12 18:57:11.803493 kernel: with environment:
Apr 12 18:57:11.803510 kernel: HOME=/
Apr 12 18:57:11.803519 kernel: TERM=linux
Apr 12 18:57:11.803527 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:57:11.803536 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:57:11.803545 systemd[1]: Detected virtualization kvm.
Apr 12 18:57:11.803553 systemd[1]: Detected architecture x86-64.
Apr 12 18:57:11.803560 systemd[1]: Running in initrd.
Apr 12 18:57:11.803567 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:57:11.803575 systemd[1]: Hostname set to .
Apr 12 18:57:11.803583 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:57:11.803591 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:57:11.803598 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:57:11.803605 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:57:11.803613 systemd[1]: Reached target paths.target.
Apr 12 18:57:11.803620 systemd[1]: Reached target slices.target.
Apr 12 18:57:11.803627 systemd[1]: Reached target swap.target.
Apr 12 18:57:11.803634 systemd[1]: Reached target timers.target.
Apr 12 18:57:11.803644 systemd[1]: Listening on iscsid.socket.
Apr 12 18:57:11.803651 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:57:11.803659 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:57:11.803666 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:57:11.803674 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:57:11.803681 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:57:11.803689 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:57:11.803696 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:57:11.803705 systemd[1]: Reached target sockets.target.
Apr 12 18:57:11.803713 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:57:11.803720 systemd[1]: Finished network-cleanup.service.
Apr 12 18:57:11.803728 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:57:11.803736 systemd[1]: Starting systemd-journald.service...
Apr 12 18:57:11.803743 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:57:11.803753 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:57:11.803760 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:57:11.803768 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:57:11.803775 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:57:11.803783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:57:11.803797 systemd-journald[198]: Journal started
Apr 12 18:57:11.803834 systemd-journald[198]: Runtime Journal (/run/log/journal/2f2228a829314743833acc2707999a61) is 6.0M, max 48.5M, 42.5M free.
Apr 12 18:57:11.793991 systemd-modules-load[199]: Inserted module 'overlay'
Apr 12 18:57:11.837201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:57:11.837215 kernel: Bridge firewalling registered
Apr 12 18:57:11.837228 systemd[1]: Started systemd-journald.service.
Apr 12 18:57:11.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.817471 systemd-resolved[200]: Positive Trust Anchors:
Apr 12 18:57:11.845279 kernel: audit: type=1130 audit(1712948231.837:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.845294 kernel: audit: type=1130 audit(1712948231.840:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.845303 kernel: audit: type=1130 audit(1712948231.845:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.817481 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:57:11.855308 kernel: SCSI subsystem initialized
Apr 12 18:57:11.855324 kernel: audit: type=1130 audit(1712948231.849:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:57:11.817508 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:57:11.819704 systemd-resolved[200]: Defaulting to hostname 'linux'.
Apr 12 18:57:11.825538 systemd-modules-load[199]: Inserted module 'br_netfilter'
Apr 12 18:57:11.837302 systemd[1]: Started systemd-resolved.service.
Apr 12 18:57:11.841010 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:57:11.868165 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 12 18:57:11.868178 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:57:11.868187 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Apr 12 18:57:11.845407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:57:11.849270 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:57:11.853012 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:57:11.875120 kernel: audit: type=1130 audit(1712948231.869:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:11.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:11.868301 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:57:11.870692 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:57:11.874243 systemd-modules-load[199]: Inserted module 'dm_multipath' Apr 12 18:57:11.881447 kernel: audit: type=1130 audit(1712948231.877:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:11.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:11.881535 dracut-cmdline[216]: dracut-dracut-053 Apr 12 18:57:11.881535 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:57:11.875998 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:57:11.888018 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:57:11.894821 systemd[1]: Finished systemd-sysctl.service. 
Apr 12 18:57:11.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:11.900117 kernel: audit: type=1130 audit(1712948231.896:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:11.935102 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:57:11.951094 kernel: iscsi: registered transport (tcp) Apr 12 18:57:11.972102 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:57:11.972124 kernel: QLogic iSCSI HBA Driver Apr 12 18:57:11.999739 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:57:12.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:12.002135 systemd[1]: Starting dracut-pre-udev.service... Apr 12 18:57:12.005613 kernel: audit: type=1130 audit(1712948232.001:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:12.047095 kernel: raid6: avx2x4 gen() 30312 MB/s Apr 12 18:57:12.064096 kernel: raid6: avx2x4 xor() 7464 MB/s Apr 12 18:57:12.081128 kernel: raid6: avx2x2 gen() 32408 MB/s Apr 12 18:57:12.098094 kernel: raid6: avx2x2 xor() 19253 MB/s Apr 12 18:57:12.115093 kernel: raid6: avx2x1 gen() 26604 MB/s Apr 12 18:57:12.132093 kernel: raid6: avx2x1 xor() 15385 MB/s Apr 12 18:57:12.149100 kernel: raid6: sse2x4 gen() 14745 MB/s Apr 12 18:57:12.166104 kernel: raid6: sse2x4 xor() 7103 MB/s Apr 12 18:57:12.183098 kernel: raid6: sse2x2 gen() 16223 MB/s Apr 12 18:57:12.200096 kernel: raid6: sse2x2 xor() 9848 MB/s Apr 12 18:57:12.217106 kernel: raid6: sse2x1 gen() 12461 MB/s Apr 12 18:57:12.234475 kernel: raid6: sse2x1 xor() 7833 MB/s Apr 12 18:57:12.234489 kernel: raid6: using algorithm avx2x2 gen() 32408 MB/s Apr 12 18:57:12.234498 kernel: raid6: .... xor() 19253 MB/s, rmw enabled Apr 12 18:57:12.235195 kernel: raid6: using avx2x2 recovery algorithm Apr 12 18:57:12.247095 kernel: xor: automatically using best checksumming function avx Apr 12 18:57:12.335095 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 18:57:12.343214 systemd[1]: Finished dracut-pre-udev.service. Apr 12 18:57:12.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:12.347000 audit: BPF prog-id=7 op=LOAD Apr 12 18:57:12.347000 audit: BPF prog-id=8 op=LOAD Apr 12 18:57:12.348099 kernel: audit: type=1130 audit(1712948232.344:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:12.348137 systemd[1]: Starting systemd-udevd.service... Apr 12 18:57:12.359512 systemd-udevd[401]: Using default interface naming scheme 'v252'. 
Apr 12 18:57:12.363238 systemd[1]: Started systemd-udevd.service. Apr 12 18:57:12.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:12.366314 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:57:12.375582 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Apr 12 18:57:12.400445 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:57:12.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:12.402676 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:57:12.434889 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:57:12.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:12.461106 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:57:12.463160 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 12 18:57:12.478670 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:57:12.478692 kernel: GPT:9289727 != 19775487 Apr 12 18:57:12.478707 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:57:12.478715 kernel: GPT:9289727 != 19775487 Apr 12 18:57:12.478723 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:57:12.478731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:57:12.478741 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 18:57:12.482094 kernel: AES CTR mode by8 optimization enabled Apr 12 18:57:12.485720 kernel: libata version 3.00 loaded. 
Apr 12 18:57:12.488102 kernel: ata_piix 0000:00:01.1: version 2.13 Apr 12 18:57:12.490132 kernel: scsi host0: ata_piix Apr 12 18:57:12.490247 kernel: scsi host1: ata_piix Apr 12 18:57:12.490330 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Apr 12 18:57:12.490342 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Apr 12 18:57:12.492084 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456) Apr 12 18:57:12.496496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:57:12.526237 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:57:12.533968 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:57:12.540984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:57:12.545875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:57:12.548417 systemd[1]: Starting disk-uuid.service... Apr 12 18:57:12.556881 disk-uuid[516]: Primary Header is updated. Apr 12 18:57:12.556881 disk-uuid[516]: Secondary Entries is updated. Apr 12 18:57:12.556881 disk-uuid[516]: Secondary Header is updated. Apr 12 18:57:12.560451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:57:12.563093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:57:12.566093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:57:12.651106 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 12 18:57:12.653189 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 12 18:57:12.683331 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 12 18:57:12.683539 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:57:12.701098 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 12 18:57:13.565568 disk-uuid[518]: The operation has completed successfully. 
Apr 12 18:57:13.566921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 12 18:57:13.586562 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:57:13.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.586641 systemd[1]: Finished disk-uuid.service. Apr 12 18:57:13.590662 systemd[1]: Starting verity-setup.service... Apr 12 18:57:13.603091 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 12 18:57:13.620203 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:57:13.622186 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:57:13.623976 systemd[1]: Finished verity-setup.service. Apr 12 18:57:13.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.679094 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:57:13.679348 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:57:13.680207 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:57:13.680804 systemd[1]: Starting ignition-setup.service... Apr 12 18:57:13.683355 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:57:13.689488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:57:13.689513 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:57:13.689522 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:57:13.697051 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 12 18:57:13.704957 systemd[1]: Finished ignition-setup.service. Apr 12 18:57:13.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.707386 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:57:13.739969 ignition[635]: Ignition 2.14.0 Apr 12 18:57:13.739981 ignition[635]: Stage: fetch-offline Apr 12 18:57:13.740023 ignition[635]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:57:13.740031 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:57:13.740130 ignition[635]: parsed url from cmdline: "" Apr 12 18:57:13.740133 ignition[635]: no config URL provided Apr 12 18:57:13.740137 ignition[635]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:57:13.740143 ignition[635]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:57:13.740162 ignition[635]: op(1): [started] loading QEMU firmware config module Apr 12 18:57:13.740167 ignition[635]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 12 18:57:13.750178 ignition[635]: op(1): [finished] loading QEMU firmware config module Apr 12 18:57:13.753311 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:57:13.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.755000 audit: BPF prog-id=9 op=LOAD Apr 12 18:57:13.755814 systemd[1]: Starting systemd-networkd.service... 
Apr 12 18:57:13.827283 ignition[635]: parsing config with SHA512: 4b8ba42d6326d3e2d447b15099e45b7c204c514e94f571c98faa5c36f8b3c00c3e295d7b7e6b686a9656ac63a10b92cba84fe7c6a09c0790b99c544cf9ce18ee Apr 12 18:57:13.843601 systemd-networkd[713]: lo: Link UP Apr 12 18:57:13.843611 systemd-networkd[713]: lo: Gained carrier Apr 12 18:57:13.845324 systemd-networkd[713]: Enumeration completed Apr 12 18:57:13.845398 systemd[1]: Started systemd-networkd.service. Apr 12 18:57:13.846929 systemd[1]: Reached target network.target. Apr 12 18:57:13.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.848048 systemd[1]: Starting iscsiuio.service... Apr 12 18:57:13.849011 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:57:13.851815 systemd[1]: Started iscsiuio.service. Apr 12 18:57:13.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.853333 systemd[1]: Starting iscsid.service... Apr 12 18:57:13.854958 systemd-networkd[713]: eth0: Link UP Apr 12 18:57:13.854964 systemd-networkd[713]: eth0: Gained carrier Apr 12 18:57:13.856891 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:57:13.856891 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Apr 12 18:57:13.856891 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:57:13.856891 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:57:13.856891 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:57:13.856891 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:57:13.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.867372 ignition[635]: fetch-offline: fetch-offline passed Apr 12 18:57:13.858382 systemd[1]: Started iscsid.service. Apr 12 18:57:13.867428 ignition[635]: Ignition finished successfully Apr 12 18:57:13.866754 unknown[635]: fetched base config from "system" Apr 12 18:57:13.866760 unknown[635]: fetched user config from "qemu" Apr 12 18:57:13.872904 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:57:13.874592 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:57:13.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.876356 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 12 18:57:13.877355 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:57:13.879748 systemd[1]: Starting ignition-kargs.service... Apr 12 18:57:13.882770 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:57:13.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:13.884433 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:57:13.886048 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:57:13.887720 systemd[1]: Reached target remote-fs.target. Apr 12 18:57:13.887920 ignition[726]: Ignition 2.14.0 Apr 12 18:57:13.887924 ignition[726]: Stage: kargs Apr 12 18:57:13.887998 ignition[726]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:57:13.888007 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:57:13.889114 ignition[726]: kargs: kargs passed Apr 12 18:57:13.889147 ignition[726]: Ignition finished successfully Apr 12 18:57:13.893981 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:57:13.895618 systemd[1]: Finished ignition-kargs.service. Apr 12 18:57:13.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.897796 systemd[1]: Starting ignition-disks.service... Apr 12 18:57:13.901565 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:57:13.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.903481 ignition[735]: Ignition 2.14.0 Apr 12 18:57:13.903490 ignition[735]: Stage: disks Apr 12 18:57:13.903565 ignition[735]: no configs at "/usr/lib/ignition/base.d" Apr 12 18:57:13.903574 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:57:13.904640 ignition[735]: disks: disks passed Apr 12 18:57:13.904672 ignition[735]: Ignition finished successfully Apr 12 18:57:13.908493 systemd[1]: Finished ignition-disks.service. 
Apr 12 18:57:13.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.910061 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:57:13.910975 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:57:13.911785 systemd[1]: Reached target local-fs.target. Apr 12 18:57:13.913377 systemd[1]: Reached target sysinit.target. Apr 12 18:57:13.914147 systemd[1]: Reached target basic.target. Apr 12 18:57:13.916368 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:57:13.925294 systemd-fsck[747]: ROOT: clean, 612/553520 files, 56019/553472 blocks Apr 12 18:57:13.930128 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:57:13.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.932227 systemd[1]: Mounting sysroot.mount... Apr 12 18:57:13.937936 systemd[1]: Mounted sysroot.mount. Apr 12 18:57:13.940140 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:57:13.938688 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:57:13.940937 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:57:13.941885 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:57:13.941913 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:57:13.941931 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:57:13.943779 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:57:13.945892 systemd[1]: Starting initrd-setup-root.service... 
Apr 12 18:57:13.952505 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:57:13.956620 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:57:13.959092 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:57:13.962508 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:57:13.984012 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:57:13.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:13.984894 systemd[1]: Starting ignition-mount.service... Apr 12 18:57:13.986261 systemd[1]: Starting sysroot-boot.service... Apr 12 18:57:13.991494 bash[799]: umount: /sysroot/usr/share/oem: not mounted. Apr 12 18:57:13.998634 ignition[800]: INFO : Ignition 2.14.0 Apr 12 18:57:13.998634 ignition[800]: INFO : Stage: mount Apr 12 18:57:14.000157 ignition[800]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:57:14.000157 ignition[800]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:57:14.000157 ignition[800]: INFO : mount: mount passed Apr 12 18:57:14.000157 ignition[800]: INFO : Ignition finished successfully Apr 12 18:57:14.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:14.002488 systemd[1]: Finished sysroot-boot.service. Apr 12 18:57:14.005715 systemd[1]: Finished ignition-mount.service. Apr 12 18:57:14.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:14.630967 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Apr 12 18:57:14.636090 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808) Apr 12 18:57:14.638142 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:57:14.638162 kernel: BTRFS info (device vda6): using free space tree Apr 12 18:57:14.638171 kernel: BTRFS info (device vda6): has skinny extents Apr 12 18:57:14.641564 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:57:14.642501 systemd[1]: Starting ignition-files.service... Apr 12 18:57:14.654456 ignition[828]: INFO : Ignition 2.14.0 Apr 12 18:57:14.654456 ignition[828]: INFO : Stage: files Apr 12 18:57:14.655997 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:57:14.655997 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:57:14.659175 ignition[828]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:57:14.660438 ignition[828]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:57:14.660438 ignition[828]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:57:14.663568 ignition[828]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:57:14.665121 ignition[828]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:57:14.665121 ignition[828]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:57:14.665121 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:57:14.665121 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 12 18:57:14.664289 unknown[828]: wrote ssh authorized keys file for user: core Apr 12 18:57:14.797404 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): GET result: OK Apr 12 18:57:14.853832 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:57:14.855890 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:57:14.855890 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Apr 12 18:57:14.953178 systemd-networkd[713]: eth0: Gained IPv6LL Apr 12 18:57:15.225545 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:57:15.309681 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Apr 12 18:57:15.312538 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Apr 12 18:57:15.314365 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:57:15.316209 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Apr 12 18:57:15.587316 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:57:15.724876 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Apr 12 18:57:15.727990 ignition[828]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:57:15.727990 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:57:15.727990 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:57:15.727990 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:57:15.727990 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Apr 12 18:57:15.801572 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 18:57:15.961525 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Apr 12 18:57:15.961525 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:57:15.966101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:57:15.966101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Apr 12 18:57:16.016462 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 18:57:16.803624 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Apr 12 18:57:16.806785 ignition[828]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:57:16.806785 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:57:16.806785 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Apr 12 18:57:16.859785 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Apr 12 18:57:17.017031 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Apr 12 18:57:17.020010 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:57:17.020010 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:57:17.020010 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 12 18:57:17.285214 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" 
Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:57:17.377101 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:57:17.377101 ignition[828]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:57:17.377101 ignition[828]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:57:17.377101 ignition[828]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:57:17.377101 ignition[828]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:57:17.377101 ignition[828]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:57:17.408230 ignition[828]: 
INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(1a): [started] setting preset to enabled for 
"prepare-helm.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service" Apr 12 18:57:17.408230 ignition[828]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:57:17.464757 kernel: kauditd_printk_skb: 23 callbacks suppressed Apr 12 18:57:17.464783 kernel: audit: type=1130 audit(1712948237.411:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.464795 kernel: audit: type=1130 audit(1712948237.422:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.464805 kernel: audit: type=1130 audit(1712948237.427:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.464815 kernel: audit: type=1131 audit(1712948237.427:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.464825 kernel: audit: type=1130 audit(1712948237.452:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.464834 kernel: audit: type=1131 audit(1712948237.452:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:17.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:17.465004 ignition[828]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 12 18:57:17.465004 ignition[828]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service" Apr 12 18:57:17.465004 ignition[828]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:57:17.465004 ignition[828]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:57:17.465004 ignition[828]: INFO : files: files passed Apr 12 18:57:17.465004 ignition[828]: INFO : Ignition finished successfully Apr 12 18:57:17.410149 systemd[1]: Finished ignition-files.service. Apr 12 18:57:17.412486 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 18:57:17.475030 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Apr 12 18:57:17.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.418021 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:57:17.482695 kernel: audit: type=1130 audit(1712948237.477:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.482713 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 18:57:17.418901 systemd[1]: Starting ignition-quench.service... Apr 12 18:57:17.420206 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Apr 12 18:57:17.422439 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 12 18:57:17.422500 systemd[1]: Finished ignition-quench.service. Apr 12 18:57:17.428081 systemd[1]: Reached target ignition-complete.target. Apr 12 18:57:17.437016 systemd[1]: Starting initrd-parse-etc.service... Apr 12 18:57:17.450969 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 18:57:17.451034 systemd[1]: Finished initrd-parse-etc.service. Apr 12 18:57:17.452683 systemd[1]: Reached target initrd-fs.target. Apr 12 18:57:17.460391 systemd[1]: Reached target initrd.target. Apr 12 18:57:17.499878 kernel: audit: type=1131 audit(1712948237.495:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.462571 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 18:57:17.532431 kernel: audit: type=1131 audit(1712948237.500:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.532446 kernel: audit: type=1131 audit(1712948237.502:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:17.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:17.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.463172 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 18:57:17.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.475676 systemd[1]: Finished dracut-pre-pivot.service. Apr 12 18:57:17.535698 ignition[869]: INFO : Ignition 2.14.0 Apr 12 18:57:17.535698 ignition[869]: INFO : Stage: umount Apr 12 18:57:17.535698 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 12 18:57:17.535698 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 12 18:57:17.535698 ignition[869]: INFO : umount: umount passed Apr 12 18:57:17.535698 ignition[869]: INFO : Ignition finished successfully Apr 12 18:57:17.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:17.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.477869 systemd[1]: Starting initrd-cleanup.service... Apr 12 18:57:17.542000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:57:17.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.489573 systemd[1]: Stopped target nss-lookup.target. Apr 12 18:57:17.490622 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 18:57:17.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.492499 systemd[1]: Stopped target timers.target. Apr 12 18:57:17.494079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 18:57:17.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.494142 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 18:57:17.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.495668 systemd[1]: Stopped target initrd.target. Apr 12 18:57:17.499899 systemd[1]: Stopped target basic.target. Apr 12 18:57:17.500354 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:57:17.500512 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:57:17.500674 systemd[1]: Stopped target initrd-root-device.target. 
Apr 12 18:57:17.500853 systemd[1]: Stopped target remote-fs.target. Apr 12 18:57:17.501014 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 18:57:17.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.501356 systemd[1]: Stopped target sysinit.target. Apr 12 18:57:17.501518 systemd[1]: Stopped target local-fs.target. Apr 12 18:57:17.501680 systemd[1]: Stopped target local-fs-pre.target. Apr 12 18:57:17.501851 systemd[1]: Stopped target swap.target. Apr 12 18:57:17.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.502023 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 18:57:17.502067 systemd[1]: Stopped dracut-pre-mount.service. Apr 12 18:57:17.502229 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:57:17.505482 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:57:17.505525 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:57:17.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.505691 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 18:57:17.505737 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:57:17.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Apr 12 18:57:17.508977 systemd[1]: Stopped target paths.target. Apr 12 18:57:17.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.509275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:57:17.513119 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:57:17.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.513439 systemd[1]: Stopped target slices.target. Apr 12 18:57:17.513604 systemd[1]: Stopped target sockets.target. Apr 12 18:57:17.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:17.513803 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:57:17.513831 systemd[1]: Closed iscsid.socket. Apr 12 18:57:17.513989 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 18:57:17.514027 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 18:57:17.514311 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:57:17.514347 systemd[1]: Stopped ignition-files.service. 
Apr 12 18:57:17.515289 systemd[1]: Stopping ignition-mount.service... Apr 12 18:57:17.515726 systemd[1]: Stopping iscsiuio.service... Apr 12 18:57:17.516375 systemd[1]: Stopping sysroot-boot.service... Apr 12 18:57:17.516639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:57:17.516689 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:57:17.516858 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:57:17.516896 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:57:17.517417 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 18:57:17.517504 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:57:17.519068 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 18:57:17.519166 systemd[1]: Stopped iscsiuio.service. Apr 12 18:57:17.519535 systemd[1]: Stopped target network.target. Apr 12 18:57:17.519619 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:57:17.519644 systemd[1]: Closed iscsiuio.socket. Apr 12 18:57:17.519861 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:57:17.519994 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:57:17.526672 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:57:17.526749 systemd[1]: Stopped ignition-mount.service. Apr 12 18:57:17.526994 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:57:17.527025 systemd[1]: Stopped ignition-disks.service. Apr 12 18:57:17.527290 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:57:17.527314 systemd[1]: Stopped ignition-kargs.service. Apr 12 18:57:17.527452 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:57:17.527476 systemd[1]: Stopped ignition-setup.service. Apr 12 18:57:17.532542 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 12 18:57:17.532615 systemd[1]: Stopped systemd-resolved.service. 
Apr 12 18:57:17.535177 systemd-networkd[713]: eth0: DHCPv6 lease lost Apr 12 18:57:17.611000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:57:17.536558 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:57:17.536872 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:57:17.536941 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:57:17.541331 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 12 18:57:17.541396 systemd[1]: Stopped sysroot-boot.service. Apr 12 18:57:17.542944 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:57:17.542967 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:57:17.544366 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:57:17.621991 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Apr 12 18:57:17.622016 iscsid[719]: iscsid shutting down. Apr 12 18:57:17.544396 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:57:17.545904 systemd[1]: Stopping network-cleanup.service... Apr 12 18:57:17.546741 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 18:57:17.546795 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:57:17.548605 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:57:17.548650 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:57:17.550289 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 18:57:17.550321 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:57:17.552178 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:57:17.554560 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:57:17.557895 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:57:17.557967 systemd[1]: Stopped network-cleanup.service. Apr 12 18:57:17.561739 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:57:17.561882 systemd[1]: Stopped systemd-udevd.service. 
Apr 12 18:57:17.565108 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:57:17.565150 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:57:17.566715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:57:17.566762 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 18:57:17.568304 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:57:17.568350 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:57:17.570123 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:57:17.570167 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:57:17.571633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:57:17.571674 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:57:17.574257 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Apr 12 18:57:17.575106 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 12 18:57:17.575150 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Apr 12 18:57:17.576862 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 18:57:17.576896 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:57:17.578721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:57:17.578773 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:57:17.581155 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 12 18:57:17.581491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:57:17.581552 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 18:57:17.582876 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:57:17.585230 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:57:17.600945 systemd[1]: Switching root. 
Apr 12 18:57:17.632504 systemd-journald[198]: Journal stopped Apr 12 18:57:20.881920 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:57:20.881965 kernel: SELinux: Class anon_inode not defined in policy. Apr 12 18:57:20.881976 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 18:57:20.881986 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 18:57:20.881998 kernel: SELinux: policy capability open_perms=1 Apr 12 18:57:20.882007 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 18:57:20.882016 kernel: SELinux: policy capability always_check_network=0 Apr 12 18:57:20.882027 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 18:57:20.882037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 18:57:20.882050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 18:57:20.882059 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 18:57:20.882096 systemd[1]: Successfully loaded SELinux policy in 48.052ms. Apr 12 18:57:20.882117 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.369ms. Apr 12 18:57:20.882128 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:57:20.882138 systemd[1]: Detected virtualization kvm. Apr 12 18:57:20.882148 systemd[1]: Detected architecture x86-64. Apr 12 18:57:20.882159 systemd[1]: Detected first boot. Apr 12 18:57:20.882170 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:57:20.882180 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 18:57:20.882189 systemd[1]: Populated /etc with preset unit settings. 
Apr 12 18:57:20.882200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:57:20.882211 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:57:20.882222 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:57:20.882241 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 18:57:20.882251 systemd[1]: Stopped iscsid.service. Apr 12 18:57:20.882261 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 12 18:57:20.882271 systemd[1]: Stopped initrd-switch-root.service. Apr 12 18:57:20.882282 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 18:57:20.882292 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:57:20.882302 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:57:20.882315 systemd[1]: Created slice system-getty.slice. Apr 12 18:57:20.882326 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:57:20.882336 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:57:20.882346 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:57:20.882356 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:57:20.882366 systemd[1]: Created slice user.slice. Apr 12 18:57:20.882377 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:57:20.882387 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:57:20.882397 systemd[1]: Set up automount boot.automount. Apr 12 18:57:20.882408 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:57:20.882419 systemd[1]: Stopped target initrd-switch-root.target. 
Apr 12 18:57:20.882429 systemd[1]: Stopped target initrd-fs.target. Apr 12 18:57:20.882439 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 18:57:20.882448 systemd[1]: Reached target integritysetup.target. Apr 12 18:57:20.882459 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:57:20.882469 systemd[1]: Reached target remote-fs.target. Apr 12 18:57:20.882479 systemd[1]: Reached target slices.target. Apr 12 18:57:20.882490 systemd[1]: Reached target swap.target. Apr 12 18:57:20.882501 systemd[1]: Reached target torcx.target. Apr 12 18:57:20.882511 systemd[1]: Reached target veritysetup.target. Apr 12 18:57:20.882522 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:57:20.882532 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:57:20.882542 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:57:20.882552 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:57:20.882562 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:57:20.882572 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:57:20.882582 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:57:20.882592 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:57:20.882602 systemd[1]: Mounting media.mount... Apr 12 18:57:20.882612 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 18:57:20.882622 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 18:57:20.882632 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:57:20.882649 systemd[1]: Mounting tmp.mount... Apr 12 18:57:20.882659 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 18:57:20.882669 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:57:20.882680 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:57:20.882690 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:57:20.882701 systemd[1]: Starting modprobe@dm_mod.service... 
Apr 12 18:57:20.882711 systemd[1]: Starting modprobe@drm.service... Apr 12 18:57:20.882721 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:57:20.882731 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:57:20.882741 systemd[1]: Starting modprobe@loop.service... Apr 12 18:57:20.882752 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 18:57:20.882761 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 18:57:20.882771 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 18:57:20.882781 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 18:57:20.882792 kernel: loop: module loaded Apr 12 18:57:20.882802 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 18:57:20.882813 systemd[1]: Stopped systemd-journald.service. Apr 12 18:57:20.882823 kernel: fuse: init (API version 7.34) Apr 12 18:57:20.882833 systemd[1]: Starting systemd-journald.service... Apr 12 18:57:20.882843 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:57:20.882853 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:57:20.882863 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:57:20.882873 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:57:20.882885 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 18:57:20.882896 systemd[1]: Stopped verity-setup.service. Apr 12 18:57:20.882907 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 18:57:20.882919 systemd-journald[988]: Journal started Apr 12 18:57:20.882960 systemd-journald[988]: Runtime Journal (/run/log/journal/2f2228a829314743833acc2707999a61) is 6.0M, max 48.5M, 42.5M free. 
Apr 12 18:57:17.692000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 18:57:18.689000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:57:18.689000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:57:18.689000 audit: BPF prog-id=10 op=LOAD Apr 12 18:57:18.689000 audit: BPF prog-id=10 op=UNLOAD Apr 12 18:57:18.689000 audit: BPF prog-id=11 op=LOAD Apr 12 18:57:18.689000 audit: BPF prog-id=11 op=UNLOAD Apr 12 18:57:18.718000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 18:57:18.718000 audit[903]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b2 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:57:18.718000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:57:18.719000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 18:57:18.719000 audit[903]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5989 a2=1ed a3=0 items=2 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:57:18.719000 audit: CWD cwd="/" Apr 12 18:57:18.719000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:18.719000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:18.719000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:57:20.754000 audit: BPF prog-id=12 op=LOAD Apr 12 18:57:20.754000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:57:20.754000 audit: BPF prog-id=13 op=LOAD Apr 12 18:57:20.754000 audit: BPF prog-id=14 op=LOAD Apr 12 18:57:20.755000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:57:20.755000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:57:20.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:20.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.772000 audit: BPF prog-id=12 op=UNLOAD Apr 12 18:57:20.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.863000 audit: BPF prog-id=15 op=LOAD Apr 12 18:57:20.863000 audit: BPF prog-id=16 op=LOAD Apr 12 18:57:20.863000 audit: BPF prog-id=17 op=LOAD Apr 12 18:57:20.863000 audit: BPF prog-id=13 op=UNLOAD Apr 12 18:57:20.863000 audit: BPF prog-id=14 op=UNLOAD Apr 12 18:57:20.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:20.880000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 18:57:20.880000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc5d3c1660 a2=4000 a3=7ffc5d3c16fc items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:57:20.880000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 18:57:18.716345 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:57:20.753003 systemd[1]: Queued start job for default target multi-user.target. Apr 12 18:57:20.884828 systemd[1]: Started systemd-journald.service. Apr 12 18:57:20.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:18.716652 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:57:20.753013 systemd[1]: Unnecessary job was removed for dev-vda6.device. Apr 12 18:57:18.716672 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:57:20.755734 systemd[1]: systemd-journald.service: Deactivated successfully. 
Apr 12 18:57:18.716710 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Apr 12 18:57:20.884905 systemd[1]: Mounted dev-hugepages.mount. Apr 12 18:57:18.716720 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Apr 12 18:57:20.885738 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:57:18.716755 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Apr 12 18:57:20.886531 systemd[1]: Mounted media.mount. Apr 12 18:57:18.716770 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Apr 12 18:57:20.887270 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:57:18.716952 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Apr 12 18:57:20.888167 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:57:18.716987 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:57:20.889046 systemd[1]: Mounted tmp.mount. Apr 12 18:57:20.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:18.716999 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:57:20.889960 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:57:18.717620 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Apr 12 18:57:18.717654 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Apr 12 18:57:20.891119 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:57:20.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:18.717671 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3 Apr 12 18:57:18.717685 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Apr 12 18:57:18.717709 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3 Apr 12 18:57:20.892203 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Apr 12 18:57:18.717722 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Apr 12 18:57:20.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.892384 systemd[1]: Finished modprobe@configfs.service. Apr 12 18:57:20.500733 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:57:20.500983 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:57:20.501068 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:57:20.893510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 12 18:57:20.501235 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:57:20.501282 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Apr 12 18:57:20.501333 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-04-12T18:57:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Apr 12 18:57:20.893678 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 18:57:20.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.894756 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 18:57:20.894926 systemd[1]: Finished modprobe@drm.service. Apr 12 18:57:20.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:20.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.895905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 18:57:20.896064 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 18:57:20.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.897159 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 18:57:20.897370 systemd[1]: Finished modprobe@fuse.service. Apr 12 18:57:20.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.898339 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 12 18:57:20.898492 systemd[1]: Finished modprobe@loop.service. Apr 12 18:57:20.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:20.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.899603 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:57:20.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.900771 systemd[1]: Finished systemd-network-generator.service. Apr 12 18:57:20.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.901956 systemd[1]: Finished systemd-remount-fs.service. Apr 12 18:57:20.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.903174 systemd[1]: Reached target network-pre.target. Apr 12 18:57:20.905079 systemd[1]: Mounting sys-fs-fuse-connections.mount... Apr 12 18:57:20.906846 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 18:57:20.907584 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 12 18:57:20.908799 systemd[1]: Starting systemd-hwdb-update.service... Apr 12 18:57:20.910469 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:57:20.911369 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 18:57:20.912279 systemd[1]: Starting systemd-random-seed.service... 
Apr 12 18:57:20.915858 systemd-journald[988]: Time spent on flushing to /var/log/journal/2f2228a829314743833acc2707999a61 is 18.725ms for 1124 entries. Apr 12 18:57:20.915858 systemd-journald[988]: System Journal (/var/log/journal/2f2228a829314743833acc2707999a61) is 8.0M, max 195.6M, 187.6M free. Apr 12 18:57:20.948190 systemd-journald[988]: Received client request to flush runtime journal. Apr 12 18:57:20.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.916196 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 18:57:20.917220 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:57:20.919438 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:57:20.921876 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:57:20.948870 udevadm[1007]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 12 18:57:20.922879 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:57:20.925372 systemd[1]: Finished systemd-random-seed.service. 
Apr 12 18:57:20.926441 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:57:20.928186 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:57:20.930212 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:57:20.933709 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:57:20.936913 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:57:20.938693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:57:20.948982 systemd[1]: Finished systemd-journal-flush.service. Apr 12 18:57:20.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:20.956008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:57:20.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.338940 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 18:57:21.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.340000 audit: BPF prog-id=18 op=LOAD Apr 12 18:57:21.340000 audit: BPF prog-id=19 op=LOAD Apr 12 18:57:21.340000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:57:21.340000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:57:21.341128 systemd[1]: Starting systemd-udevd.service... Apr 12 18:57:21.355396 systemd-udevd[1011]: Using default interface naming scheme 'v252'. Apr 12 18:57:21.367768 systemd[1]: Started systemd-udevd.service. 
Apr 12 18:57:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.369000 audit: BPF prog-id=20 op=LOAD Apr 12 18:57:21.370547 systemd[1]: Starting systemd-networkd.service... Apr 12 18:57:21.374000 audit: BPF prog-id=21 op=LOAD Apr 12 18:57:21.375000 audit: BPF prog-id=22 op=LOAD Apr 12 18:57:21.375000 audit: BPF prog-id=23 op=LOAD Apr 12 18:57:21.375813 systemd[1]: Starting systemd-userdbd.service... Apr 12 18:57:21.386117 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Apr 12 18:57:21.400409 systemd[1]: Started systemd-userdbd.service. Apr 12 18:57:21.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.426144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:57:21.440098 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 12 18:57:21.443346 systemd-networkd[1021]: lo: Link UP Apr 12 18:57:21.443356 systemd-networkd[1021]: lo: Gained carrier Apr 12 18:57:21.443780 systemd-networkd[1021]: Enumeration completed Apr 12 18:57:21.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.443862 systemd[1]: Started systemd-networkd.service. Apr 12 18:57:21.443907 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 12 18:57:21.444954 systemd-networkd[1021]: eth0: Link UP Apr 12 18:57:21.444958 systemd-networkd[1021]: eth0: Gained carrier Apr 12 18:57:21.446111 kernel: ACPI: button: Power Button [PWRF] Apr 12 18:57:21.450000 audit[1014]: AVC avc: denied { confidentiality } for pid=1014 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Apr 12 18:57:21.457179 systemd-networkd[1021]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 12 18:57:21.450000 audit[1014]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559a69fc1a80 a1=32194 a2=7f8db4a71bc5 a3=5 items=108 ppid=1011 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:57:21.450000 audit: CWD cwd="/" Apr 12 18:57:21.450000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=1 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=2 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=3 name=(null) inode=14565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=4 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=5 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=6 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=7 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=8 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=9 name=(null) inode=14568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=10 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=11 name=(null) inode=14569 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=12 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=13 name=(null) inode=14570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 
12 18:57:21.450000 audit: PATH item=14 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=15 name=(null) inode=14571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=16 name=(null) inode=14567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=17 name=(null) inode=14572 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=18 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=19 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=20 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=21 name=(null) inode=14574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=22 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=23 
name=(null) inode=14575 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=24 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=25 name=(null) inode=14576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=26 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=27 name=(null) inode=14577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=28 name=(null) inode=14573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=29 name=(null) inode=14578 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=30 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=31 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=32 name=(null) inode=14579 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=33 name=(null) inode=14580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=34 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=35 name=(null) inode=14581 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=36 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=37 name=(null) inode=14582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=38 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=39 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=40 name=(null) inode=14579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=41 name=(null) inode=14584 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=42 name=(null) inode=14564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=43 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=44 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=45 name=(null) inode=14586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=46 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=47 name=(null) inode=14587 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=48 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=49 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=50 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=51 name=(null) inode=14589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=52 name=(null) inode=14585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=53 name=(null) inode=14590 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=55 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=56 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=57 name=(null) inode=14592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=58 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=59 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=60 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=61 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=62 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=63 name=(null) inode=14595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=64 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=65 name=(null) inode=14596 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=66 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=67 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=68 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:57:21.450000 audit: PATH item=69 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=70 name=(null) inode=14594 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=71 name=(null) inode=14599 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=72 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=73 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=74 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=75 name=(null) inode=14601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=76 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=77 name=(null) inode=14602 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=78 
name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=79 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=80 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=81 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=82 name=(null) inode=14600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=83 name=(null) inode=14605 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=84 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=85 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=86 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=87 name=(null) inode=14607 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=88 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=89 name=(null) inode=14608 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=90 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=91 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=92 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=93 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=94 name=(null) inode=14606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=95 name=(null) inode=14611 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=96 name=(null) inode=14591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=97 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=98 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=99 name=(null) inode=14613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=100 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=101 name=(null) inode=14614 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=102 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=103 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=104 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=105 name=(null) inode=14616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=106 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PATH item=107 name=(null) inode=14617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:57:21.450000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:57:21.482098 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Apr 12 18:57:21.482320 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 12 18:57:21.493099 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:57:21.534681 kernel: kvm: Nested Virtualization enabled Apr 12 18:57:21.534764 kernel: SVM: kvm: Nested Paging enabled Apr 12 18:57:21.534779 kernel: SVM: Virtual VMLOAD VMSAVE supported Apr 12 18:57:21.534833 kernel: SVM: Virtual GIF supported Apr 12 18:57:21.550092 kernel: EDAC MC: Ver: 3.0.0 Apr 12 18:57:21.572429 systemd[1]: Finished systemd-udev-settle.service. Apr 12 18:57:21.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.574421 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:57:21.581231 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:57:21.605795 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:57:21.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:57:21.606829 systemd[1]: Reached target cryptsetup.target. Apr 12 18:57:21.608532 systemd[1]: Starting lvm2-activation.service... Apr 12 18:57:21.611335 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:57:21.633665 systemd[1]: Finished lvm2-activation.service. Apr 12 18:57:21.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.634579 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:57:21.635435 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:57:21.635457 systemd[1]: Reached target local-fs.target. Apr 12 18:57:21.636267 systemd[1]: Reached target machines.target. Apr 12 18:57:21.638174 systemd[1]: Starting ldconfig.service... Apr 12 18:57:21.639051 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 18:57:21.639119 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:57:21.639909 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:57:21.641478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:57:21.643459 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:57:21.644557 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:57:21.644593 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:57:21.645668 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:57:21.649268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Apr 12 18:57:21.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.650769 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1049 (bootctl) Apr 12 18:57:21.651661 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:57:21.657781 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:57:21.659104 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:57:21.660406 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:57:21.685055 systemd-fsck[1057]: fsck.fat 4.2 (2021-01-31) Apr 12 18:57:21.685055 systemd-fsck[1057]: /dev/vda1: 789 files, 119240/258078 clusters Apr 12 18:57:21.686859 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 18:57:21.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.689527 systemd[1]: Mounting boot.mount... Apr 12 18:57:21.701240 systemd[1]: Mounted boot.mount. Apr 12 18:57:21.711861 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:57:21.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.989984 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Apr 12 18:57:21.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:21.992685 systemd[1]: Starting audit-rules.service... Apr 12 18:57:21.994251 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:57:21.996369 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:57:21.998000 audit: BPF prog-id=24 op=LOAD Apr 12 18:57:22.000000 audit: BPF prog-id=25 op=LOAD Apr 12 18:57:21.998804 systemd[1]: Starting systemd-resolved.service... Apr 12 18:57:22.001130 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:57:22.003729 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:57:22.004977 systemd[1]: Finished clean-ca-certificates.service. Apr 12 18:57:22.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:22.006279 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:57:22.010000 audit[1071]: SYSTEM_BOOT pid=1071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:57:22.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:22.012710 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:57:22.016339 systemd[1]: Finished systemd-journal-catalog-update.service. 
Apr 12 18:57:22.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:57:22.026000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:57:22.026000 audit[1080]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcaa44a4c0 a2=420 a3=0 items=0 ppid=1060 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:57:22.026000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:57:22.026545 augenrules[1080]: No rules Apr 12 18:57:22.026869 systemd[1]: Finished audit-rules.service. Apr 12 18:57:22.028551 ldconfig[1048]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:57:22.047276 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:57:22.975677 systemd[1]: Reached target time-set.target. Apr 12 18:57:22.975734 systemd-timesyncd[1070]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 12 18:57:22.975768 systemd-timesyncd[1070]: Initial clock synchronization to Fri 2024-04-12 18:57:22.975672 UTC. Apr 12 18:57:23.329362 systemd-resolved[1064]: Positive Trust Anchors: Apr 12 18:57:23.329376 systemd-resolved[1064]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:57:23.329414 systemd-resolved[1064]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:57:23.336242 systemd-resolved[1064]: Defaulting to hostname 'linux'. Apr 12 18:57:23.337826 systemd[1]: Started systemd-resolved.service. Apr 12 18:57:23.338836 systemd[1]: Reached target network.target. Apr 12 18:57:23.339644 systemd[1]: Reached target nss-lookup.target. Apr 12 18:57:23.387301 systemd[1]: Finished ldconfig.service. Apr 12 18:57:23.389967 systemd[1]: Starting systemd-update-done.service... Apr 12 18:57:23.400714 systemd[1]: Finished systemd-update-done.service. Apr 12 18:57:23.401783 systemd[1]: Reached target sysinit.target. Apr 12 18:57:23.402789 systemd[1]: Started motdgen.path. Apr 12 18:57:23.403629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:57:23.405016 systemd[1]: Started logrotate.timer. Apr 12 18:57:23.405846 systemd[1]: Started mdadm.timer. Apr 12 18:57:23.406564 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:57:23.407463 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:57:23.407490 systemd[1]: Reached target paths.target. Apr 12 18:57:23.408269 systemd[1]: Reached target timers.target. Apr 12 18:57:23.409340 systemd[1]: Listening on dbus.socket. Apr 12 18:57:23.411174 systemd[1]: Starting docker.socket... Apr 12 18:57:23.413999 systemd[1]: Listening on sshd.socket. 
Apr 12 18:57:23.414866 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:57:23.416136 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:57:23.416647 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:57:23.417732 systemd[1]: Listening on docker.socket. Apr 12 18:57:23.418636 systemd[1]: Reached target sockets.target. Apr 12 18:57:23.419481 systemd[1]: Reached target basic.target. Apr 12 18:57:23.420326 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:57:23.420352 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:57:23.421265 systemd[1]: Starting containerd.service... Apr 12 18:57:23.423332 systemd[1]: Starting dbus.service... Apr 12 18:57:23.424972 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:57:23.426718 systemd[1]: Starting extend-filesystems.service... Apr 12 18:57:23.427693 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:57:23.428704 systemd[1]: Starting motdgen.service... Apr 12 18:57:23.429644 jq[1094]: false Apr 12 18:57:23.431020 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:57:23.432837 systemd[1]: Starting prepare-critools.service... Apr 12 18:57:23.434555 systemd[1]: Starting prepare-helm.service... 
Apr 12 18:57:23.435184 extend-filesystems[1095]: Found sr0 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda1 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda2 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda3 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found usr Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda4 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda6 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda7 Apr 12 18:57:23.435184 extend-filesystems[1095]: Found vda9 Apr 12 18:57:23.435184 extend-filesystems[1095]: Checking size of /dev/vda9 Apr 12 18:57:23.437982 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:57:23.439210 systemd[1]: Starting sshd-keygen.service... Apr 12 18:57:23.445978 systemd[1]: Starting systemd-logind.service... Apr 12 18:57:23.446530 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:57:23.446575 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:57:23.446970 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 18:57:23.447529 systemd[1]: Starting update-engine.service... Apr 12 18:57:23.488630 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 12 18:57:23.488991 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 12 18:57:23.489018 extend-filesystems[1095]: Resized partition /dev/vda9 Apr 12 18:57:23.471191 dbus-daemon[1093]: [system] SELinux support is enabled Apr 12 18:57:23.450647 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Apr 12 18:57:23.508000 update_engine[1113]: I0412 18:57:23.506117 1113 main.cc:92] Flatcar Update Engine starting
Apr 12 18:57:23.508204 extend-filesystems[1119]: resize2fs 1.46.5 (30-Dec-2021)
Apr 12 18:57:23.453166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 12 18:57:23.518666 jq[1115]: true
Apr 12 18:57:23.518825 extend-filesystems[1119]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 12 18:57:23.518825 extend-filesystems[1119]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 12 18:57:23.518825 extend-filesystems[1119]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 12 18:57:23.522890 update_engine[1113]: I0412 18:57:23.513693 1113 update_check_scheduler.cc:74] Next update check in 7m38s
Apr 12 18:57:23.453725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Apr 12 18:57:23.523073 extend-filesystems[1095]: Resized filesystem in /dev/vda9
Apr 12 18:57:23.458904 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 12 18:57:23.524152 tar[1121]: crictl
Apr 12 18:57:23.459090 systemd[1]: Finished ssh-key-proc-cmdline.service.
Apr 12 18:57:23.524440 tar[1120]: ./
Apr 12 18:57:23.524440 tar[1120]: ./loopback
Apr 12 18:57:23.471203 systemd[1]: motdgen.service: Deactivated successfully.
Apr 12 18:57:23.524680 tar[1122]: linux-amd64/helm
Apr 12 18:57:23.471367 systemd[1]: Finished motdgen.service.
Apr 12 18:57:23.524885 jq[1125]: true
Apr 12 18:57:23.472478 systemd[1]: Started dbus.service.
Apr 12 18:57:23.477009 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 12 18:57:23.477031 systemd[1]: Reached target system-config.target.
Apr 12 18:57:23.478640 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 12 18:57:23.478685 systemd[1]: Reached target user-config.target. Apr 12 18:57:23.506175 systemd-logind[1112]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:57:23.506193 systemd-logind[1112]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:57:23.506465 systemd-logind[1112]: New seat seat0. Apr 12 18:57:23.508348 systemd[1]: Started systemd-logind.service. Apr 12 18:57:23.510552 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:57:23.510703 systemd[1]: Finished extend-filesystems.service. Apr 12 18:57:23.513143 systemd[1]: Started update-engine.service. Apr 12 18:57:23.515957 systemd[1]: Started locksmithd.service. Apr 12 18:57:23.534641 bash[1150]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:57:23.535264 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:57:23.536170 env[1126]: time="2024-04-12T18:57:23.536124433Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:57:23.567997 tar[1120]: ./bandwidth Apr 12 18:57:23.599326 tar[1120]: ./ptp Apr 12 18:57:23.607555 env[1126]: time="2024-04-12T18:57:23.607502778Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:57:23.607740 env[1126]: time="2024-04-12T18:57:23.607709466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:57:23.609707 env[1126]: time="2024-04-12T18:57:23.609669432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:57:23.609750 env[1126]: time="2024-04-12T18:57:23.609706061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:57:23.609961 env[1126]: time="2024-04-12T18:57:23.609928839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:57:23.610010 env[1126]: time="2024-04-12T18:57:23.609958685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:57:23.610010 env[1126]: time="2024-04-12T18:57:23.609977771Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:57:23.610010 env[1126]: time="2024-04-12T18:57:23.609991286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:57:23.610100 env[1126]: time="2024-04-12T18:57:23.610071727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:57:23.610329 env[1126]: time="2024-04-12T18:57:23.610299855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:57:23.610485 env[1126]: time="2024-04-12T18:57:23.610454255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:57:23.610485 env[1126]: time="2024-04-12T18:57:23.610480824Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 12 18:57:23.610552 env[1126]: time="2024-04-12T18:57:23.610533273Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:57:23.610552 env[1126]: time="2024-04-12T18:57:23.610546888Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:57:23.618277 env[1126]: time="2024-04-12T18:57:23.618243345Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:57:23.618319 env[1126]: time="2024-04-12T18:57:23.618282518Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:57:23.618319 env[1126]: time="2024-04-12T18:57:23.618299771Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:57:23.618360 env[1126]: time="2024-04-12T18:57:23.618339315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618380 env[1126]: time="2024-04-12T18:57:23.618356156Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618380 env[1126]: time="2024-04-12T18:57:23.618371725Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618512 env[1126]: time="2024-04-12T18:57:23.618403846Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618512 env[1126]: time="2024-04-12T18:57:23.618421940Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618512 env[1126]: time="2024-04-12T18:57:23.618479488Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Apr 12 18:57:23.618512 env[1126]: time="2024-04-12T18:57:23.618498533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618598 env[1126]: time="2024-04-12T18:57:23.618515896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.618598 env[1126]: time="2024-04-12T18:57:23.618532858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:57:23.618641 env[1126]: time="2024-04-12T18:57:23.618632404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:57:23.618737 env[1126]: time="2024-04-12T18:57:23.618711843Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:57:23.618995 env[1126]: time="2024-04-12T18:57:23.618968876Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:57:23.619041 env[1126]: time="2024-04-12T18:57:23.619003731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619041 env[1126]: time="2024-04-12T18:57:23.619019921Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:57:23.619084 env[1126]: time="2024-04-12T18:57:23.619069404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619104 env[1126]: time="2024-04-12T18:57:23.619087708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619130 env[1126]: time="2024-04-12T18:57:23.619103608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 12 18:57:23.619130 env[1126]: time="2024-04-12T18:57:23.619119057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619168 env[1126]: time="2024-04-12T18:57:23.619133645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619168 env[1126]: time="2024-04-12T18:57:23.619148993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619168 env[1126]: time="2024-04-12T18:57:23.619161797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619224 env[1126]: time="2024-04-12T18:57:23.619176575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619224 env[1126]: time="2024-04-12T18:57:23.619192174Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:57:23.619357 env[1126]: time="2024-04-12T18:57:23.619318602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619357 env[1126]: time="2024-04-12T18:57:23.619346033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619431 env[1126]: time="2024-04-12T18:57:23.619360029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619431 env[1126]: time="2024-04-12T18:57:23.619374627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:57:23.619431 env[1126]: time="2024-04-12T18:57:23.619404533Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:57:23.619431 env[1126]: time="2024-04-12T18:57:23.619417988Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:57:23.619518 env[1126]: time="2024-04-12T18:57:23.619440691Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:57:23.619518 env[1126]: time="2024-04-12T18:57:23.619478281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 18:57:23.619760 env[1126]: time="2024-04-12T18:57:23.619690028Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:57:23.620361 env[1126]: time="2024-04-12T18:57:23.619762534Z" level=info msg="Connect containerd service" Apr 12 18:57:23.620361 env[1126]: time="2024-04-12T18:57:23.619795266Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:57:23.620438 env[1126]: time="2024-04-12T18:57:23.620353122Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:57:23.620626 env[1126]: time="2024-04-12T18:57:23.620599594Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:57:23.620681 env[1126]: time="2024-04-12T18:57:23.620644368Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:57:23.620799 systemd[1]: Started containerd.service. 
Apr 12 18:57:23.621729 env[1126]: time="2024-04-12T18:57:23.620829486Z" level=info msg="Start subscribing containerd event"
Apr 12 18:57:23.621729 env[1126]: time="2024-04-12T18:57:23.621367976Z" level=info msg="containerd successfully booted in 0.093780s"
Apr 12 18:57:23.622033 env[1126]: time="2024-04-12T18:57:23.621816637Z" level=info msg="Start recovering state"
Apr 12 18:57:23.622033 env[1126]: time="2024-04-12T18:57:23.621884645Z" level=info msg="Start event monitor"
Apr 12 18:57:23.622033 env[1126]: time="2024-04-12T18:57:23.621907888Z" level=info msg="Start snapshots syncer"
Apr 12 18:57:23.622033 env[1126]: time="2024-04-12T18:57:23.621929719Z" level=info msg="Start cni network conf syncer for default"
Apr 12 18:57:23.622033 env[1126]: time="2024-04-12T18:57:23.621935851Z" level=info msg="Start streaming server"
Apr 12 18:57:23.634655 tar[1120]: ./vlan
Apr 12 18:57:23.668893 tar[1120]: ./host-device
Apr 12 18:57:23.701299 tar[1120]: ./tuning
Apr 12 18:57:23.732407 tar[1120]: ./vrf
Apr 12 18:57:23.763965 tar[1120]: ./sbr
Apr 12 18:57:23.793724 tar[1120]: ./tap
Apr 12 18:57:23.817494 systemd-networkd[1021]: eth0: Gained IPv6LL
Apr 12 18:57:23.828133 tar[1120]: ./dhcp
Apr 12 18:57:23.913294 tar[1120]: ./static
Apr 12 18:57:23.933733 tar[1122]: linux-amd64/LICENSE
Apr 12 18:57:23.933889 tar[1122]: linux-amd64/README.md
Apr 12 18:57:23.937059 tar[1120]: ./firewall
Apr 12 18:57:23.938143 systemd[1]: Finished prepare-helm.service.
Apr 12 18:57:23.951852 locksmithd[1153]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 12 18:57:23.969986 systemd[1]: Finished prepare-critools.service.
Apr 12 18:57:23.973128 tar[1120]: ./macvlan
Apr 12 18:57:24.002325 tar[1120]: ./dummy
Apr 12 18:57:24.031192 tar[1120]: ./bridge
Apr 12 18:57:24.062787 tar[1120]: ./ipvlan
Apr 12 18:57:24.091915 tar[1120]: ./portmap
Apr 12 18:57:24.119489 tar[1120]: ./host-local
Apr 12 18:57:24.151833 systemd[1]: Finished prepare-cni-plugins.service.
Apr 12 18:57:24.649180 sshd_keygen[1111]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 12 18:57:24.666229 systemd[1]: Finished sshd-keygen.service.
Apr 12 18:57:24.668283 systemd[1]: Starting issuegen.service...
Apr 12 18:57:24.672875 systemd[1]: issuegen.service: Deactivated successfully.
Apr 12 18:57:24.672999 systemd[1]: Finished issuegen.service.
Apr 12 18:57:24.674770 systemd[1]: Starting systemd-user-sessions.service...
Apr 12 18:57:24.679551 systemd[1]: Finished systemd-user-sessions.service.
Apr 12 18:57:24.681496 systemd[1]: Started getty@tty1.service.
Apr 12 18:57:24.683180 systemd[1]: Started serial-getty@ttyS0.service.
Apr 12 18:57:24.684154 systemd[1]: Reached target getty.target.
Apr 12 18:57:24.684964 systemd[1]: Reached target multi-user.target.
Apr 12 18:57:24.686623 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Apr 12 18:57:24.693557 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Apr 12 18:57:24.693714 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Apr 12 18:57:24.694808 systemd[1]: Startup finished in 556ms (kernel) + 5.977s (initrd) + 6.125s (userspace) = 12.659s.
Apr 12 18:57:32.569128 systemd[1]: Created slice system-sshd.slice.
Apr 12 18:57:32.570037 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:46022.service.
Apr 12 18:57:32.606164 sshd[1183]: Accepted publickey for core from 10.0.0.1 port 46022 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:57:32.607186 sshd[1183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:57:32.614365 systemd-logind[1112]: New session 1 of user core.
Apr 12 18:57:32.615155 systemd[1]: Created slice user-500.slice.
Apr 12 18:57:32.615971 systemd[1]: Starting user-runtime-dir@500.service...
Apr 12 18:57:32.621963 systemd[1]: Finished user-runtime-dir@500.service.
Apr 12 18:57:32.622918 systemd[1]: Starting user@500.service...
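(Annotation, not part of the captured log.) The `Startup finished in 556ms (kernel) + 5.977s (initrd) + 6.125s (userspace) = 12.659s.` entry can be decomposed programmatically; a minimal sketch (the helper name `startup_components` is mine). Note the per-stage values sum to 12.658 s, one millisecond off the logged total, because systemd rounds each stage independently:

```python
import re

def startup_components(line):
    """Parse a systemd 'Startup finished in ...' line into {stage: seconds}."""
    parts = {}
    for value, unit, stage in re.findall(r"([\d.]+)(ms|s) \((\w+)\)", line):
        parts[stage] = float(value) / (1000 if unit == "ms" else 1)
    return parts

line = "Startup finished in 556ms (kernel) + 5.977s (initrd) + 6.125s (userspace) = 12.659s."
parts = startup_components(line)
total = sum(parts.values())  # 0.556 + 5.977 + 6.125 = 12.658
print(parts, total)
```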
Apr 12 18:57:32.625265 (systemd)[1186]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:57:32.687424 systemd[1186]: Queued start job for default target default.target.
Apr 12 18:57:32.687767 systemd[1186]: Reached target paths.target.
Apr 12 18:57:32.687784 systemd[1186]: Reached target sockets.target.
Apr 12 18:57:32.687795 systemd[1186]: Reached target timers.target.
Apr 12 18:57:32.687805 systemd[1186]: Reached target basic.target.
Apr 12 18:57:32.687835 systemd[1186]: Reached target default.target.
Apr 12 18:57:32.687855 systemd[1186]: Startup finished in 57ms.
Apr 12 18:57:32.687917 systemd[1]: Started user@500.service.
Apr 12 18:57:32.688785 systemd[1]: Started session-1.scope.
Apr 12 18:57:32.737724 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:46028.service.
Apr 12 18:57:32.774188 sshd[1195]: Accepted publickey for core from 10.0.0.1 port 46028 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:57:32.775400 sshd[1195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:57:32.778449 systemd-logind[1112]: New session 2 of user core.
Apr 12 18:57:32.779106 systemd[1]: Started session-2.scope.
Apr 12 18:57:32.831093 sshd[1195]: pam_unix(sshd:session): session closed for user core
Apr 12 18:57:32.833350 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:46028.service: Deactivated successfully.
Apr 12 18:57:32.833830 systemd[1]: session-2.scope: Deactivated successfully.
Apr 12 18:57:32.834220 systemd-logind[1112]: Session 2 logged out. Waiting for processes to exit.
Apr 12 18:57:32.835260 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:46042.service.
Apr 12 18:57:32.835858 systemd-logind[1112]: Removed session 2.
Apr 12 18:57:32.868731 sshd[1201]: Accepted publickey for core from 10.0.0.1 port 46042 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:57:32.869732 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:57:32.872621 systemd-logind[1112]: New session 3 of user core.
Apr 12 18:57:32.873360 systemd[1]: Started session-3.scope.
Apr 12 18:57:32.920871 sshd[1201]: pam_unix(sshd:session): session closed for user core
Apr 12 18:57:32.922978 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:46042.service: Deactivated successfully.
Apr 12 18:57:32.923464 systemd[1]: session-3.scope: Deactivated successfully.
Apr 12 18:57:32.923873 systemd-logind[1112]: Session 3 logged out. Waiting for processes to exit.
Apr 12 18:57:32.924732 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:46054.service.
Apr 12 18:57:32.925290 systemd-logind[1112]: Removed session 3.
Apr 12 18:57:32.957935 sshd[1207]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:57:32.958820 sshd[1207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:57:32.961565 systemd-logind[1112]: New session 4 of user core.
Apr 12 18:57:32.962250 systemd[1]: Started session-4.scope.
Apr 12 18:57:33.013661 sshd[1207]: pam_unix(sshd:session): session closed for user core
Apr 12 18:57:33.015917 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:46054.service: Deactivated successfully.
Apr 12 18:57:33.016349 systemd[1]: session-4.scope: Deactivated successfully.
Apr 12 18:57:33.016981 systemd-logind[1112]: Session 4 logged out. Waiting for processes to exit.
Apr 12 18:57:33.017826 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:46060.service.
Apr 12 18:57:33.018449 systemd-logind[1112]: Removed session 4.
Apr 12 18:57:33.054593 sshd[1213]: Accepted publickey for core from 10.0.0.1 port 46060 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:57:33.055685 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:57:33.058491 systemd-logind[1112]: New session 5 of user core.
Apr 12 18:57:33.059163 systemd[1]: Started session-5.scope.
Apr 12 18:57:33.112879 sudo[1216]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 12 18:57:33.113035 sudo[1216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Apr 12 18:57:33.651017 systemd[1]: Starting systemd-networkd-wait-online.service...
Apr 12 18:57:33.655960 systemd[1]: Finished systemd-networkd-wait-online.service.
Apr 12 18:57:33.656242 systemd[1]: Reached target network-online.target.
Apr 12 18:57:33.657462 systemd[1]: Starting docker.service...
Apr 12 18:57:33.689819 env[1234]: time="2024-04-12T18:57:33.689768968Z" level=info msg="Starting up"
Apr 12 18:57:33.691016 env[1234]: time="2024-04-12T18:57:33.690978847Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 12 18:57:33.691016 env[1234]: time="2024-04-12T18:57:33.691001049Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 12 18:57:33.691016 env[1234]: time="2024-04-12T18:57:33.691018702Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Apr 12 18:57:33.691202 env[1234]: time="2024-04-12T18:57:33.691027759Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 12 18:57:33.692378 env[1234]: time="2024-04-12T18:57:33.692350059Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 12 18:57:33.692378 env[1234]: time="2024-04-12T18:57:33.692366751Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 12 18:57:33.692478 env[1234]: time="2024-04-12T18:57:33.692383192Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Apr 12 18:57:33.692478 env[1234]: time="2024-04-12T18:57:33.692402929Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 12 18:57:34.423263 env[1234]: time="2024-04-12T18:57:34.423225373Z" level=info msg="Loading containers: start."
Apr 12 18:57:34.513414 kernel: Initializing XFRM netlink socket
Apr 12 18:57:34.538612 env[1234]: time="2024-04-12T18:57:34.538577891Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 18:57:34.581615 systemd-networkd[1021]: docker0: Link UP
Apr 12 18:57:34.589574 env[1234]: time="2024-04-12T18:57:34.589538298Z" level=info msg="Loading containers: done."
Apr 12 18:57:34.596589 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1921000658-merged.mount: Deactivated successfully.
Apr 12 18:57:34.598830 env[1234]: time="2024-04-12T18:57:34.598795722Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 12 18:57:34.598946 env[1234]: time="2024-04-12T18:57:34.598923943Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Apr 12 18:57:34.599015 env[1234]: time="2024-04-12T18:57:34.598993604Z" level=info msg="Daemon has completed initialization"
Apr 12 18:57:34.614028 systemd[1]: Started docker.service.
Apr 12 18:57:34.617442 env[1234]: time="2024-04-12T18:57:34.617404738Z" level=info msg="API listen on /run/docker.sock"
Apr 12 18:57:34.631674 systemd[1]: Reloading.
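(Annotation, not part of the captured log.) The daemon log notes that docker0 defaults to 172.17.0.0/16 unless `--bip` overrides it; a minimal sketch with the stdlib `ipaddress` module showing what that subnet gives the host:

```python
import ipaddress

# Default docker0 subnet reported in the dockerd log above.
bridge = ipaddress.ip_network("172.17.0.0/16")
gateway = next(bridge.hosts())  # first usable address; docker0 itself takes it

print(bridge.num_addresses)  # 65536
print(gateway)               # 172.17.0.1
```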
Apr 12 18:57:34.694686 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2024-04-12T18:57:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:57:34.695246 /usr/lib/systemd/system-generators/torcx-generator[1377]: time="2024-04-12T18:57:34Z" level=info msg="torcx already run"
Apr 12 18:57:34.752465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:57:34.752480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:57:34.770835 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:57:34.835813 systemd[1]: Started kubelet.service.
Apr 12 18:57:34.878188 kubelet[1417]: E0412 18:57:34.878118 1417 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Apr 12 18:57:34.879685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:57:34.879789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
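(Annotation, not part of the captured log.) The kubelet failure above uses the klog prefix format `E0412 18:57:34.878118 1417 run.go:74]` (severity letter, MMDD, time, pid, source file:line). A minimal parsing sketch (the helper name `parse_klog_header` and the trimmed sample line are mine):

```python
import re

# klog prefix: severity (I/W/E/F), MMDD, wall time, pid, source file:line, closing ']'
KLOG = re.compile(r'([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+):(\d+)\]')

def parse_klog_header(line):
    """Split a klog-style prefix from a kubelet log line; returns None if absent."""
    m = KLOG.search(line)
    if not m:
        return None
    sev, date, time_, pid, src, lineno = m.groups()
    return {"severity": sev, "date": date, "time": time_,
            "pid": int(pid), "source": f"{src}:{lineno}"}

sample = 'E0412 18:57:34.878118 1417 run.go:74] "command failed"'
print(parse_klog_header(sample))
```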
Apr 12 18:57:35.230936 env[1126]: time="2024-04-12T18:57:35.230896473Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\""
Apr 12 18:57:35.767648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831979046.mount: Deactivated successfully.
Apr 12 18:57:37.336368 env[1126]: time="2024-04-12T18:57:37.336310498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:37.338056 env[1126]: time="2024-04-12T18:57:37.338005697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:37.339621 env[1126]: time="2024-04-12T18:57:37.339592404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:37.343061 env[1126]: time="2024-04-12T18:57:37.343034229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cf0c29f585316888225cf254949988bdbedc7ba6238bc9a24bf6f0c508c42b6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:37.344017 env[1126]: time="2024-04-12T18:57:37.343986516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\" returns image reference \"sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809\""
Apr 12 18:57:37.351652 env[1126]: time="2024-04-12T18:57:37.351625364Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\""
Apr 12 18:57:39.464375 env[1126]: time="2024-04-12T18:57:39.464319574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:39.466294 env[1126]: time="2024-04-12T18:57:39.466248623Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:39.467763 env[1126]: time="2024-04-12T18:57:39.467732636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:39.470050 env[1126]: time="2024-04-12T18:57:39.470016370Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6caa3a4278e87169371d031861e49db21742bcbd8df650d7fe519a1a7f6764af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:39.470649 env[1126]: time="2024-04-12T18:57:39.470611186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\" returns image reference \"sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7\""
Apr 12 18:57:39.478800 env[1126]: time="2024-04-12T18:57:39.478765110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\""
Apr 12 18:57:41.035871 env[1126]: time="2024-04-12T18:57:41.035809861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:41.037701 env[1126]: time="2024-04-12T18:57:41.037661474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:41.039328 env[1126]: time="2024-04-12T18:57:41.039300879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:41.040802 env[1126]: time="2024-04-12T18:57:41.040768171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b8bb7b17a4f915419575ceb885e128d0bb5ea8e67cb88dbde257988b770a4dce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:41.041422 env[1126]: time="2024-04-12T18:57:41.041381882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\" returns image reference \"sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833\""
Apr 12 18:57:41.048807 env[1126]: time="2024-04-12T18:57:41.048785720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\""
Apr 12 18:57:42.134410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480276138.mount: Deactivated successfully.
Apr 12 18:57:42.628531 env[1126]: time="2024-04-12T18:57:42.628404844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:42.630186 env[1126]: time="2024-04-12T18:57:42.630146621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:42.631469 env[1126]: time="2024-04-12T18:57:42.631440257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:42.632695 env[1126]: time="2024-04-12T18:57:42.632665496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b0539f35b586abc54ca7660f9bb8a539d010b9e07d20e9e3d529cf0ca35d4ddf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:42.633061 env[1126]: time="2024-04-12T18:57:42.633028246Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\" returns image reference \"sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d\""
Apr 12 18:57:42.641037 env[1126]: time="2024-04-12T18:57:42.641016510Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 12 18:57:43.220179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003991587.mount: Deactivated successfully.
Apr 12 18:57:43.225113 env[1126]: time="2024-04-12T18:57:43.225080697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:43.226708 env[1126]: time="2024-04-12T18:57:43.226661192Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:43.228012 env[1126]: time="2024-04-12T18:57:43.227986678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:43.229337 env[1126]: time="2024-04-12T18:57:43.229300773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:43.230576 env[1126]: time="2024-04-12T18:57:43.230541620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 12 18:57:43.239755 env[1126]: time="2024-04-12T18:57:43.239718233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\""
Apr 12 18:57:44.090013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027307941.mount: Deactivated successfully.
Apr 12 18:57:45.130472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:57:45.130646 systemd[1]: Stopped kubelet.service.
Apr 12 18:57:45.131846 systemd[1]: Started kubelet.service.
Apr 12 18:57:45.169279 kubelet[1470]: E0412 18:57:45.169219 1470 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Apr 12 18:57:45.172263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:57:45.172402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 12 18:57:49.020356 env[1126]: time="2024-04-12T18:57:49.020301816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:49.022058 env[1126]: time="2024-04-12T18:57:49.022018826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:49.023552 env[1126]: time="2024-04-12T18:57:49.023521094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:49.025941 env[1126]: time="2024-04-12T18:57:49.025913502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:49.026527 env[1126]: time="2024-04-12T18:57:49.026498749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\""
Apr 12 18:57:49.035340 env[1126]: time="2024-04-12T18:57:49.035295890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Apr 12 18:57:49.663812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1808310702.mount: Deactivated successfully.
Apr 12 18:57:50.295847 env[1126]: time="2024-04-12T18:57:50.295799304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:50.297492 env[1126]: time="2024-04-12T18:57:50.297461242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:50.298865 env[1126]: time="2024-04-12T18:57:50.298845238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:50.300036 env[1126]: time="2024-04-12T18:57:50.300010263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:57:50.300466 env[1126]: time="2024-04-12T18:57:50.300422226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Apr 12 18:57:52.737956 systemd[1]: Stopped kubelet.service.
Apr 12 18:57:52.749282 systemd[1]: Reloading.
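[Editor's note] The kubelet failure at 18:57:45 (run.go:74, exit status 1) is the expected crash-loop before kubeadm has written /var/lib/kubelet/config.yaml; systemd keeps scheduling restarts until the file appears. A minimal sketch of that precondition check, using a scratch directory rather than the real /var/lib/kubelet:

```shell
# Editor's sketch of the failure mode logged by run.go:74 above: kubelet
# exits whenever its config file is absent. A scratch directory
# (hypothetical path) stands in for /var/lib/kubelet here.
kubelet_dir=$(mktemp -d)
config="$kubelet_dir/config.yaml"

if [ -f "$config" ]; then
  status="config present"
else
  # Matches the log: open /var/lib/kubelet/config.yaml: no such file or directory
  status="config missing"
fi
echo "$status"
```

On a kubeadm-bootstrapped node the file is normally created by `kubeadm init` or `kubeadm join`, which is consistent with the unit recovering later in this log.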
Apr 12 18:57:52.818898 /usr/lib/systemd/system-generators/torcx-generator[1584]: time="2024-04-12T18:57:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:57:52.818926 /usr/lib/systemd/system-generators/torcx-generator[1584]: time="2024-04-12T18:57:52Z" level=info msg="torcx already run"
Apr 12 18:57:52.876622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:57:52.876639 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:57:52.895085 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:57:52.967791 systemd[1]: Started kubelet.service.
Apr 12 18:57:53.010197 kubelet[1625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:57:53.010622 kubelet[1625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 12 18:57:53.010622 kubelet[1625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
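[Editor's note] The two warnings about locksmithd.service refer to cgroup-v1 directives that systemd maps to their cgroup-v2 equivalents. A drop-in sketch of the suggested migration (the weight and limit values below are hypothetical, not taken from this system):

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
# Editor's sketch: clear the deprecated directives flagged in the reload
# above and set their modern equivalents.
[Service]
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=512M
```

Emptying a directive first (`CPUShares=`) resets any value inherited from the vendor unit before the replacement is applied.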
Apr 12 18:57:53.010781 kubelet[1625]: I0412 18:57:53.010668 1625 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 12 18:57:53.344622 kubelet[1625]: I0412 18:57:53.344532 1625 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Apr 12 18:57:53.344622 kubelet[1625]: I0412 18:57:53.344558 1625 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 12 18:57:53.344779 kubelet[1625]: I0412 18:57:53.344755 1625 server.go:837] "Client rotation is on, will bootstrap in background"
Apr 12 18:57:53.347824 kubelet[1625]: I0412 18:57:53.347790 1625 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 12 18:57:53.348425 kubelet[1625]: E0412 18:57:53.348404 1625 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.351729 kubelet[1625]: I0412 18:57:53.351705 1625 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 12 18:57:53.352113 kubelet[1625]: I0412 18:57:53.352092 1625 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 12 18:57:53.352192 kubelet[1625]: I0412 18:57:53.352169 1625 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Apr 12 18:57:53.352192 kubelet[1625]: I0412 18:57:53.352192 1625 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Apr 12 18:57:53.352299 kubelet[1625]: I0412 18:57:53.352202 1625 container_manager_linux.go:302] "Creating device plugin manager"
Apr 12 18:57:53.352299 kubelet[1625]: I0412 18:57:53.352272 1625 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:57:53.355876 kubelet[1625]: I0412 18:57:53.355859 1625 kubelet.go:405] "Attempting to sync node with API server"
Apr 12 18:57:53.355943 kubelet[1625]: I0412 18:57:53.355882 1625 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 12 18:57:53.355943 kubelet[1625]: I0412 18:57:53.355900 1625 kubelet.go:309] "Adding apiserver pod source"
Apr 12 18:57:53.355943 kubelet[1625]: I0412 18:57:53.355912 1625 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 12 18:57:53.356477 kubelet[1625]: W0412 18:57:53.356432 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.356477 kubelet[1625]: E0412 18:57:53.356481 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.356612 kubelet[1625]: I0412 18:57:53.356501 1625 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Apr 12 18:57:53.356612 kubelet[1625]: W0412 18:57:53.356508 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.356612 kubelet[1625]: E0412 18:57:53.356551 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.356787 kubelet[1625]: W0412 18:57:53.356754 1625 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 12 18:57:53.357235 kubelet[1625]: I0412 18:57:53.357222 1625 server.go:1168] "Started kubelet"
Apr 12 18:57:53.357383 kubelet[1625]: I0412 18:57:53.357364 1625 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Apr 12 18:57:53.357514 kubelet[1625]: I0412 18:57:53.357486 1625 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Apr 12 18:57:53.357576 kubelet[1625]: E0412 18:57:53.357464 1625 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17c59d6108130c34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 57, 53, 357196340, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 57, 53, 357196340, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.142:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.142:6443: connect: connection refused'(may retry after sleeping)
Apr 12 18:57:53.358193 kubelet[1625]: E0412 18:57:53.358170 1625 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Apr 12 18:57:53.358246 kubelet[1625]: E0412 18:57:53.358202 1625 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 12 18:57:53.358468 kubelet[1625]: I0412 18:57:53.358442 1625 server.go:461] "Adding debug handlers to kubelet server"
Apr 12 18:57:53.360741 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Apr 12 18:57:53.360888 kubelet[1625]: I0412 18:57:53.360871 1625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 12 18:57:53.361089 kubelet[1625]: I0412 18:57:53.361059 1625 volume_manager.go:284] "Starting Kubelet Volume Manager"
Apr 12 18:57:53.361179 kubelet[1625]: I0412 18:57:53.361152 1625 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Apr 12 18:57:53.361363 kubelet[1625]: E0412 18:57:53.361338 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms"
Apr 12 18:57:53.361531 kubelet[1625]: E0412 18:57:53.361160 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 12 18:57:53.361613 kubelet[1625]: W0412 18:57:53.361438 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.361708 kubelet[1625]: E0412 18:57:53.361693 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.372512 kubelet[1625]: I0412 18:57:53.372490 1625 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Apr 12 18:57:53.373459 kubelet[1625]: I0412 18:57:53.373445 1625 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Apr 12 18:57:53.373556 kubelet[1625]: I0412 18:57:53.373541 1625 status_manager.go:207] "Starting to sync pod status with apiserver"
Apr 12 18:57:53.373637 kubelet[1625]: I0412 18:57:53.373623 1625 kubelet.go:2257] "Starting kubelet main sync loop"
Apr 12 18:57:53.373764 kubelet[1625]: E0412 18:57:53.373736 1625 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 12 18:57:53.377556 kubelet[1625]: W0412 18:57:53.377504 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.377556 kubelet[1625]: E0412 18:57:53.377549 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:53.378496 kubelet[1625]: I0412 18:57:53.378476 1625 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 12 18:57:53.378605 kubelet[1625]: I0412 18:57:53.378590 1625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 12 18:57:53.378684 kubelet[1625]: I0412 18:57:53.378671 1625 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:57:53.382563 kubelet[1625]: I0412 18:57:53.382536 1625 policy_none.go:49] "None policy: Start"
Apr 12 18:57:53.383001 kubelet[1625]: I0412 18:57:53.382971 1625 memory_manager.go:169] "Starting memorymanager" policy="None"
Apr 12 18:57:53.383001 kubelet[1625]: I0412 18:57:53.382994 1625 state_mem.go:35] "Initializing new in-memory state store"
Apr 12 18:57:53.387371 systemd[1]: Created slice kubepods.slice.
Apr 12 18:57:53.390459 systemd[1]: Created slice kubepods-burstable.slice.
Apr 12 18:57:53.392605 systemd[1]: Created slice kubepods-besteffort.slice.
Apr 12 18:57:53.404061 kubelet[1625]: I0412 18:57:53.404033 1625 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 12 18:57:53.404323 kubelet[1625]: I0412 18:57:53.404296 1625 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 12 18:57:53.404636 kubelet[1625]: E0412 18:57:53.404599 1625 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 12 18:57:53.462872 kubelet[1625]: I0412 18:57:53.462854 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Apr 12 18:57:53.463220 kubelet[1625]: E0412 18:57:53.463202 1625 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Apr 12 18:57:53.474292 kubelet[1625]: I0412 18:57:53.474259 1625 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:57:53.475044 kubelet[1625]: I0412 18:57:53.475006 1625 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:57:53.475610 kubelet[1625]: I0412 18:57:53.475580 1625 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:57:53.479848 systemd[1]: Created slice kubepods-burstable-podf9e08da5b52f1db9fbf9c01a698b7e66.slice.
Apr 12 18:57:53.497317 systemd[1]: Created slice kubepods-burstable-podb23ea803843027eb81926493bf073366.slice.
Apr 12 18:57:53.504964 systemd[1]: Created slice kubepods-burstable-pod2f7d78630cba827a770c684e2dbe6ce6.slice.
Apr 12 18:57:53.561939 kubelet[1625]: E0412 18:57:53.561903 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms"
Apr 12 18:57:53.662206 kubelet[1625]: I0412 18:57:53.662132 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9e08da5b52f1db9fbf9c01a698b7e66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9e08da5b52f1db9fbf9c01a698b7e66\") " pod="kube-system/kube-apiserver-localhost"
Apr 12 18:57:53.662206 kubelet[1625]: I0412 18:57:53.662186 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost"
Apr 12 18:57:53.662296 kubelet[1625]: I0412 18:57:53.662215 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:57:53.662296 kubelet[1625]: I0412 18:57:53.662251 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:57:53.662296 kubelet[1625]: I0412 18:57:53.662276 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:57:53.662427 kubelet[1625]: I0412 18:57:53.662301 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:57:53.662427 kubelet[1625]: I0412 18:57:53.662323 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9e08da5b52f1db9fbf9c01a698b7e66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9e08da5b52f1db9fbf9c01a698b7e66\") " pod="kube-system/kube-apiserver-localhost"
Apr 12 18:57:53.662427 kubelet[1625]: I0412 18:57:53.662349 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9e08da5b52f1db9fbf9c01a698b7e66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f9e08da5b52f1db9fbf9c01a698b7e66\") " pod="kube-system/kube-apiserver-localhost"
Apr 12 18:57:53.662427 kubelet[1625]: I0412 18:57:53.662369 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost"
Apr 12 18:57:53.664853 kubelet[1625]: I0412 18:57:53.664827 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Apr 12 18:57:53.665065 kubelet[1625]: E0412 18:57:53.665048 1625 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Apr 12 18:57:53.795688 kubelet[1625]: E0412 18:57:53.795656 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:57:53.796188 env[1126]: time="2024-04-12T18:57:53.796140694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f9e08da5b52f1db9fbf9c01a698b7e66,Namespace:kube-system,Attempt:0,}"
Apr 12 18:57:53.804260 kubelet[1625]: E0412 18:57:53.804238 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:57:53.804541 env[1126]: time="2024-04-12T18:57:53.804512558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,}"
Apr 12 18:57:53.806788 kubelet[1625]: E0412 18:57:53.806761 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:57:53.807130 env[1126]: time="2024-04-12T18:57:53.807087618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,}"
Apr 12 18:57:53.963204 kubelet[1625]: E0412 18:57:53.963143 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms"
Apr 12 18:57:54.066406 kubelet[1625]: I0412 18:57:54.066356 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Apr 12 18:57:54.066693 kubelet[1625]: E0412 18:57:54.066674 1625 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Apr 12 18:57:54.214643 kubelet[1625]: W0412 18:57:54.214517 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.214643 kubelet[1625]: E0412 18:57:54.214598 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.270067 kubelet[1625]: W0412 18:57:54.270036 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.270122 kubelet[1625]: E0412 18:57:54.270070 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.375482 kubelet[1625]: W0412 18:57:54.375428 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.375482 kubelet[1625]: E0412 18:57:54.375484 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.388576 kubelet[1625]: W0412 18:57:54.388541 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.388576 kubelet[1625]: E0412 18:57:54.388579 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 12 18:57:54.559420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701728301.mount: Deactivated successfully.
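[Editor's note] Every reflector and node-registration failure in this stretch of the log fails the same way. Collapsing the messages by dial target shows a single root cause (the API server at 10.0.0.142:6443 is not up yet) rather than many distinct problems. A sketch of that triage over two representative messages copied from above:

```shell
# Editor's sketch: extract the unique "dial tcp ...: connection refused"
# endpoints from sample kubelet log lines (abridged copies from above).
log='E0412 18:57:54.214598 reflector.go:148] Failed to watch *v1.RuntimeClass: dial tcp 10.0.0.142:6443: connect: connection refused
E0412 18:57:54.388579 reflector.go:148] Failed to watch *v1.Service: dial tcp 10.0.0.142:6443: connect: connection refused'

endpoints=$(printf '%s\n' "$log" \
  | sed -n 's/.*dial tcp \([^:]*:[0-9]*\): connect: connection refused.*/\1/p' \
  | sort -u)
echo "$endpoints"
```

Run against the full journal (e.g. piped from `journalctl -u kubelet`), a single surviving endpoint is strong evidence the node is merely waiting for its own static-pod API server to come up.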
Apr 12 18:57:54.564811 env[1126]: time="2024-04-12T18:57:54.564774822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.566365 env[1126]: time="2024-04-12T18:57:54.566331723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.567768 env[1126]: time="2024-04-12T18:57:54.567747568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.568615 env[1126]: time="2024-04-12T18:57:54.568583947Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.570091 env[1126]: time="2024-04-12T18:57:54.570055317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.571177 env[1126]: time="2024-04-12T18:57:54.571152515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.572304 env[1126]: time="2024-04-12T18:57:54.572270772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.573465 env[1126]: time="2024-04-12T18:57:54.573435688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.575205 env[1126]: time="2024-04-12T18:57:54.575176843Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.576961 env[1126]: time="2024-04-12T18:57:54.576921887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.578151 env[1126]: time="2024-04-12T18:57:54.578112470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.578656 env[1126]: time="2024-04-12T18:57:54.578630702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:54.593090 env[1126]: time="2024-04-12T18:57:54.593028879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:57:54.593220 env[1126]: time="2024-04-12T18:57:54.593069435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:57:54.593220 env[1126]: time="2024-04-12T18:57:54.593080776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:57:54.593220 env[1126]: time="2024-04-12T18:57:54.593194660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f3b7eed1cf938ad0b4a718d51b3e20b764ae37297a9af35210f5e2e374754a7 pid=1665 runtime=io.containerd.runc.v2 Apr 12 18:57:54.596047 env[1126]: time="2024-04-12T18:57:54.596000122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:57:54.596047 env[1126]: time="2024-04-12T18:57:54.596034907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:57:54.596132 env[1126]: time="2024-04-12T18:57:54.596044956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:57:54.596338 env[1126]: time="2024-04-12T18:57:54.596293863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ac189e9ca619674ab93bd6d36c8bae8cb54f7e29936d346496fcf0b16f98fbd pid=1680 runtime=io.containerd.runc.v2 Apr 12 18:57:54.601459 env[1126]: time="2024-04-12T18:57:54.601406944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:57:54.601627 env[1126]: time="2024-04-12T18:57:54.601434275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:57:54.601627 env[1126]: time="2024-04-12T18:57:54.601444284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:57:54.601627 env[1126]: time="2024-04-12T18:57:54.601521849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df4ee360693cb641bfd005d4a1fa27296a057244c1fb1e172b44acd3acd31198 pid=1710 runtime=io.containerd.runc.v2 Apr 12 18:57:54.605867 systemd[1]: Started cri-containerd-1f3b7eed1cf938ad0b4a718d51b3e20b764ae37297a9af35210f5e2e374754a7.scope. Apr 12 18:57:54.607869 systemd[1]: Started cri-containerd-2ac189e9ca619674ab93bd6d36c8bae8cb54f7e29936d346496fcf0b16f98fbd.scope. Apr 12 18:57:54.632158 systemd[1]: Started cri-containerd-df4ee360693cb641bfd005d4a1fa27296a057244c1fb1e172b44acd3acd31198.scope. Apr 12 18:57:54.645888 env[1126]: time="2024-04-12T18:57:54.645854474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2f7d78630cba827a770c684e2dbe6ce6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3b7eed1cf938ad0b4a718d51b3e20b764ae37297a9af35210f5e2e374754a7\"" Apr 12 18:57:54.646865 kubelet[1625]: E0412 18:57:54.646727 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:54.648714 env[1126]: time="2024-04-12T18:57:54.648694812Z" level=info msg="CreateContainer within sandbox \"1f3b7eed1cf938ad0b4a718d51b3e20b764ae37297a9af35210f5e2e374754a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:57:54.652964 env[1126]: time="2024-04-12T18:57:54.652592002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b23ea803843027eb81926493bf073366,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ac189e9ca619674ab93bd6d36c8bae8cb54f7e29936d346496fcf0b16f98fbd\"" Apr 12 18:57:54.653019 kubelet[1625]: E0412 18:57:54.652848 1625 event.go:289] Unable to write event: 
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17c59d6108130c34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 57, 53, 357196340, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 57, 53, 357196340, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.142:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.142:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:57:54.653360 kubelet[1625]: E0412 18:57:54.653334 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:54.655410 env[1126]: time="2024-04-12T18:57:54.655380012Z" level=info msg="CreateContainer within sandbox \"2ac189e9ca619674ab93bd6d36c8bae8cb54f7e29936d346496fcf0b16f98fbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:57:54.666659 env[1126]: time="2024-04-12T18:57:54.666611008Z" level=info msg="CreateContainer within sandbox \"1f3b7eed1cf938ad0b4a718d51b3e20b764ae37297a9af35210f5e2e374754a7\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0247cd56a396c51b6b5e6e14b823369b4cebf8b4d16d8149104af4bb3ca4f510\"" Apr 12 18:57:54.667430 env[1126]: time="2024-04-12T18:57:54.667399547Z" level=info msg="StartContainer for \"0247cd56a396c51b6b5e6e14b823369b4cebf8b4d16d8149104af4bb3ca4f510\"" Apr 12 18:57:54.668600 env[1126]: time="2024-04-12T18:57:54.668575523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f9e08da5b52f1db9fbf9c01a698b7e66,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4ee360693cb641bfd005d4a1fa27296a057244c1fb1e172b44acd3acd31198\"" Apr 12 18:57:54.669042 kubelet[1625]: E0412 18:57:54.669009 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:54.670376 env[1126]: time="2024-04-12T18:57:54.670355802Z" level=info msg="CreateContainer within sandbox \"df4ee360693cb641bfd005d4a1fa27296a057244c1fb1e172b44acd3acd31198\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:57:54.677502 env[1126]: time="2024-04-12T18:57:54.677466319Z" level=info msg="CreateContainer within sandbox \"2ac189e9ca619674ab93bd6d36c8bae8cb54f7e29936d346496fcf0b16f98fbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f3a881bc31818ab876798fbb042e4cbd6599502fc1973ebd32d07436f7777ec\"" Apr 12 18:57:54.677835 env[1126]: time="2024-04-12T18:57:54.677811146Z" level=info msg="StartContainer for \"4f3a881bc31818ab876798fbb042e4cbd6599502fc1973ebd32d07436f7777ec\"" Apr 12 18:57:54.687683 systemd[1]: Started cri-containerd-0247cd56a396c51b6b5e6e14b823369b4cebf8b4d16d8149104af4bb3ca4f510.scope. 
Apr 12 18:57:54.688618 env[1126]: time="2024-04-12T18:57:54.687499789Z" level=info msg="CreateContainer within sandbox \"df4ee360693cb641bfd005d4a1fa27296a057244c1fb1e172b44acd3acd31198\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24194270ed072096ef679c258e965b715898c28304a2aea9cbc8a7387114a5da\"" Apr 12 18:57:54.689970 env[1126]: time="2024-04-12T18:57:54.689909008Z" level=info msg="StartContainer for \"24194270ed072096ef679c258e965b715898c28304a2aea9cbc8a7387114a5da\"" Apr 12 18:57:54.692804 systemd[1]: Started cri-containerd-4f3a881bc31818ab876798fbb042e4cbd6599502fc1973ebd32d07436f7777ec.scope. Apr 12 18:57:54.705364 systemd[1]: Started cri-containerd-24194270ed072096ef679c258e965b715898c28304a2aea9cbc8a7387114a5da.scope. Apr 12 18:57:54.728786 env[1126]: time="2024-04-12T18:57:54.728727040Z" level=info msg="StartContainer for \"0247cd56a396c51b6b5e6e14b823369b4cebf8b4d16d8149104af4bb3ca4f510\" returns successfully" Apr 12 18:57:54.740573 env[1126]: time="2024-04-12T18:57:54.739669354Z" level=info msg="StartContainer for \"4f3a881bc31818ab876798fbb042e4cbd6599502fc1973ebd32d07436f7777ec\" returns successfully" Apr 12 18:57:54.751964 env[1126]: time="2024-04-12T18:57:54.751913351Z" level=info msg="StartContainer for \"24194270ed072096ef679c258e965b715898c28304a2aea9cbc8a7387114a5da\" returns successfully" Apr 12 18:57:54.763613 kubelet[1625]: E0412 18:57:54.763578 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Apr 12 18:57:54.868058 kubelet[1625]: I0412 18:57:54.867939 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:57:55.385083 kubelet[1625]: E0412 18:57:55.385016 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:55.387519 kubelet[1625]: E0412 18:57:55.387485 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:55.393275 kubelet[1625]: E0412 18:57:55.393263 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:55.815054 kubelet[1625]: I0412 18:57:55.815013 1625 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Apr 12 18:57:55.820978 kubelet[1625]: E0412 18:57:55.820939 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:55.921621 kubelet[1625]: E0412 18:57:55.921583 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.022205 kubelet[1625]: E0412 18:57:56.022161 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.123376 kubelet[1625]: E0412 18:57:56.123287 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.223807 kubelet[1625]: E0412 18:57:56.223780 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.324364 kubelet[1625]: E0412 18:57:56.324323 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.390851 kubelet[1625]: E0412 18:57:56.390747 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:56.424750 kubelet[1625]: E0412 
18:57:56.424719 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.525250 kubelet[1625]: E0412 18:57:56.525217 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.593228 kubelet[1625]: E0412 18:57:56.593204 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:56.626032 kubelet[1625]: E0412 18:57:56.626007 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.726372 kubelet[1625]: E0412 18:57:56.726268 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.826835 kubelet[1625]: E0412 18:57:56.826786 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:56.927099 kubelet[1625]: E0412 18:57:56.927057 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 12 18:57:57.358639 kubelet[1625]: I0412 18:57:57.358595 1625 apiserver.go:52] "Watching apiserver" Apr 12 18:57:57.361491 kubelet[1625]: I0412 18:57:57.361463 1625 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:57:57.382329 kubelet[1625]: I0412 18:57:57.382281 1625 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:57:58.029640 systemd[1]: Reloading. 
Apr 12 18:57:58.097320 /usr/lib/systemd/system-generators/torcx-generator[1921]: time="2024-04-12T18:57:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:57:58.097347 /usr/lib/systemd/system-generators/torcx-generator[1921]: time="2024-04-12T18:57:58Z" level=info msg="torcx already run" Apr 12 18:57:58.154174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:57:58.154190 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:57:58.172817 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:57:58.255062 kubelet[1625]: I0412 18:57:58.255041 1625 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:57:58.255099 systemd[1]: Stopping kubelet.service... Apr 12 18:57:58.273724 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:57:58.274016 systemd[1]: Stopped kubelet.service. Apr 12 18:57:58.275747 systemd[1]: Started kubelet.service. Apr 12 18:57:58.319832 kubelet[1961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:57:58.319832 kubelet[1961]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Apr 12 18:57:58.319832 kubelet[1961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:57:58.320241 kubelet[1961]: I0412 18:57:58.319821 1961 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:57:58.323714 kubelet[1961]: I0412 18:57:58.323688 1961 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:57:58.323714 kubelet[1961]: I0412 18:57:58.323705 1961 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:57:58.323856 kubelet[1961]: I0412 18:57:58.323840 1961 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:57:58.325099 kubelet[1961]: I0412 18:57:58.325079 1961 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:57:58.325912 kubelet[1961]: I0412 18:57:58.325894 1961 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:57:58.329556 kubelet[1961]: I0412 18:57:58.329513 1961 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:57:58.329771 kubelet[1961]: I0412 18:57:58.329748 1961 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:57:58.329850 kubelet[1961]: I0412 18:57:58.329832 1961 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:57:58.329934 kubelet[1961]: I0412 18:57:58.329855 1961 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:57:58.329934 kubelet[1961]: I0412 18:57:58.329869 1961 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:57:58.329934 kubelet[1961]: I0412 18:57:58.329898 1961 state_mem.go:36] "Initialized new in-memory state store" Apr 12 
18:57:58.332544 kubelet[1961]: I0412 18:57:58.332528 1961 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:57:58.332544 kubelet[1961]: I0412 18:57:58.332547 1961 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:57:58.332618 kubelet[1961]: I0412 18:57:58.332564 1961 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:57:58.332618 kubelet[1961]: I0412 18:57:58.332577 1961 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:57:58.333436 kubelet[1961]: I0412 18:57:58.333413 1961 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:57:58.333949 kubelet[1961]: I0412 18:57:58.333925 1961 server.go:1168] "Started kubelet" Apr 12 18:57:58.338299 kubelet[1961]: E0412 18:57:58.338018 1961 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:57:58.338411 kubelet[1961]: E0412 18:57:58.338375 1961 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:57:58.340251 kubelet[1961]: I0412 18:57:58.340216 1961 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:57:58.341876 kubelet[1961]: I0412 18:57:58.341852 1961 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:57:58.344352 kubelet[1961]: I0412 18:57:58.344334 1961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:57:58.346200 kubelet[1961]: I0412 18:57:58.346186 1961 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:57:58.347708 kubelet[1961]: I0412 18:57:58.347694 1961 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:57:58.347892 kubelet[1961]: I0412 18:57:58.347878 1961 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:57:58.356366 kubelet[1961]: I0412 18:57:58.356353 1961 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:57:58.357126 kubelet[1961]: I0412 18:57:58.357111 1961 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Apr 12 18:57:58.357235 kubelet[1961]: I0412 18:57:58.357221 1961 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:57:58.357311 kubelet[1961]: I0412 18:57:58.357297 1961 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:57:58.357433 kubelet[1961]: E0412 18:57:58.357418 1961 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:57:58.384246 kubelet[1961]: I0412 18:57:58.384215 1961 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:57:58.384246 kubelet[1961]: I0412 18:57:58.384237 1961 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:57:58.384321 kubelet[1961]: I0412 18:57:58.384251 1961 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:57:58.384404 kubelet[1961]: I0412 18:57:58.384372 1961 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:57:58.384404 kubelet[1961]: I0412 18:57:58.384401 1961 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Apr 12 18:57:58.384404 kubelet[1961]: I0412 18:57:58.384407 1961 policy_none.go:49] "None policy: Start" Apr 12 18:57:58.384995 kubelet[1961]: I0412 18:57:58.384969 1961 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:57:58.384995 kubelet[1961]: I0412 18:57:58.384992 1961 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:57:58.385131 kubelet[1961]: I0412 18:57:58.385107 1961 state_mem.go:75] "Updated machine memory state" Apr 12 18:57:58.388202 kubelet[1961]: I0412 18:57:58.388179 1961 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:57:58.388414 kubelet[1961]: I0412 18:57:58.388382 1961 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:57:58.418489 sudo[1991]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf 
/opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:57:58.418890 sudo[1991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:57:58.450646 kubelet[1961]: I0412 18:57:58.450631 1961 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Apr 12 18:57:58.455763 kubelet[1961]: I0412 18:57:58.455731 1961 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Apr 12 18:57:58.455814 kubelet[1961]: I0412 18:57:58.455789 1961 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Apr 12 18:57:58.458373 kubelet[1961]: I0412 18:57:58.458340 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:57:58.458508 kubelet[1961]: I0412 18:57:58.458424 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:57:58.458508 kubelet[1961]: I0412 18:57:58.458455 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:57:58.649805 kubelet[1961]: I0412 18:57:58.649699 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:57:58.649805 kubelet[1961]: I0412 18:57:58.649735 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:57:58.649805 kubelet[1961]: I0412 18:57:58.649756 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9e08da5b52f1db9fbf9c01a698b7e66-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"f9e08da5b52f1db9fbf9c01a698b7e66\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:57:58.649805 kubelet[1961]: I0412 18:57:58.649775 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9e08da5b52f1db9fbf9c01a698b7e66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f9e08da5b52f1db9fbf9c01a698b7e66\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:57:58.649805 kubelet[1961]: I0412 18:57:58.649799 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9e08da5b52f1db9fbf9c01a698b7e66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f9e08da5b52f1db9fbf9c01a698b7e66\") " pod="kube-system/kube-apiserver-localhost" Apr 12 18:57:58.650046 kubelet[1961]: I0412 18:57:58.649816 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:57:58.650046 kubelet[1961]: I0412 18:57:58.649834 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:57:58.650046 kubelet[1961]: I0412 18:57:58.649852 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f7d78630cba827a770c684e2dbe6ce6-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"2f7d78630cba827a770c684e2dbe6ce6\") " pod="kube-system/kube-scheduler-localhost" Apr 12 18:57:58.650046 kubelet[1961]: I0412 18:57:58.649875 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23ea803843027eb81926493bf073366-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b23ea803843027eb81926493bf073366\") " pod="kube-system/kube-controller-manager-localhost" Apr 12 18:57:58.766404 kubelet[1961]: E0412 18:57:58.766365 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:58.766642 kubelet[1961]: E0412 18:57:58.766619 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:58.767244 kubelet[1961]: E0412 18:57:58.767228 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:58.862007 sudo[1991]: pam_unix(sudo:session): session closed for user root Apr 12 18:57:59.333160 kubelet[1961]: I0412 18:57:59.333119 1961 apiserver.go:52] "Watching apiserver" Apr 12 18:57:59.348230 kubelet[1961]: I0412 18:57:59.348195 1961 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:57:59.354335 kubelet[1961]: I0412 18:57:59.354299 1961 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:57:59.367928 kubelet[1961]: E0412 18:57:59.367903 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:59.368047 kubelet[1961]: E0412 18:57:59.368021 1961 
dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:59.368272 kubelet[1961]: E0412 18:57:59.368247 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:57:59.380578 kubelet[1961]: I0412 18:57:59.380557 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.380524214 podCreationTimestamp="2024-04-12 18:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:57:59.380413141 +0000 UTC m=+1.102299794" watchObservedRunningTime="2024-04-12 18:57:59.380524214 +0000 UTC m=+1.102410857" Apr 12 18:57:59.389829 kubelet[1961]: I0412 18:57:59.389797 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.389760933 podCreationTimestamp="2024-04-12 18:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:57:59.385332127 +0000 UTC m=+1.107218780" watchObservedRunningTime="2024-04-12 18:57:59.389760933 +0000 UTC m=+1.111647586" Apr 12 18:57:59.394366 kubelet[1961]: I0412 18:57:59.394333 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3943012430000001 podCreationTimestamp="2024-04-12 18:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:57:59.389985924 +0000 UTC m=+1.111872577" watchObservedRunningTime="2024-04-12 18:57:59.394301243 +0000 UTC m=+1.116187896" 
Apr 12 18:57:59.888063 sudo[1216]: pam_unix(sudo:session): session closed for user root Apr 12 18:57:59.889404 sshd[1213]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:59.891727 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:46060.service: Deactivated successfully. Apr 12 18:57:59.892401 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:57:59.892557 systemd[1]: session-5.scope: Consumed 3.858s CPU time. Apr 12 18:57:59.893136 systemd-logind[1112]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:57:59.893813 systemd-logind[1112]: Removed session 5. Apr 12 18:58:00.368998 kubelet[1961]: E0412 18:58:00.368966 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:00.369323 kubelet[1961]: E0412 18:58:00.369107 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:01.370039 kubelet[1961]: E0412 18:58:01.370003 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:06.887374 kubelet[1961]: E0412 18:58:06.887343 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:07.378237 kubelet[1961]: E0412 18:58:07.378194 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:09.045486 update_engine[1113]: I0412 18:58:09.045437 1113 update_attempter.cc:509] Updating boot flags... 
Apr 12 18:58:09.536754 kubelet[1961]: E0412 18:58:09.536731 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:10.914365 kubelet[1961]: E0412 18:58:10.914330 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:11.383178 kubelet[1961]: E0412 18:58:11.383147 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:12.881439 kubelet[1961]: I0412 18:58:12.881414 1961 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:58:12.881932 env[1126]: time="2024-04-12T18:58:12.881847264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:58:12.882199 kubelet[1961]: I0412 18:58:12.882177 1961 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:58:12.930415 kubelet[1961]: I0412 18:58:12.930368 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:58:12.933294 kubelet[1961]: I0412 18:58:12.933259 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:58:12.937080 systemd[1]: Created slice kubepods-besteffort-pod1e432761_8410_4e0d_84af_4af0d81b7f36.slice. Apr 12 18:58:12.945519 systemd[1]: Created slice kubepods-burstable-pod3ed6edd5_9a19_4a45_b47e_2d30e1511fe5.slice. Apr 12 18:58:12.988235 kubelet[1961]: I0412 18:58:12.988202 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:58:12.993779 systemd[1]: Created slice kubepods-besteffort-pod8a0d6105_f6ba_407d_8a45_f80f3006a912.slice. 
Apr 12 18:58:13.025785 kubelet[1961]: I0412 18:58:13.025735 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a0d6105-f6ba-407d-8a45-f80f3006a912-cilium-config-path\") pod \"cilium-operator-574c4bb98d-v7468\" (UID: \"8a0d6105-f6ba-407d-8a45-f80f3006a912\") " pod="kube-system/cilium-operator-574c4bb98d-v7468" Apr 12 18:58:13.025785 kubelet[1961]: I0412 18:58:13.025792 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-xtables-lock\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026002 kubelet[1961]: I0412 18:58:13.025820 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcxf\" (UniqueName: \"kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-kube-api-access-mlcxf\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026002 kubelet[1961]: I0412 18:58:13.025841 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-run\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026002 kubelet[1961]: I0412 18:58:13.025864 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-cgroup\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026002 kubelet[1961]: I0412 18:58:13.025887 1961 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cni-path\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026002 kubelet[1961]: I0412 18:58:13.025908 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-net\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026002 kubelet[1961]: I0412 18:58:13.025930 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-clustermesh-secrets\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026226 kubelet[1961]: I0412 18:58:13.025949 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-bpf-maps\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026226 kubelet[1961]: I0412 18:58:13.025973 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hostproc\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026226 kubelet[1961]: I0412 18:58:13.025998 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hubble-tls\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026226 kubelet[1961]: I0412 18:58:13.026022 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e432761-8410-4e0d-84af-4af0d81b7f36-kube-proxy\") pod \"kube-proxy-jm6c9\" (UID: \"1e432761-8410-4e0d-84af-4af0d81b7f36\") " pod="kube-system/kube-proxy-jm6c9" Apr 12 18:58:13.026226 kubelet[1961]: I0412 18:58:13.026047 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e432761-8410-4e0d-84af-4af0d81b7f36-lib-modules\") pod \"kube-proxy-jm6c9\" (UID: \"1e432761-8410-4e0d-84af-4af0d81b7f36\") " pod="kube-system/kube-proxy-jm6c9" Apr 12 18:58:13.026226 kubelet[1961]: I0412 18:58:13.026072 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-etc-cni-netd\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026515 kubelet[1961]: I0412 18:58:13.026096 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-config-path\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026515 kubelet[1961]: I0412 18:58:13.026123 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57dd\" (UniqueName: \"kubernetes.io/projected/1e432761-8410-4e0d-84af-4af0d81b7f36-kube-api-access-m57dd\") pod \"kube-proxy-jm6c9\" 
(UID: \"1e432761-8410-4e0d-84af-4af0d81b7f36\") " pod="kube-system/kube-proxy-jm6c9" Apr 12 18:58:13.026515 kubelet[1961]: I0412 18:58:13.026156 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-lib-modules\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026515 kubelet[1961]: I0412 18:58:13.026183 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-kernel\") pod \"cilium-lkt7j\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") " pod="kube-system/cilium-lkt7j" Apr 12 18:58:13.026515 kubelet[1961]: I0412 18:58:13.026225 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjs2v\" (UniqueName: \"kubernetes.io/projected/8a0d6105-f6ba-407d-8a45-f80f3006a912-kube-api-access-gjs2v\") pod \"cilium-operator-574c4bb98d-v7468\" (UID: \"8a0d6105-f6ba-407d-8a45-f80f3006a912\") " pod="kube-system/cilium-operator-574c4bb98d-v7468" Apr 12 18:58:13.026690 kubelet[1961]: I0412 18:58:13.026250 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e432761-8410-4e0d-84af-4af0d81b7f36-xtables-lock\") pod \"kube-proxy-jm6c9\" (UID: \"1e432761-8410-4e0d-84af-4af0d81b7f36\") " pod="kube-system/kube-proxy-jm6c9" Apr 12 18:58:13.244264 kubelet[1961]: E0412 18:58:13.244232 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:13.244892 env[1126]: time="2024-04-12T18:58:13.244844804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-jm6c9,Uid:1e432761-8410-4e0d-84af-4af0d81b7f36,Namespace:kube-system,Attempt:0,}" Apr 12 18:58:13.247912 kubelet[1961]: E0412 18:58:13.247900 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:13.248526 env[1126]: time="2024-04-12T18:58:13.248178071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkt7j,Uid:3ed6edd5-9a19-4a45-b47e-2d30e1511fe5,Namespace:kube-system,Attempt:0,}" Apr 12 18:58:13.263816 env[1126]: time="2024-04-12T18:58:13.263567038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:58:13.263816 env[1126]: time="2024-04-12T18:58:13.263636860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:58:13.263816 env[1126]: time="2024-04-12T18:58:13.263650196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:58:13.263976 env[1126]: time="2024-04-12T18:58:13.263806471Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4eec5b4fedd712d2b3cae8b8c4c671364ed099e0739ea2e83e40bd92879707 pid=2070 runtime=io.containerd.runc.v2 Apr 12 18:58:13.274984 systemd[1]: Started cri-containerd-da4eec5b4fedd712d2b3cae8b8c4c671364ed099e0739ea2e83e40bd92879707.scope. Apr 12 18:58:13.276645 env[1126]: time="2024-04-12T18:58:13.276279141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:58:13.276645 env[1126]: time="2024-04-12T18:58:13.276374131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:58:13.276645 env[1126]: time="2024-04-12T18:58:13.276414166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:58:13.277537 env[1126]: time="2024-04-12T18:58:13.276677505Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956 pid=2095 runtime=io.containerd.runc.v2 Apr 12 18:58:13.288312 systemd[1]: Started cri-containerd-1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956.scope. Apr 12 18:58:13.296576 kubelet[1961]: E0412 18:58:13.296541 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:13.300632 env[1126]: time="2024-04-12T18:58:13.300588367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-v7468,Uid:8a0d6105-f6ba-407d-8a45-f80f3006a912,Namespace:kube-system,Attempt:0,}" Apr 12 18:58:13.307753 env[1126]: time="2024-04-12T18:58:13.307691286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jm6c9,Uid:1e432761-8410-4e0d-84af-4af0d81b7f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"da4eec5b4fedd712d2b3cae8b8c4c671364ed099e0739ea2e83e40bd92879707\"" Apr 12 18:58:13.308635 kubelet[1961]: E0412 18:58:13.308605 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:13.317136 env[1126]: time="2024-04-12T18:58:13.317083337Z" level=info msg="CreateContainer within sandbox \"da4eec5b4fedd712d2b3cae8b8c4c671364ed099e0739ea2e83e40bd92879707\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:58:13.321024 env[1126]: 
time="2024-04-12T18:58:13.320890741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkt7j,Uid:3ed6edd5-9a19-4a45-b47e-2d30e1511fe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\"" Apr 12 18:58:13.321332 kubelet[1961]: E0412 18:58:13.321307 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:13.324096 env[1126]: time="2024-04-12T18:58:13.324047715Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:58:13.335755 env[1126]: time="2024-04-12T18:58:13.333083020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:58:13.335755 env[1126]: time="2024-04-12T18:58:13.333155477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:58:13.335755 env[1126]: time="2024-04-12T18:58:13.333169373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:58:13.335755 env[1126]: time="2024-04-12T18:58:13.333419496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da pid=2150 runtime=io.containerd.runc.v2 Apr 12 18:58:13.340737 env[1126]: time="2024-04-12T18:58:13.340697588Z" level=info msg="CreateContainer within sandbox \"da4eec5b4fedd712d2b3cae8b8c4c671364ed099e0739ea2e83e40bd92879707\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0f7381f1587c58a79f97b5954180fc2ded1a3021321ad476757bac7a05f0598d\"" Apr 12 18:58:13.341351 env[1126]: time="2024-04-12T18:58:13.341300399Z" level=info msg="StartContainer for \"0f7381f1587c58a79f97b5954180fc2ded1a3021321ad476757bac7a05f0598d\"" Apr 12 18:58:13.349509 systemd[1]: Started cri-containerd-d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da.scope. Apr 12 18:58:13.366522 systemd[1]: Started cri-containerd-0f7381f1587c58a79f97b5954180fc2ded1a3021321ad476757bac7a05f0598d.scope. 
Apr 12 18:58:13.394092 env[1126]: time="2024-04-12T18:58:13.392870605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-v7468,Uid:8a0d6105-f6ba-407d-8a45-f80f3006a912,Namespace:kube-system,Attempt:0,} returns sandbox id \"d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da\"" Apr 12 18:58:13.394253 kubelet[1961]: E0412 18:58:13.393365 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:13.404681 env[1126]: time="2024-04-12T18:58:13.404625226Z" level=info msg="StartContainer for \"0f7381f1587c58a79f97b5954180fc2ded1a3021321ad476757bac7a05f0598d\" returns successfully" Apr 12 18:58:14.392241 kubelet[1961]: E0412 18:58:14.392211 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:15.395648 kubelet[1961]: E0412 18:58:15.395603 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:25.508775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578017600.mount: Deactivated successfully. Apr 12 18:58:25.974937 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:46450.service. Apr 12 18:58:26.016602 sshd[2340]: Accepted publickey for core from 10.0.0.1 port 46450 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:26.017912 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:26.021413 systemd-logind[1112]: New session 6 of user core. Apr 12 18:58:26.022351 systemd[1]: Started session-6.scope. 
Apr 12 18:58:26.132489 sshd[2340]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:26.134223 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:46450.service: Deactivated successfully. Apr 12 18:58:26.134840 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:58:26.135300 systemd-logind[1112]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:58:26.135930 systemd-logind[1112]: Removed session 6. Apr 12 18:58:29.419113 env[1126]: time="2024-04-12T18:58:29.419054401Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:58:29.421044 env[1126]: time="2024-04-12T18:58:29.421000994Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:58:29.422676 env[1126]: time="2024-04-12T18:58:29.422634758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:58:29.423283 env[1126]: time="2024-04-12T18:58:29.423244075Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 18:58:29.424423 env[1126]: time="2024-04-12T18:58:29.424366566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:58:29.425120 env[1126]: time="2024-04-12T18:58:29.425091541Z" level=info msg="CreateContainer within sandbox 
\"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:58:29.436836 env[1126]: time="2024-04-12T18:58:29.436796975Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\"" Apr 12 18:58:29.437319 env[1126]: time="2024-04-12T18:58:29.437292006Z" level=info msg="StartContainer for \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\"" Apr 12 18:58:29.451286 systemd[1]: Started cri-containerd-a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e.scope. Apr 12 18:58:29.485180 systemd[1]: cri-containerd-a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e.scope: Deactivated successfully. Apr 12 18:58:29.640692 env[1126]: time="2024-04-12T18:58:29.640632552Z" level=info msg="StartContainer for \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\" returns successfully" Apr 12 18:58:30.066948 env[1126]: time="2024-04-12T18:58:30.066880418Z" level=info msg="shim disconnected" id=a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e Apr 12 18:58:30.066948 env[1126]: time="2024-04-12T18:58:30.066936363Z" level=warning msg="cleaning up after shim disconnected" id=a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e namespace=k8s.io Apr 12 18:58:30.066948 env[1126]: time="2024-04-12T18:58:30.066945240Z" level=info msg="cleaning up dead shim" Apr 12 18:58:30.072871 env[1126]: time="2024-04-12T18:58:30.072846021Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2403 runtime=io.containerd.runc.v2\n" Apr 12 18:58:30.417089 kubelet[1961]: E0412 18:58:30.416798 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:30.419418 env[1126]: time="2024-04-12T18:58:30.418787742Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:58:30.429038 kubelet[1961]: I0412 18:58:30.429000 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jm6c9" podStartSLOduration=18.42896687 podCreationTimestamp="2024-04-12 18:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:58:14.398694996 +0000 UTC m=+16.120581649" watchObservedRunningTime="2024-04-12 18:58:30.42896687 +0000 UTC m=+32.150853523" Apr 12 18:58:30.433553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e-rootfs.mount: Deactivated successfully. Apr 12 18:58:30.434904 env[1126]: time="2024-04-12T18:58:30.434846472Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\"" Apr 12 18:58:30.435412 env[1126]: time="2024-04-12T18:58:30.435368875Z" level=info msg="StartContainer for \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\"" Apr 12 18:58:30.452480 systemd[1]: run-containerd-runc-k8s.io-13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f-runc.okhORV.mount: Deactivated successfully. Apr 12 18:58:30.455343 systemd[1]: Started cri-containerd-13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f.scope. 
Apr 12 18:58:30.477692 env[1126]: time="2024-04-12T18:58:30.477646205Z" level=info msg="StartContainer for \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\" returns successfully" Apr 12 18:58:30.486975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:58:30.487233 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:58:30.487471 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:58:30.489025 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:58:30.490605 systemd[1]: cri-containerd-13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f.scope: Deactivated successfully. Apr 12 18:58:30.500732 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:58:30.513133 env[1126]: time="2024-04-12T18:58:30.513070796Z" level=info msg="shim disconnected" id=13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f Apr 12 18:58:30.513133 env[1126]: time="2024-04-12T18:58:30.513125058Z" level=warning msg="cleaning up after shim disconnected" id=13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f namespace=k8s.io Apr 12 18:58:30.513284 env[1126]: time="2024-04-12T18:58:30.513136159Z" level=info msg="cleaning up dead shim" Apr 12 18:58:30.519406 env[1126]: time="2024-04-12T18:58:30.519361630Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2466 runtime=io.containerd.runc.v2\n" Apr 12 18:58:31.135730 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:45242.service. Apr 12 18:58:31.171220 sshd[2479]: Accepted publickey for core from 10.0.0.1 port 45242 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:31.172305 sshd[2479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:31.175584 systemd-logind[1112]: New session 7 of user core. Apr 12 18:58:31.176438 systemd[1]: Started session-7.scope. 
Apr 12 18:58:31.281615 sshd[2479]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:31.284133 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:45242.service: Deactivated successfully. Apr 12 18:58:31.284892 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 18:58:31.285938 systemd-logind[1112]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:58:31.286705 systemd-logind[1112]: Removed session 7. Apr 12 18:58:31.419210 kubelet[1961]: E0412 18:58:31.419080 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:31.422022 env[1126]: time="2024-04-12T18:58:31.421952969Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:58:31.433681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f-rootfs.mount: Deactivated successfully. 
Apr 12 18:58:31.926599 env[1126]: time="2024-04-12T18:58:31.926537804Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\"" Apr 12 18:58:31.931519 env[1126]: time="2024-04-12T18:58:31.931488346Z" level=info msg="StartContainer for \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\"" Apr 12 18:58:31.937595 env[1126]: time="2024-04-12T18:58:31.937540359Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:58:31.942006 env[1126]: time="2024-04-12T18:58:31.941985520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:58:31.944172 env[1126]: time="2024-04-12T18:58:31.944140122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:58:31.944564 env[1126]: time="2024-04-12T18:58:31.944525577Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 18:58:31.948164 env[1126]: time="2024-04-12T18:58:31.948116591Z" level=info msg="CreateContainer within sandbox \"d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 12 18:58:31.948954 systemd[1]: Started cri-containerd-fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6.scope. Apr 12 18:58:31.976680 systemd[1]: cri-containerd-fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6.scope: Deactivated successfully. Apr 12 18:58:32.126768 env[1126]: time="2024-04-12T18:58:32.126691463Z" level=info msg="CreateContainer within sandbox \"d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\"" Apr 12 18:58:32.127182 env[1126]: time="2024-04-12T18:58:32.127131811Z" level=info msg="StartContainer for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\"" Apr 12 18:58:32.140305 systemd[1]: Started cri-containerd-6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec.scope. Apr 12 18:58:32.174150 env[1126]: time="2024-04-12T18:58:32.174095036Z" level=info msg="StartContainer for \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\" returns successfully" Apr 12 18:58:32.237320 env[1126]: time="2024-04-12T18:58:32.237198924Z" level=info msg="StartContainer for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" returns successfully" Apr 12 18:58:32.319518 env[1126]: time="2024-04-12T18:58:32.319466160Z" level=info msg="shim disconnected" id=fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6 Apr 12 18:58:32.319518 env[1126]: time="2024-04-12T18:58:32.319511916Z" level=warning msg="cleaning up after shim disconnected" id=fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6 namespace=k8s.io Apr 12 18:58:32.319518 env[1126]: time="2024-04-12T18:58:32.319520191Z" level=info msg="cleaning up dead shim" Apr 12 18:58:32.330134 env[1126]: time="2024-04-12T18:58:32.330080000Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2575 runtime=io.containerd.runc.v2\n"
Apr 12 18:58:32.422585 kubelet[1961]: E0412 18:58:32.422558 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:32.424291 kubelet[1961]: E0412 18:58:32.424266 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:32.424831 env[1126]: time="2024-04-12T18:58:32.424803569Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:58:32.434409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6-rootfs.mount: Deactivated successfully. Apr 12 18:58:32.442749 kubelet[1961]: I0412 18:58:32.442709 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-v7468" podStartSLOduration=1.892414016 podCreationTimestamp="2024-04-12 18:58:12 +0000 UTC" firstStartedPulling="2024-04-12 18:58:13.394498656 +0000 UTC m=+15.116385309" lastFinishedPulling="2024-04-12 18:58:31.944762302 +0000 UTC m=+33.666649136" observedRunningTime="2024-04-12 18:58:32.442473559 +0000 UTC m=+34.164360212" watchObservedRunningTime="2024-04-12 18:58:32.442677843 +0000 UTC m=+34.164564496" Apr 12 18:58:32.443593 env[1126]: time="2024-04-12T18:58:32.443547668Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\"" Apr 12 18:58:32.444090 env[1126]: time="2024-04-12T18:58:32.444053028Z" level=info msg="StartContainer for \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\""
Apr 12 18:58:32.460131 systemd[1]: Started cri-containerd-8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738.scope. Apr 12 18:58:32.479156 systemd[1]: cri-containerd-8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738.scope: Deactivated successfully. Apr 12 18:58:32.480585 env[1126]: time="2024-04-12T18:58:32.480547026Z" level=info msg="StartContainer for \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\" returns successfully" Apr 12 18:58:32.498317 env[1126]: time="2024-04-12T18:58:32.498265937Z" level=info msg="shim disconnected" id=8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738 Apr 12 18:58:32.498317 env[1126]: time="2024-04-12T18:58:32.498309419Z" level=warning msg="cleaning up after shim disconnected" id=8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738 namespace=k8s.io Apr 12 18:58:32.498317 env[1126]: time="2024-04-12T18:58:32.498317895Z" level=info msg="cleaning up dead shim" Apr 12 18:58:32.510994 env[1126]: time="2024-04-12T18:58:32.510554416Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2626 runtime=io.containerd.runc.v2\n" Apr 12 18:58:33.427902 kubelet[1961]: E0412 18:58:33.427587 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:33.427902 kubelet[1961]: E0412 18:58:33.427826 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:33.430299 env[1126]: time="2024-04-12T18:58:33.429528571Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:58:33.433711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738-rootfs.mount: Deactivated successfully. Apr 12 18:58:33.445167 env[1126]: time="2024-04-12T18:58:33.445110660Z" level=info msg="CreateContainer within sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\"" Apr 12 18:58:33.445592 env[1126]: time="2024-04-12T18:58:33.445550527Z" level=info msg="StartContainer for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\"" Apr 12 18:58:33.460475 systemd[1]: Started cri-containerd-1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75.scope. Apr 12 18:58:33.484376 env[1126]: time="2024-04-12T18:58:33.484331694Z" level=info msg="StartContainer for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" returns successfully" Apr 12 18:58:33.633885 kubelet[1961]: I0412 18:58:33.633840 1961 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 18:58:33.647671 kubelet[1961]: I0412 18:58:33.647638 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:58:33.651373 kubelet[1961]: I0412 18:58:33.651009 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:58:33.653320 systemd[1]: Created slice kubepods-burstable-podd74aec7f_9c8d_449b_b733_042641f933d6.slice. Apr 12 18:58:33.659744 systemd[1]: Created slice kubepods-burstable-pod5aa0b02e_85ac_46c2_be11_06a56f71e3d1.slice.
Apr 12 18:58:33.755340 kubelet[1961]: I0412 18:58:33.755306 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa0b02e-85ac-46c2-be11-06a56f71e3d1-config-volume\") pod \"coredns-5d78c9869d-z4b7f\" (UID: \"5aa0b02e-85ac-46c2-be11-06a56f71e3d1\") " pod="kube-system/coredns-5d78c9869d-z4b7f" Apr 12 18:58:33.755340 kubelet[1961]: I0412 18:58:33.755348 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcjk\" (UniqueName: \"kubernetes.io/projected/5aa0b02e-85ac-46c2-be11-06a56f71e3d1-kube-api-access-lfcjk\") pod \"coredns-5d78c9869d-z4b7f\" (UID: \"5aa0b02e-85ac-46c2-be11-06a56f71e3d1\") " pod="kube-system/coredns-5d78c9869d-z4b7f" Apr 12 18:58:33.755498 kubelet[1961]: I0412 18:58:33.755369 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdw7\" (UniqueName: \"kubernetes.io/projected/d74aec7f-9c8d-449b-b733-042641f933d6-kube-api-access-4pdw7\") pod \"coredns-5d78c9869d-xwhmg\" (UID: \"d74aec7f-9c8d-449b-b733-042641f933d6\") " pod="kube-system/coredns-5d78c9869d-xwhmg" Apr 12 18:58:33.755498 kubelet[1961]: I0412 18:58:33.755400 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d74aec7f-9c8d-449b-b733-042641f933d6-config-volume\") pod \"coredns-5d78c9869d-xwhmg\" (UID: \"d74aec7f-9c8d-449b-b733-042641f933d6\") " pod="kube-system/coredns-5d78c9869d-xwhmg" Apr 12 18:58:33.956691 kubelet[1961]: E0412 18:58:33.956659 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:33.957212 env[1126]: time="2024-04-12T18:58:33.957136362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-xwhmg,Uid:d74aec7f-9c8d-449b-b733-042641f933d6,Namespace:kube-system,Attempt:0,}"
Apr 12 18:58:33.962136 kubelet[1961]: E0412 18:58:33.962111 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:33.962517 env[1126]: time="2024-04-12T18:58:33.962458799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-z4b7f,Uid:5aa0b02e-85ac-46c2-be11-06a56f71e3d1,Namespace:kube-system,Attempt:0,}" Apr 12 18:58:34.440915 kubelet[1961]: E0412 18:58:34.437067 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:34.439604 systemd[1]: run-containerd-runc-k8s.io-1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75-runc.QJ8jlN.mount: Deactivated successfully.
Apr 12 18:58:34.452441 kubelet[1961]: I0412 18:58:34.452376 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lkt7j" podStartSLOduration=6.350547585 podCreationTimestamp="2024-04-12 18:58:12 +0000 UTC" firstStartedPulling="2024-04-12 18:58:13.322180762 +0000 UTC m=+15.044067415" lastFinishedPulling="2024-04-12 18:58:29.423969509 +0000 UTC m=+31.145856162" observedRunningTime="2024-04-12 18:58:34.451256813 +0000 UTC m=+36.173143466" watchObservedRunningTime="2024-04-12 18:58:34.452336332 +0000 UTC m=+36.174222995" Apr 12 18:58:35.438459 kubelet[1961]: E0412 18:58:35.438431 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:36.228834 systemd-networkd[1021]: cilium_host: Link UP Apr 12 18:58:36.228935 systemd-networkd[1021]: cilium_net: Link UP Apr 12 18:58:36.229973 systemd-networkd[1021]: cilium_net: Gained carrier Apr 12 18:58:36.231055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:58:36.231157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:58:36.231186 systemd-networkd[1021]: cilium_host: Gained carrier Apr 12 18:58:36.284956 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:45258.service. Apr 12 18:58:36.305328 systemd-networkd[1021]: cilium_vxlan: Link UP Apr 12 18:58:36.305336 systemd-networkd[1021]: cilium_vxlan: Gained carrier Apr 12 18:58:36.319987 sshd[2876]: Accepted publickey for core from 10.0.0.1 port 45258 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:36.321196 sshd[2876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:36.324458 systemd-logind[1112]: New session 8 of user core. Apr 12 18:58:36.325213 systemd[1]: Started session-8.scope. 
Apr 12 18:58:36.431553 sshd[2876]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:36.434261 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:45258.service: Deactivated successfully. Apr 12 18:58:36.434913 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:58:36.435459 systemd-logind[1112]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:58:36.436110 systemd-logind[1112]: Removed session 8. Apr 12 18:58:36.440575 kubelet[1961]: E0412 18:58:36.440555 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:36.484412 kernel: NET: Registered PF_ALG protocol family Apr 12 18:58:36.712515 systemd-networkd[1021]: cilium_net: Gained IPv6LL Apr 12 18:58:36.968946 systemd-networkd[1021]: lxc_health: Link UP Apr 12 18:58:36.979945 systemd-networkd[1021]: lxc_health: Gained carrier Apr 12 18:58:36.980410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:58:37.096526 systemd-networkd[1021]: cilium_host: Gained IPv6LL Apr 12 18:58:37.442343 kubelet[1961]: E0412 18:58:37.442298 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:37.493913 systemd-networkd[1021]: lxc793829df521f: Link UP Apr 12 18:58:37.499415 kernel: eth0: renamed from tmp5319f Apr 12 18:58:37.508116 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:58:37.508162 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc793829df521f: link becomes ready Apr 12 18:58:37.508250 systemd-networkd[1021]: lxc793829df521f: Gained carrier Apr 12 18:58:37.517164 systemd-networkd[1021]: lxc1092a395a24b: Link UP Apr 12 18:58:37.524411 kernel: eth0: renamed from tmp9e88c Apr 12 18:58:37.531413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1092a395a24b: link becomes ready
Apr 12 18:58:37.530950 systemd-networkd[1021]: lxc1092a395a24b: Gained carrier Apr 12 18:58:38.248541 systemd-networkd[1021]: cilium_vxlan: Gained IPv6LL Apr 12 18:58:38.443310 kubelet[1961]: E0412 18:58:38.443282 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:38.504605 systemd-networkd[1021]: lxc_health: Gained IPv6LL Apr 12 18:58:38.632782 systemd-networkd[1021]: lxc1092a395a24b: Gained IPv6LL Apr 12 18:58:38.888512 systemd-networkd[1021]: lxc793829df521f: Gained IPv6LL Apr 12 18:58:39.444798 kubelet[1961]: E0412 18:58:39.444763 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:40.756298 env[1126]: time="2024-04-12T18:58:40.756221485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:58:40.756298 env[1126]: time="2024-04-12T18:58:40.756266770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:58:40.756298 env[1126]: time="2024-04-12T18:58:40.756277520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:58:40.756651 env[1126]: time="2024-04-12T18:58:40.756474540Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e88ce3c0ed6c9cd691f41fcc08f0bf97a4ca56a8f5486bf4db3e6de8331982b pid=3211 runtime=io.containerd.runc.v2 Apr 12 18:58:40.757377 env[1126]: time="2024-04-12T18:58:40.757308227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:58:40.758445 env[1126]: time="2024-04-12T18:58:40.757569878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:58:40.758445 env[1126]: time="2024-04-12T18:58:40.757589545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:58:40.758445 env[1126]: time="2024-04-12T18:58:40.757733335Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5319f0d1bdd7ecdf3ef8de887d66d01f48882d343ad9f64db0f89665cbfcaab2 pid=3219 runtime=io.containerd.runc.v2 Apr 12 18:58:40.767962 systemd[1]: Started cri-containerd-5319f0d1bdd7ecdf3ef8de887d66d01f48882d343ad9f64db0f89665cbfcaab2.scope. Apr 12 18:58:40.777629 systemd[1]: Started cri-containerd-9e88ce3c0ed6c9cd691f41fcc08f0bf97a4ca56a8f5486bf4db3e6de8331982b.scope.
Apr 12 18:58:40.784489 systemd-resolved[1064]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:58:40.788928 systemd-resolved[1064]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 12 18:58:40.809103 env[1126]: time="2024-04-12T18:58:40.809063133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-xwhmg,Uid:d74aec7f-9c8d-449b-b733-042641f933d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5319f0d1bdd7ecdf3ef8de887d66d01f48882d343ad9f64db0f89665cbfcaab2\"" Apr 12 18:58:40.809635 kubelet[1961]: E0412 18:58:40.809617 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:40.812640 env[1126]: time="2024-04-12T18:58:40.812601540Z" level=info msg="CreateContainer within sandbox \"5319f0d1bdd7ecdf3ef8de887d66d01f48882d343ad9f64db0f89665cbfcaab2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:58:40.816379 env[1126]: time="2024-04-12T18:58:40.816350943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-z4b7f,Uid:5aa0b02e-85ac-46c2-be11-06a56f71e3d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e88ce3c0ed6c9cd691f41fcc08f0bf97a4ca56a8f5486bf4db3e6de8331982b\"" Apr 12 18:58:40.817206 kubelet[1961]: E0412 18:58:40.816922 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:40.819789 env[1126]: time="2024-04-12T18:58:40.819757172Z" level=info msg="CreateContainer within sandbox \"9e88ce3c0ed6c9cd691f41fcc08f0bf97a4ca56a8f5486bf4db3e6de8331982b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:58:40.833348 env[1126]: time="2024-04-12T18:58:40.833314401Z" level=info msg="CreateContainer within sandbox \"5319f0d1bdd7ecdf3ef8de887d66d01f48882d343ad9f64db0f89665cbfcaab2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43db8a59d962d2d8d661277bbd4e75bccda209043234f74a5ac958e9c37ff038\""
Apr 12 18:58:40.834032 env[1126]: time="2024-04-12T18:58:40.833980852Z" level=info msg="StartContainer for \"43db8a59d962d2d8d661277bbd4e75bccda209043234f74a5ac958e9c37ff038\"" Apr 12 18:58:40.839905 env[1126]: time="2024-04-12T18:58:40.839870737Z" level=info msg="CreateContainer within sandbox \"9e88ce3c0ed6c9cd691f41fcc08f0bf97a4ca56a8f5486bf4db3e6de8331982b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc330ce27aa9b48d44c36c6a8958eea3c2b9bf1c91d34015cb7787df3257da06\"" Apr 12 18:58:40.840277 env[1126]: time="2024-04-12T18:58:40.840248526Z" level=info msg="StartContainer for \"dc330ce27aa9b48d44c36c6a8958eea3c2b9bf1c91d34015cb7787df3257da06\"" Apr 12 18:58:40.849541 systemd[1]: Started cri-containerd-43db8a59d962d2d8d661277bbd4e75bccda209043234f74a5ac958e9c37ff038.scope. Apr 12 18:58:40.862934 systemd[1]: Started cri-containerd-dc330ce27aa9b48d44c36c6a8958eea3c2b9bf1c91d34015cb7787df3257da06.scope. Apr 12 18:58:40.874921 env[1126]: time="2024-04-12T18:58:40.874881287Z" level=info msg="StartContainer for \"43db8a59d962d2d8d661277bbd4e75bccda209043234f74a5ac958e9c37ff038\" returns successfully" Apr 12 18:58:40.889164 env[1126]: time="2024-04-12T18:58:40.889120206Z" level=info msg="StartContainer for \"dc330ce27aa9b48d44c36c6a8958eea3c2b9bf1c91d34015cb7787df3257da06\" returns successfully" Apr 12 18:58:41.435725 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:40568.service.
Apr 12 18:58:41.448131 kubelet[1961]: E0412 18:58:41.448105 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:41.449411 kubelet[1961]: E0412 18:58:41.449379 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:41.455956 kubelet[1961]: I0412 18:58:41.455928 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-z4b7f" podStartSLOduration=29.455895981 podCreationTimestamp="2024-04-12 18:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:58:41.454938702 +0000 UTC m=+43.176825355" watchObservedRunningTime="2024-04-12 18:58:41.455895981 +0000 UTC m=+43.177782624" Apr 12 18:58:41.472288 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 40568 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:41.473580 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:41.478484 systemd[1]: Started session-9.scope. Apr 12 18:58:41.479458 systemd-logind[1112]: New session 9 of user core. Apr 12 18:58:41.584072 sshd[3366]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:41.586295 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:40568.service: Deactivated successfully. Apr 12 18:58:41.587002 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:58:41.587595 systemd-logind[1112]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:58:41.588216 systemd-logind[1112]: Removed session 9. 
Apr 12 18:58:42.450578 kubelet[1961]: E0412 18:58:42.450553 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:42.450860 kubelet[1961]: E0412 18:58:42.450686 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:43.451961 kubelet[1961]: E0412 18:58:43.451930 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:43.452257 kubelet[1961]: E0412 18:58:43.452129 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:58:46.587729 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:40572.service. Apr 12 18:58:46.621121 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 40572 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:46.622082 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:46.625191 systemd-logind[1112]: New session 10 of user core. Apr 12 18:58:46.626183 systemd[1]: Started session-10.scope. Apr 12 18:58:46.730195 sshd[3390]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:46.733749 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:40572.service: Deactivated successfully. Apr 12 18:58:46.734558 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:58:46.735167 systemd-logind[1112]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:58:46.736267 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:40588.service. Apr 12 18:58:46.737045 systemd-logind[1112]: Removed session 10. 
Apr 12 18:58:46.772290 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 40588 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:46.773291 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:46.776684 systemd-logind[1112]: New session 11 of user core. Apr 12 18:58:46.777468 systemd[1]: Started session-11.scope. Apr 12 18:58:47.385208 sshd[3404]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:47.387015 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:40600.service. Apr 12 18:58:47.407002 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:40588.service: Deactivated successfully. Apr 12 18:58:47.407687 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:58:47.408379 systemd-logind[1112]: Session 11 logged out. Waiting for processes to exit. Apr 12 18:58:47.409110 systemd-logind[1112]: Removed session 11. Apr 12 18:58:47.423514 sshd[3415]: Accepted publickey for core from 10.0.0.1 port 40600 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:47.424611 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:47.427383 systemd-logind[1112]: New session 12 of user core. Apr 12 18:58:47.428059 systemd[1]: Started session-12.scope. Apr 12 18:58:47.526881 sshd[3415]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:47.529117 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:40600.service: Deactivated successfully. Apr 12 18:58:47.529781 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:58:47.530203 systemd-logind[1112]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:58:47.530800 systemd-logind[1112]: Removed session 12. Apr 12 18:58:52.531062 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:55238.service. 
Apr 12 18:58:52.564080 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 55238 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:52.564876 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:52.567596 systemd-logind[1112]: New session 13 of user core. Apr 12 18:58:52.568503 systemd[1]: Started session-13.scope. Apr 12 18:58:52.661836 sshd[3429]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:52.663498 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:55238.service: Deactivated successfully. Apr 12 18:58:52.664094 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:58:52.664614 systemd-logind[1112]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:58:52.665209 systemd-logind[1112]: Removed session 13. Apr 12 18:58:57.665964 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:55242.service. Apr 12 18:58:57.700301 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:57.701304 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:57.704188 systemd-logind[1112]: New session 14 of user core. Apr 12 18:58:57.705132 systemd[1]: Started session-14.scope. Apr 12 18:58:57.801622 sshd[3442]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:57.804419 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:55242.service: Deactivated successfully. Apr 12 18:58:57.805020 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:58:57.805649 systemd-logind[1112]: Session 14 logged out. Waiting for processes to exit. Apr 12 18:58:57.806679 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:55248.service. Apr 12 18:58:57.807495 systemd-logind[1112]: Removed session 14. 
Apr 12 18:58:57.840718 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 55248 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:57.841679 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:57.844605 systemd-logind[1112]: New session 15 of user core. Apr 12 18:58:57.845433 systemd[1]: Started session-15.scope. Apr 12 18:58:58.000445 sshd[3455]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:58.003492 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:55248.service: Deactivated successfully. Apr 12 18:58:58.004154 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:58:58.004863 systemd-logind[1112]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:58:58.005942 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:55258.service. Apr 12 18:58:58.006689 systemd-logind[1112]: Removed session 15. Apr 12 18:58:58.041771 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:58.042836 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:58.045918 systemd-logind[1112]: New session 16 of user core. Apr 12 18:58:58.046714 systemd[1]: Started session-16.scope. Apr 12 18:58:58.875053 sshd[3466]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:58.879023 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:55258.service: Deactivated successfully. Apr 12 18:58:58.879591 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:58:58.881812 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:55264.service. Apr 12 18:58:58.882664 systemd-logind[1112]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:58:58.885409 systemd-logind[1112]: Removed session 16. 
Apr 12 18:58:58.917338 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:58.918565 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:58.921806 systemd-logind[1112]: New session 17 of user core. Apr 12 18:58:58.922579 systemd[1]: Started session-17.scope. Apr 12 18:58:59.191169 sshd[3488]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:59.194853 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:49772.service. Apr 12 18:58:59.197198 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:55264.service: Deactivated successfully. Apr 12 18:58:59.197798 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:58:59.199964 systemd-logind[1112]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:58:59.200957 systemd-logind[1112]: Removed session 17. Apr 12 18:58:59.230804 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 49772 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:58:59.232095 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:59.235335 systemd-logind[1112]: New session 18 of user core. Apr 12 18:58:59.236076 systemd[1]: Started session-18.scope. Apr 12 18:58:59.337452 sshd[3499]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:59.339655 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:49772.service: Deactivated successfully. Apr 12 18:58:59.340411 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:58:59.341068 systemd-logind[1112]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:58:59.341731 systemd-logind[1112]: Removed session 18. Apr 12 18:59:04.341942 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:49786.service. 
Apr 12 18:59:04.375909 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 49786 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:59:04.377040 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:59:04.380959 systemd-logind[1112]: New session 19 of user core.
Apr 12 18:59:04.381879 systemd[1]: Started session-19.scope.
Apr 12 18:59:04.481765 sshd[3513]: pam_unix(sshd:session): session closed for user core
Apr 12 18:59:04.484684 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:49786.service: Deactivated successfully.
Apr 12 18:59:04.485482 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:59:04.485963 systemd-logind[1112]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:59:04.486640 systemd-logind[1112]: Removed session 19.
Apr 12 18:59:09.486149 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:42718.service.
Apr 12 18:59:09.521246 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 42718 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:59:09.522369 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:59:09.525595 systemd-logind[1112]: New session 20 of user core.
Apr 12 18:59:09.526362 systemd[1]: Started session-20.scope.
Apr 12 18:59:09.640323 sshd[3530]: pam_unix(sshd:session): session closed for user core
Apr 12 18:59:09.642421 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:42718.service: Deactivated successfully.
Apr 12 18:59:09.643177 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:59:09.643833 systemd-logind[1112]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:59:09.644680 systemd-logind[1112]: Removed session 20.
Apr 12 18:59:14.644420 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:42720.service.
Apr 12 18:59:14.678327 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 42720 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:59:14.679291 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:59:14.681965 systemd-logind[1112]: New session 21 of user core.
Apr 12 18:59:14.682670 systemd[1]: Started session-21.scope.
Apr 12 18:59:14.778968 sshd[3546]: pam_unix(sshd:session): session closed for user core
Apr 12 18:59:14.781039 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:42720.service: Deactivated successfully.
Apr 12 18:59:14.781715 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 18:59:14.782220 systemd-logind[1112]: Session 21 logged out. Waiting for processes to exit.
Apr 12 18:59:14.782802 systemd-logind[1112]: Removed session 21.
Apr 12 18:59:19.358272 kubelet[1961]: E0412 18:59:19.358242 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:19.783403 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:45136.service.
Apr 12 18:59:19.816949 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 45136 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:59:19.817735 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:59:19.820698 systemd-logind[1112]: New session 22 of user core.
Apr 12 18:59:19.821539 systemd[1]: Started session-22.scope.
Apr 12 18:59:19.918579 sshd[3559]: pam_unix(sshd:session): session closed for user core
Apr 12 18:59:19.921155 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:45136.service: Deactivated successfully.
Apr 12 18:59:19.921814 systemd[1]: session-22.scope: Deactivated successfully.
Apr 12 18:59:19.922367 systemd-logind[1112]: Session 22 logged out. Waiting for processes to exit.
Apr 12 18:59:19.923433 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:45142.service.
Apr 12 18:59:19.924213 systemd-logind[1112]: Removed session 22.
Apr 12 18:59:19.956805 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 45142 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw
Apr 12 18:59:19.957939 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:59:19.961131 systemd-logind[1112]: New session 23 of user core.
Apr 12 18:59:19.961981 systemd[1]: Started session-23.scope.
Apr 12 18:59:21.263172 kubelet[1961]: I0412 18:59:21.263137 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-xwhmg" podStartSLOduration=69.263068902 podCreationTimestamp="2024-04-12 18:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:58:41.473651313 +0000 UTC m=+43.195537986" watchObservedRunningTime="2024-04-12 18:59:21.263068902 +0000 UTC m=+82.984955555"
Apr 12 18:59:21.267975 env[1126]: time="2024-04-12T18:59:21.267931538Z" level=info msg="StopContainer for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" with timeout 30 (s)"
Apr 12 18:59:21.268349 env[1126]: time="2024-04-12T18:59:21.268274463Z" level=info msg="Stop container \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" with signal terminated"
Apr 12 18:59:21.278937 systemd[1]: cri-containerd-6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec.scope: Deactivated successfully.
Apr 12 18:59:21.289330 env[1126]: time="2024-04-12T18:59:21.289133297Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:59:21.294170 env[1126]: time="2024-04-12T18:59:21.294140769Z" level=info msg="StopContainer for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" with timeout 1 (s)"
Apr 12 18:59:21.294525 env[1126]: time="2024-04-12T18:59:21.294510685Z" level=info msg="Stop container \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" with signal terminated"
Apr 12 18:59:21.297160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec-rootfs.mount: Deactivated successfully.
Apr 12 18:59:21.300622 systemd-networkd[1021]: lxc_health: Link DOWN
Apr 12 18:59:21.300631 systemd-networkd[1021]: lxc_health: Lost carrier
Apr 12 18:59:21.308788 env[1126]: time="2024-04-12T18:59:21.308746130Z" level=info msg="shim disconnected" id=6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec
Apr 12 18:59:21.308887 env[1126]: time="2024-04-12T18:59:21.308793891Z" level=warning msg="cleaning up after shim disconnected" id=6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec namespace=k8s.io
Apr 12 18:59:21.308887 env[1126]: time="2024-04-12T18:59:21.308811515Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:21.319686 env[1126]: time="2024-04-12T18:59:21.319630997Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3626 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:21.322495 env[1126]: time="2024-04-12T18:59:21.322452936Z" level=info msg="StopContainer for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" returns successfully"
Apr 12 18:59:21.323067 env[1126]: time="2024-04-12T18:59:21.323042982Z" level=info msg="StopPodSandbox for \"d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da\""
Apr 12 18:59:21.323137 env[1126]: time="2024-04-12T18:59:21.323118427Z" level=info msg="Container to stop \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:59:21.324573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da-shm.mount: Deactivated successfully.
Apr 12 18:59:21.330108 systemd[1]: cri-containerd-d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da.scope: Deactivated successfully.
Apr 12 18:59:21.334735 systemd[1]: cri-containerd-1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75.scope: Deactivated successfully.
Apr 12 18:59:21.334911 systemd[1]: cri-containerd-1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75.scope: Consumed 5.795s CPU time.
Apr 12 18:59:21.347861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75-rootfs.mount: Deactivated successfully.
Apr 12 18:59:21.351248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da-rootfs.mount: Deactivated successfully.
Apr 12 18:59:21.354110 env[1126]: time="2024-04-12T18:59:21.354066457Z" level=info msg="shim disconnected" id=1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75
Apr 12 18:59:21.354251 env[1126]: time="2024-04-12T18:59:21.354127384Z" level=warning msg="cleaning up after shim disconnected" id=1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75 namespace=k8s.io
Apr 12 18:59:21.354251 env[1126]: time="2024-04-12T18:59:21.354143424Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:21.354788 env[1126]: time="2024-04-12T18:59:21.354763187Z" level=info msg="shim disconnected" id=d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da
Apr 12 18:59:21.354860 env[1126]: time="2024-04-12T18:59:21.354788857Z" level=warning msg="cleaning up after shim disconnected" id=d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da namespace=k8s.io
Apr 12 18:59:21.354860 env[1126]: time="2024-04-12T18:59:21.354796190Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:21.360173 env[1126]: time="2024-04-12T18:59:21.360123443Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3673 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:21.360370 env[1126]: time="2024-04-12T18:59:21.360337072Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:21.360571 env[1126]: time="2024-04-12T18:59:21.360543697Z" level=info msg="TearDown network for sandbox \"d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da\" successfully"
Apr 12 18:59:21.360621 env[1126]: time="2024-04-12T18:59:21.360571770Z" level=info msg="StopPodSandbox for \"d048bf47cdff66b9edc8b7e00f1b053bc8fd4b328904add7762a1c420221b8da\" returns successfully"
Apr 12 18:59:21.363151 env[1126]: time="2024-04-12T18:59:21.363126389Z" level=info msg="StopContainer for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" returns successfully"
Apr 12 18:59:21.363472 env[1126]: time="2024-04-12T18:59:21.363453313Z" level=info msg="StopPodSandbox for \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\""
Apr 12 18:59:21.363532 env[1126]: time="2024-04-12T18:59:21.363501905Z" level=info msg="Container to stop \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:59:21.363532 env[1126]: time="2024-04-12T18:59:21.363514810Z" level=info msg="Container to stop \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:59:21.363532 env[1126]: time="2024-04-12T18:59:21.363524589Z" level=info msg="Container to stop \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:59:21.363669 env[1126]: time="2024-04-12T18:59:21.363532984Z" level=info msg="Container to stop \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:59:21.363669 env[1126]: time="2024-04-12T18:59:21.363541081Z" level=info msg="Container to stop \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:59:21.367929 systemd[1]: cri-containerd-1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956.scope: Deactivated successfully.
Apr 12 18:59:21.391136 env[1126]: time="2024-04-12T18:59:21.391078377Z" level=info msg="shim disconnected" id=1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956
Apr 12 18:59:21.391136 env[1126]: time="2024-04-12T18:59:21.391119055Z" level=warning msg="cleaning up after shim disconnected" id=1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956 namespace=k8s.io
Apr 12 18:59:21.391136 env[1126]: time="2024-04-12T18:59:21.391127000Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:21.397481 env[1126]: time="2024-04-12T18:59:21.397435768Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3713 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:21.397769 env[1126]: time="2024-04-12T18:59:21.397741632Z" level=info msg="TearDown network for sandbox \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" successfully"
Apr 12 18:59:21.397806 env[1126]: time="2024-04-12T18:59:21.397771339Z" level=info msg="StopPodSandbox for \"1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956\" returns successfully"
Apr 12 18:59:21.481170 kubelet[1961]: I0412 18:59:21.481133 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a0d6105-f6ba-407d-8a45-f80f3006a912-cilium-config-path\") pod \"8a0d6105-f6ba-407d-8a45-f80f3006a912\" (UID: \"8a0d6105-f6ba-407d-8a45-f80f3006a912\") "
Apr 12 18:59:21.481170 kubelet[1961]: I0412 18:59:21.481174 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-cgroup\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481365 kubelet[1961]: I0412 18:59:21.481196 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cni-path\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481365 kubelet[1961]: I0412 18:59:21.481217 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-xtables-lock\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481365 kubelet[1961]: I0412 18:59:21.481243 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjs2v\" (UniqueName: \"kubernetes.io/projected/8a0d6105-f6ba-407d-8a45-f80f3006a912-kube-api-access-gjs2v\") pod \"8a0d6105-f6ba-407d-8a45-f80f3006a912\" (UID: \"8a0d6105-f6ba-407d-8a45-f80f3006a912\") "
Apr 12 18:59:21.481365 kubelet[1961]: I0412 18:59:21.481264 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-run\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481365 kubelet[1961]: I0412 18:59:21.481254 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:59:21.481365 kubelet[1961]: I0412 18:59:21.481291 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hubble-tls\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481529 kubelet[1961]: I0412 18:59:21.481344 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-kernel\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481529 kubelet[1961]: I0412 18:59:21.481372 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-bpf-maps\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481529 kubelet[1961]: W0412 18:59:21.481356 1961 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8a0d6105-f6ba-407d-8a45-f80f3006a912/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Apr 12 18:59:21.481529 kubelet[1961]: I0412 18:59:21.481432 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-config-path\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481529 kubelet[1961]: I0412 18:59:21.481460 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-clustermesh-secrets\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.481529 kubelet[1961]: I0412 18:59:21.481502 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 12 18:59:21.481740 kubelet[1961]: I0412 18:59:21.481695 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:59:21.481740 kubelet[1961]: I0412 18:59:21.481711 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:59:21.481740 kubelet[1961]: I0412 18:59:21.481695 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:59:21.481740 kubelet[1961]: I0412 18:59:21.481725 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:59:21.481912 kubelet[1961]: W0412 18:59:21.481851 1961 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Apr 12 18:59:21.481912 kubelet[1961]: I0412 18:59:21.481869 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:59:21.482964 kubelet[1961]: I0412 18:59:21.482946 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0d6105-f6ba-407d-8a45-f80f3006a912-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a0d6105-f6ba-407d-8a45-f80f3006a912" (UID: "8a0d6105-f6ba-407d-8a45-f80f3006a912"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:59:21.483911 kubelet[1961]: I0412 18:59:21.483890 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:59:21.484099 kubelet[1961]: I0412 18:59:21.484074 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:59:21.484157 kubelet[1961]: I0412 18:59:21.484134 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:59:21.484856 kubelet[1961]: I0412 18:59:21.484830 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0d6105-f6ba-407d-8a45-f80f3006a912-kube-api-access-gjs2v" (OuterVolumeSpecName: "kube-api-access-gjs2v") pod "8a0d6105-f6ba-407d-8a45-f80f3006a912" (UID: "8a0d6105-f6ba-407d-8a45-f80f3006a912"). InnerVolumeSpecName "kube-api-access-gjs2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:59:21.514472 kubelet[1961]: I0412 18:59:21.514356 1961 scope.go:115] "RemoveContainer" containerID="1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75"
Apr 12 18:59:21.515695 env[1126]: time="2024-04-12T18:59:21.515669195Z" level=info msg="RemoveContainer for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\""
Apr 12 18:59:21.519599 systemd[1]: Removed slice kubepods-besteffort-pod8a0d6105_f6ba_407d_8a45_f80f3006a912.slice.
Apr 12 18:59:21.520947 env[1126]: time="2024-04-12T18:59:21.520915583Z" level=info msg="RemoveContainer for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" returns successfully"
Apr 12 18:59:21.521184 kubelet[1961]: I0412 18:59:21.521163 1961 scope.go:115] "RemoveContainer" containerID="8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738"
Apr 12 18:59:21.522038 env[1126]: time="2024-04-12T18:59:21.522016807Z" level=info msg="RemoveContainer for \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\""
Apr 12 18:59:21.524651 env[1126]: time="2024-04-12T18:59:21.524618765Z" level=info msg="RemoveContainer for \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\" returns successfully"
Apr 12 18:59:21.524845 kubelet[1961]: I0412 18:59:21.524825 1961 scope.go:115] "RemoveContainer" containerID="fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6"
Apr 12 18:59:21.526111 env[1126]: time="2024-04-12T18:59:21.526084614Z" level=info msg="RemoveContainer for \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\""
Apr 12 18:59:21.528562 env[1126]: time="2024-04-12T18:59:21.528541617Z" level=info msg="RemoveContainer for \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\" returns successfully"
Apr 12 18:59:21.528699 kubelet[1961]: I0412 18:59:21.528685 1961 scope.go:115] "RemoveContainer" containerID="13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f"
Apr 12 18:59:21.529349 env[1126]: time="2024-04-12T18:59:21.529326886Z" level=info msg="RemoveContainer for \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\""
Apr 12 18:59:21.531751 env[1126]: time="2024-04-12T18:59:21.531733111Z" level=info msg="RemoveContainer for \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\" returns successfully"
Apr 12 18:59:21.531851 kubelet[1961]: I0412 18:59:21.531835 1961 scope.go:115] "RemoveContainer" containerID="a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e"
Apr 12 18:59:21.532505 env[1126]: time="2024-04-12T18:59:21.532484677Z" level=info msg="RemoveContainer for \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\""
Apr 12 18:59:21.535273 env[1126]: time="2024-04-12T18:59:21.535243505Z" level=info msg="RemoveContainer for \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\" returns successfully"
Apr 12 18:59:21.535385 kubelet[1961]: I0412 18:59:21.535366 1961 scope.go:115] "RemoveContainer" containerID="1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75"
Apr 12 18:59:21.535617 env[1126]: time="2024-04-12T18:59:21.535538589Z" level=error msg="ContainerStatus for \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\": not found"
Apr 12 18:59:21.535747 kubelet[1961]: E0412 18:59:21.535734 1961 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\": not found" containerID="1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75"
Apr 12 18:59:21.535844 kubelet[1961]: I0412 18:59:21.535828 1961 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75} err="failed to get container status \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\": rpc error: code = NotFound desc = an error occurred when try to find container \"1343e60f2d54dd1cd2b34473bb1af4a4a8098186556e20a4d366a0bec3e26a75\": not found"
Apr 12 18:59:21.535844 kubelet[1961]: I0412 18:59:21.535845 1961 scope.go:115] "RemoveContainer" containerID="8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738"
Apr 12 18:59:21.536020 env[1126]: time="2024-04-12T18:59:21.535969691Z" level=error msg="ContainerStatus for \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\": not found"
Apr 12 18:59:21.536124 kubelet[1961]: E0412 18:59:21.536100 1961 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\": not found" containerID="8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738"
Apr 12 18:59:21.536167 kubelet[1961]: I0412 18:59:21.536144 1961 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738} err="failed to get container status \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\": rpc error: code = NotFound desc = an error occurred when try to find container \"8af8e3b0ffe798c607bb8e3ea7a0037b8b227bfa5bdf3ee08ab2a8e4879f3738\": not found"
Apr 12 18:59:21.536167 kubelet[1961]: I0412 18:59:21.536154 1961 scope.go:115] "RemoveContainer" containerID="fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6"
Apr 12 18:59:21.536314 env[1126]: time="2024-04-12T18:59:21.536271919Z" level=error msg="ContainerStatus for \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\": not found"
Apr 12 18:59:21.536420 kubelet[1961]: E0412 18:59:21.536406 1961 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\": not found" containerID="fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6"
Apr 12 18:59:21.536454 kubelet[1961]: I0412 18:59:21.536436 1961 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6} err="failed to get container status \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"fca798557cf820637dc634ff1ffa4f91ee6d7a28e33ace74e97c12601f2920a6\": not found"
Apr 12 18:59:21.536454 kubelet[1961]: I0412 18:59:21.536446 1961 scope.go:115] "RemoveContainer" containerID="13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f"
Apr 12 18:59:21.536628 env[1126]: time="2024-04-12T18:59:21.536580658Z" level=error msg="ContainerStatus for \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\": not found"
Apr 12 18:59:21.536736 kubelet[1961]: E0412 18:59:21.536712 1961 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\": not found" containerID="13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f"
Apr 12 18:59:21.536784 kubelet[1961]: I0412 18:59:21.536758 1961 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f} err="failed to get container status \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\": rpc error: code = NotFound desc = an error occurred when try to find container \"13f5d2611f6551a76986fdc7ad0f47e681ee06afcb83bd993b0c06ee1983e65f\": not found"
Apr 12 18:59:21.536784 kubelet[1961]: I0412 18:59:21.536770 1961 scope.go:115] "RemoveContainer" containerID="a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e"
Apr 12 18:59:21.536951 env[1126]: time="2024-04-12T18:59:21.536912301Z" level=error msg="ContainerStatus for \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\": not found"
Apr 12 18:59:21.537030 kubelet[1961]: E0412 18:59:21.537019 1961 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\": not found" containerID="a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e"
Apr 12 18:59:21.537058 kubelet[1961]: I0412 18:59:21.537045 1961 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e} err="failed to get container status \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a12bca2d21a05f2eb2bcf2f35b9fe615aabc726c932a3350b3d5e3f16ff1fa1e\": not found"
Apr 12 18:59:21.537058 kubelet[1961]: I0412 18:59:21.537052 1961 scope.go:115] "RemoveContainer" containerID="6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec"
Apr 12 18:59:21.537840 env[1126]: time="2024-04-12T18:59:21.537820446Z" level=info msg="RemoveContainer for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\""
Apr 12 18:59:21.540711 env[1126]: time="2024-04-12T18:59:21.540687341Z" level=info msg="RemoveContainer for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" returns successfully"
Apr 12 18:59:21.540829 kubelet[1961]: I0412 18:59:21.540811 1961 scope.go:115] "RemoveContainer" containerID="6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec"
Apr 12 18:59:21.541101 env[1126]: time="2024-04-12T18:59:21.541054923Z" level=error msg="ContainerStatus for \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\": not found"
Apr 12 18:59:21.541211 kubelet[1961]: E0412 18:59:21.541195 1961 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\": not found" containerID="6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec"
Apr 12 18:59:21.541296 kubelet[1961]: I0412 18:59:21.541234 1961 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec} err="failed to get container status \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c2f41e6784eb9d8db15aea9cbcc9b42b311bcfb359e1a163d337c7f4f0565ec\": not found"
Apr 12 18:59:21.582566 kubelet[1961]: I0412 18:59:21.582532 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hostproc\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.582566 kubelet[1961]: I0412 18:59:21.582558 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-etc-cni-netd\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.582704 kubelet[1961]: I0412 18:59:21.582581 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcxf\" (UniqueName: \"kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-kube-api-access-mlcxf\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.582704 kubelet[1961]: I0412 18:59:21.582600 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-net\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.582704 kubelet[1961]: I0412 18:59:21.582614 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-lib-modules\") pod \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\" (UID: \"3ed6edd5-9a19-4a45-b47e-2d30e1511fe5\") "
Apr 12 18:59:21.582704 kubelet[1961]: I0412 18:59:21.582630 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:21.582704 kubelet[1961]: I0412 18:59:21.582660 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:21.582704 kubelet[1961]: I0412 18:59:21.582647 1961 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582679 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582675 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582687 1961 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582697 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a0d6105-f6ba-407d-8a45-f80f3006a912-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582697 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582706 1961 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gjs2v\" (UniqueName: \"kubernetes.io/projected/8a0d6105-f6ba-407d-8a45-f80f3006a912-kube-api-access-gjs2v\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.582882 kubelet[1961]: I0412 18:59:21.582714 1961 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.583083 kubelet[1961]: I0412 18:59:21.582722 1961 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.583083 kubelet[1961]: I0412 18:59:21.582729 1961 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.583083 kubelet[1961]: I0412 18:59:21.582737 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.583083 kubelet[1961]: I0412 18:59:21.582744 1961 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.584783 kubelet[1961]: I0412 18:59:21.584765 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-kube-api-access-mlcxf" (OuterVolumeSpecName: "kube-api-access-mlcxf") pod "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" (UID: "3ed6edd5-9a19-4a45-b47e-2d30e1511fe5"). InnerVolumeSpecName "kube-api-access-mlcxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:59:21.683029 kubelet[1961]: I0412 18:59:21.683003 1961 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.683029 kubelet[1961]: I0412 18:59:21.683025 1961 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.683119 kubelet[1961]: I0412 18:59:21.683037 1961 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mlcxf\" (UniqueName: \"kubernetes.io/projected/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-kube-api-access-mlcxf\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.683119 kubelet[1961]: I0412 18:59:21.683044 1961 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.683119 kubelet[1961]: I0412 18:59:21.683052 1961 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:21.817633 systemd[1]: Removed slice kubepods-burstable-pod3ed6edd5_9a19_4a45_b47e_2d30e1511fe5.slice. Apr 12 18:59:21.817710 systemd[1]: kubepods-burstable-pod3ed6edd5_9a19_4a45_b47e_2d30e1511fe5.slice: Consumed 5.882s CPU time. Apr 12 18:59:22.275548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956-rootfs.mount: Deactivated successfully. 
Apr 12 18:59:22.275655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f8c8a633fd54379da0ce7a41c8af8277bb6e32fa595d16ac5cf017a1cea1956-shm.mount: Deactivated successfully. Apr 12 18:59:22.275748 systemd[1]: var-lib-kubelet-pods-8a0d6105\x2df6ba\x2d407d\x2d8a45\x2df80f3006a912-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjs2v.mount: Deactivated successfully. Apr 12 18:59:22.275825 systemd[1]: var-lib-kubelet-pods-3ed6edd5\x2d9a19\x2d4a45\x2db47e\x2d2d30e1511fe5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlcxf.mount: Deactivated successfully. Apr 12 18:59:22.275903 systemd[1]: var-lib-kubelet-pods-3ed6edd5\x2d9a19\x2d4a45\x2db47e\x2d2d30e1511fe5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:59:22.275975 systemd[1]: var-lib-kubelet-pods-3ed6edd5\x2d9a19\x2d4a45\x2db47e\x2d2d30e1511fe5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:59:22.360301 kubelet[1961]: I0412 18:59:22.360276 1961 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3ed6edd5-9a19-4a45-b47e-2d30e1511fe5 path="/var/lib/kubelet/pods/3ed6edd5-9a19-4a45-b47e-2d30e1511fe5/volumes" Apr 12 18:59:22.360788 kubelet[1961]: I0412 18:59:22.360771 1961 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8a0d6105-f6ba-407d-8a45-f80f3006a912 path="/var/lib/kubelet/pods/8a0d6105-f6ba-407d-8a45-f80f3006a912/volumes" Apr 12 18:59:23.248717 sshd[3572]: pam_unix(sshd:session): session closed for user core Apr 12 18:59:23.251152 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:45142.service: Deactivated successfully. Apr 12 18:59:23.251719 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:59:23.252245 systemd-logind[1112]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:59:23.253234 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:45144.service. Apr 12 18:59:23.254037 systemd-logind[1112]: Removed session 23. 
Apr 12 18:59:23.287684 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 45144 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:59:23.288485 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:59:23.291423 systemd-logind[1112]: New session 24 of user core. Apr 12 18:59:23.292084 systemd[1]: Started session-24.scope. Apr 12 18:59:23.403891 kubelet[1961]: E0412 18:59:23.403861 1961 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:59:23.851451 sshd[3731]: pam_unix(sshd:session): session closed for user core Apr 12 18:59:23.853621 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:45160.service. Apr 12 18:59:23.856137 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:45144.service: Deactivated successfully. Apr 12 18:59:23.856808 systemd-logind[1112]: Session 24 logged out. Waiting for processes to exit. Apr 12 18:59:23.856963 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:59:23.858673 systemd-logind[1112]: Removed session 24. 
Apr 12 18:59:23.862145 kubelet[1961]: I0412 18:59:23.862107 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:59:23.862252 kubelet[1961]: E0412 18:59:23.862155 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" containerName="apply-sysctl-overwrites" Apr 12 18:59:23.862252 kubelet[1961]: E0412 18:59:23.862163 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" containerName="clean-cilium-state" Apr 12 18:59:23.862252 kubelet[1961]: E0412 18:59:23.862169 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" containerName="cilium-agent" Apr 12 18:59:23.862252 kubelet[1961]: E0412 18:59:23.862175 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" containerName="mount-cgroup" Apr 12 18:59:23.862252 kubelet[1961]: E0412 18:59:23.862181 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" containerName="mount-bpf-fs" Apr 12 18:59:23.862252 kubelet[1961]: E0412 18:59:23.862188 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a0d6105-f6ba-407d-8a45-f80f3006a912" containerName="cilium-operator" Apr 12 18:59:23.862252 kubelet[1961]: I0412 18:59:23.862208 1961 memory_manager.go:346] "RemoveStaleState removing state" podUID="3ed6edd5-9a19-4a45-b47e-2d30e1511fe5" containerName="cilium-agent" Apr 12 18:59:23.862252 kubelet[1961]: I0412 18:59:23.862213 1961 memory_manager.go:346] "RemoveStaleState removing state" podUID="8a0d6105-f6ba-407d-8a45-f80f3006a912" containerName="cilium-operator" Apr 12 18:59:23.867916 systemd[1]: Created slice kubepods-burstable-podb0a3723f_3f42_4b35_87d6_2f7b185fec22.slice. 
Apr 12 18:59:23.894111 kubelet[1961]: I0412 18:59:23.894075 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-run\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894240 kubelet[1961]: I0412 18:59:23.894124 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-lib-modules\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894240 kubelet[1961]: I0412 18:59:23.894150 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-bpf-maps\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894240 kubelet[1961]: I0412 18:59:23.894173 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-config-path\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894240 kubelet[1961]: I0412 18:59:23.894196 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-net\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894240 kubelet[1961]: I0412 18:59:23.894223 1961 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-kernel\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894480 kubelet[1961]: I0412 18:59:23.894263 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-etc-cni-netd\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894480 kubelet[1961]: I0412 18:59:23.894294 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-cgroup\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894480 kubelet[1961]: I0412 18:59:23.894324 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cni-path\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894480 kubelet[1961]: I0412 18:59:23.894357 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-ipsec-secrets\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894480 kubelet[1961]: I0412 18:59:23.894381 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-xtables-lock\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894480 kubelet[1961]: I0412 18:59:23.894421 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hostproc\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894611 kubelet[1961]: I0412 18:59:23.894458 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-clustermesh-secrets\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.894611 kubelet[1961]: I0412 18:59:23.894487 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hubble-tls\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:23.901038 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 45160 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:59:23.901381 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:59:23.904735 systemd-logind[1112]: New session 25 of user core. Apr 12 18:59:23.905079 systemd[1]: Started session-25.scope. 
Apr 12 18:59:23.994966 kubelet[1961]: I0412 18:59:23.994933 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ph7\" (UniqueName: \"kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-kube-api-access-t9ph7\") pod \"cilium-khv99\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " pod="kube-system/cilium-khv99" Apr 12 18:59:24.013958 sshd[3742]: pam_unix(sshd:session): session closed for user core Apr 12 18:59:24.017798 systemd[1]: Started sshd@25-10.0.0.142:22-10.0.0.1:45162.service. Apr 12 18:59:24.021812 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:45160.service: Deactivated successfully. Apr 12 18:59:24.022650 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:59:24.025462 systemd-logind[1112]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:59:24.026308 systemd-logind[1112]: Removed session 25. Apr 12 18:59:24.055698 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 45162 ssh2: RSA SHA256:YcqR9Dqo/1Ybntt1aIORABiFzXA47j16nwHTSfCmLBw Apr 12 18:59:24.056625 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:59:24.059372 systemd-logind[1112]: New session 26 of user core. Apr 12 18:59:24.060100 systemd[1]: Started session-26.scope. Apr 12 18:59:24.170761 kubelet[1961]: E0412 18:59:24.170647 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:59:24.171765 env[1126]: time="2024-04-12T18:59:24.171241110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khv99,Uid:b0a3723f-3f42-4b35-87d6-2f7b185fec22,Namespace:kube-system,Attempt:0,}" Apr 12 18:59:24.184458 env[1126]: time="2024-04-12T18:59:24.183690844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:59:24.184458 env[1126]: time="2024-04-12T18:59:24.183727113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:59:24.184458 env[1126]: time="2024-04-12T18:59:24.183736100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:59:24.184458 env[1126]: time="2024-04-12T18:59:24.183823467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b pid=3780 runtime=io.containerd.runc.v2 Apr 12 18:59:24.195305 systemd[1]: Started cri-containerd-851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b.scope. Apr 12 18:59:24.213834 env[1126]: time="2024-04-12T18:59:24.213795810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khv99,Uid:b0a3723f-3f42-4b35-87d6-2f7b185fec22,Namespace:kube-system,Attempt:0,} returns sandbox id \"851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b\"" Apr 12 18:59:24.214402 kubelet[1961]: E0412 18:59:24.214371 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:59:24.216148 env[1126]: time="2024-04-12T18:59:24.216117628Z" level=info msg="CreateContainer within sandbox \"851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:59:24.227205 env[1126]: time="2024-04-12T18:59:24.227166490Z" level=info msg="CreateContainer within sandbox \"851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\"" Apr 12 18:59:24.227570 env[1126]: time="2024-04-12T18:59:24.227511699Z" level=info msg="StartContainer for \"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\"" Apr 12 18:59:24.238939 systemd[1]: Started cri-containerd-ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb.scope. Apr 12 18:59:24.249461 systemd[1]: cri-containerd-ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb.scope: Deactivated successfully. Apr 12 18:59:24.249698 systemd[1]: Stopped cri-containerd-ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb.scope. Apr 12 18:59:24.272076 env[1126]: time="2024-04-12T18:59:24.272032228Z" level=info msg="shim disconnected" id=ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb Apr 12 18:59:24.272183 env[1126]: time="2024-04-12T18:59:24.272076602Z" level=warning msg="cleaning up after shim disconnected" id=ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb namespace=k8s.io Apr 12 18:59:24.272183 env[1126]: time="2024-04-12T18:59:24.272084687Z" level=info msg="cleaning up dead shim" Apr 12 18:59:24.278231 env[1126]: time="2024-04-12T18:59:24.278192777Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3838 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:59:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Apr 12 18:59:24.278489 env[1126]: time="2024-04-12T18:59:24.278406635Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Apr 12 18:59:24.279253 env[1126]: time="2024-04-12T18:59:24.279215367Z" level=error msg="Failed to pipe stderr of container 
\"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\"" error="reading from a closed fifo" Apr 12 18:59:24.279253 env[1126]: time="2024-04-12T18:59:24.279232360Z" level=error msg="Failed to pipe stdout of container \"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\"" error="reading from a closed fifo" Apr 12 18:59:24.281781 env[1126]: time="2024-04-12T18:59:24.281743449Z" level=error msg="StartContainer for \"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Apr 12 18:59:24.282017 kubelet[1961]: E0412 18:59:24.281990 1961 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb" Apr 12 18:59:24.282136 kubelet[1961]: E0412 18:59:24.282122 1961 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Apr 12 18:59:24.282136 kubelet[1961]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Apr 12 18:59:24.282136 kubelet[1961]: rm /hostbin/cilium-mount Apr 12 18:59:24.282210 kubelet[1961]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-t9ph7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-khv99_kube-system(b0a3723f-3f42-4b35-87d6-2f7b185fec22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Apr 12 18:59:24.282210 kubelet[1961]: E0412 18:59:24.282167 1961 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-khv99" podUID=b0a3723f-3f42-4b35-87d6-2f7b185fec22 Apr 12 18:59:24.521597 env[1126]: time="2024-04-12T18:59:24.521561356Z" level=info msg="StopPodSandbox for \"851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b\"" Apr 12 18:59:24.521710 env[1126]: time="2024-04-12T18:59:24.521613195Z" level=info msg="Container to stop \"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:59:24.526269 systemd[1]: cri-containerd-851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b.scope: Deactivated successfully. Apr 12 18:59:24.576351 env[1126]: time="2024-04-12T18:59:24.576302739Z" level=info msg="shim disconnected" id=851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b Apr 12 18:59:24.576578 env[1126]: time="2024-04-12T18:59:24.576355479Z" level=warning msg="cleaning up after shim disconnected" id=851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b namespace=k8s.io Apr 12 18:59:24.576578 env[1126]: time="2024-04-12T18:59:24.576365358Z" level=info msg="cleaning up dead shim" Apr 12 18:59:24.581968 env[1126]: time="2024-04-12T18:59:24.581945630Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3867 runtime=io.containerd.runc.v2\n" Apr 12 18:59:24.582224 env[1126]: time="2024-04-12T18:59:24.582194414Z" level=info msg="TearDown network for sandbox \"851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b\" successfully" Apr 12 18:59:24.582224 env[1126]: time="2024-04-12T18:59:24.582215725Z" level=info msg="StopPodSandbox for \"851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b\" returns successfully" Apr 12 18:59:24.698334 kubelet[1961]: I0412 18:59:24.698307 1961 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hostproc\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698346 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-ipsec-secrets\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698363 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-xtables-lock\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698381 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hubble-tls\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698413 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698437 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-lib-modules\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698453 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698471 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-bpf-maps\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698512 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-clustermesh-secrets\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698545 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-cgroup\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698579 1961 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-run\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698625 kubelet[1961]: I0412 18:59:24.698610 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-config-path\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698638 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-kernel\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698685 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-etc-cni-netd\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698700 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698716 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9ph7\" (UniqueName: \"kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-kube-api-access-t9ph7\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698719 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698733 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698745 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-net\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698775 1961 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cni-path\") pod \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\" (UID: \"b0a3723f-3f42-4b35-87d6-2f7b185fec22\") " Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698814 1961 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698832 1961 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698848 1961 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698865 1961 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.698892 kubelet[1961]: I0412 18:59:24.698880 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.699186 kubelet[1961]: I0412 18:59:24.698913 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.699186 kubelet[1961]: I0412 18:59:24.698944 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.699186 kubelet[1961]: W0412 18:59:24.699141 1961 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b0a3723f-3f42-4b35-87d6-2f7b185fec22/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:59:24.699527 kubelet[1961]: I0412 18:59:24.699285 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.699527 kubelet[1961]: I0412 18:59:24.699310 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.699527 kubelet[1961]: I0412 18:59:24.699496 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:59:24.700872 kubelet[1961]: I0412 18:59:24.700850 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:59:24.700989 kubelet[1961]: I0412 18:59:24.700972 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:59:24.701412 kubelet[1961]: I0412 18:59:24.701361 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:59:24.701977 kubelet[1961]: I0412 18:59:24.701951 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:59:24.702601 kubelet[1961]: I0412 18:59:24.702572 1961 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-kube-api-access-t9ph7" (OuterVolumeSpecName: "kube-api-access-t9ph7") pod "b0a3723f-3f42-4b35-87d6-2f7b185fec22" (UID: "b0a3723f-3f42-4b35-87d6-2f7b185fec22"). InnerVolumeSpecName "kube-api-access-t9ph7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:59:24.799978 kubelet[1961]: I0412 18:59:24.799916 1961 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.799978 kubelet[1961]: I0412 18:59:24.799940 1961 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.799978 kubelet[1961]: I0412 18:59:24.799951 1961 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.799978 kubelet[1961]: I0412 18:59:24.799960 1961 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t9ph7\" (UniqueName: \"kubernetes.io/projected/b0a3723f-3f42-4b35-87d6-2f7b185fec22-kube-api-access-t9ph7\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.799978 kubelet[1961]: I0412 18:59:24.799970 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.799978 kubelet[1961]: I0412 18:59:24.799978 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.800137 kubelet[1961]: I0412 18:59:24.799989 1961 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.800137 
kubelet[1961]: I0412 18:59:24.799998 1961 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.800137 kubelet[1961]: I0412 18:59:24.800009 1961 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.800137 kubelet[1961]: I0412 18:59:24.800017 1961 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0a3723f-3f42-4b35-87d6-2f7b185fec22-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Apr 12 18:59:24.998592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b-rootfs.mount: Deactivated successfully. Apr 12 18:59:24.998679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-851a2107a793079307e03f3945ef9763184c0d771732c119abb84665888db40b-shm.mount: Deactivated successfully. Apr 12 18:59:24.998741 systemd[1]: var-lib-kubelet-pods-b0a3723f\x2d3f42\x2d4b35\x2d87d6\x2d2f7b185fec22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt9ph7.mount: Deactivated successfully. Apr 12 18:59:24.998788 systemd[1]: var-lib-kubelet-pods-b0a3723f\x2d3f42\x2d4b35\x2d87d6\x2d2f7b185fec22-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:59:24.998833 systemd[1]: var-lib-kubelet-pods-b0a3723f\x2d3f42\x2d4b35\x2d87d6\x2d2f7b185fec22-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:59:24.998878 systemd[1]: var-lib-kubelet-pods-b0a3723f\x2d3f42\x2d4b35\x2d87d6\x2d2f7b185fec22-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 12 18:59:25.524481 kubelet[1961]: I0412 18:59:25.524461 1961 scope.go:115] "RemoveContainer" containerID="ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb" Apr 12 18:59:25.526025 env[1126]: time="2024-04-12T18:59:25.525980684Z" level=info msg="RemoveContainer for \"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\"" Apr 12 18:59:25.527778 systemd[1]: Removed slice kubepods-burstable-podb0a3723f_3f42_4b35_87d6_2f7b185fec22.slice. Apr 12 18:59:25.529378 env[1126]: time="2024-04-12T18:59:25.529341851Z" level=info msg="RemoveContainer for \"ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb\" returns successfully" Apr 12 18:59:25.548734 kubelet[1961]: I0412 18:59:25.548695 1961 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:59:25.548904 kubelet[1961]: E0412 18:59:25.548755 1961 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0a3723f-3f42-4b35-87d6-2f7b185fec22" containerName="mount-cgroup" Apr 12 18:59:25.548904 kubelet[1961]: I0412 18:59:25.548781 1961 memory_manager.go:346] "RemoveStaleState removing state" podUID="b0a3723f-3f42-4b35-87d6-2f7b185fec22" containerName="mount-cgroup" Apr 12 18:59:25.555623 systemd[1]: Created slice kubepods-burstable-pod0fbfb08f_e7ff_4507_a7a6_9377dba900e0.slice. 
Apr 12 18:59:25.705267 kubelet[1961]: I0412 18:59:25.705232 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-lib-modules\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705267 kubelet[1961]: I0412 18:59:25.705273 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-cilium-config-path\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705267 kubelet[1961]: I0412 18:59:25.705290 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-cilium-run\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705307 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-host-proc-sys-kernel\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705324 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-etc-cni-netd\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705417 1961 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-cilium-ipsec-secrets\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705456 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpdqr\" (UniqueName: \"kubernetes.io/projected/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-kube-api-access-tpdqr\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705521 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-cilium-cgroup\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705561 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-clustermesh-secrets\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705587 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-bpf-maps\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705605 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-xtables-lock\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705626 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-hostproc\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705642 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-cni-path\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705658 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-hubble-tls\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.705692 kubelet[1961]: I0412 18:59:25.705686 1961 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fbfb08f-e7ff-4507-a7a6-9377dba900e0-host-proc-sys-net\") pod \"cilium-w25wx\" (UID: \"0fbfb08f-e7ff-4507-a7a6-9377dba900e0\") " pod="kube-system/cilium-w25wx" Apr 12 18:59:25.858216 kubelet[1961]: E0412 18:59:25.857617 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:59:25.858380 env[1126]: time="2024-04-12T18:59:25.858327432Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-w25wx,Uid:0fbfb08f-e7ff-4507-a7a6-9377dba900e0,Namespace:kube-system,Attempt:0,}" Apr 12 18:59:25.872174 env[1126]: time="2024-04-12T18:59:25.872098957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:59:25.872174 env[1126]: time="2024-04-12T18:59:25.872133502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:59:25.872174 env[1126]: time="2024-04-12T18:59:25.872143511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:59:25.872346 env[1126]: time="2024-04-12T18:59:25.872256016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc pid=3894 runtime=io.containerd.runc.v2 Apr 12 18:59:25.881216 systemd[1]: Started cri-containerd-e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc.scope. 
Apr 12 18:59:25.900292 env[1126]: time="2024-04-12T18:59:25.900245294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w25wx,Uid:0fbfb08f-e7ff-4507-a7a6-9377dba900e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\"" Apr 12 18:59:25.901058 kubelet[1961]: E0412 18:59:25.901037 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:59:25.903171 env[1126]: time="2024-04-12T18:59:25.903130285Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:59:25.914212 env[1126]: time="2024-04-12T18:59:25.914163318Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025\"" Apr 12 18:59:25.914580 env[1126]: time="2024-04-12T18:59:25.914554393Z" level=info msg="StartContainer for \"9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025\"" Apr 12 18:59:25.926598 systemd[1]: Started cri-containerd-9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025.scope. Apr 12 18:59:25.950588 env[1126]: time="2024-04-12T18:59:25.950547711Z" level=info msg="StartContainer for \"9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025\" returns successfully" Apr 12 18:59:25.954527 systemd[1]: cri-containerd-9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025.scope: Deactivated successfully. 
Apr 12 18:59:25.984694 env[1126]: time="2024-04-12T18:59:25.984627051Z" level=info msg="shim disconnected" id=9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025 Apr 12 18:59:25.984694 env[1126]: time="2024-04-12T18:59:25.984692475Z" level=warning msg="cleaning up after shim disconnected" id=9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025 namespace=k8s.io Apr 12 18:59:25.984694 env[1126]: time="2024-04-12T18:59:25.984701984Z" level=info msg="cleaning up dead shim" Apr 12 18:59:25.990833 env[1126]: time="2024-04-12T18:59:25.990807865Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n" Apr 12 18:59:26.358228 kubelet[1961]: E0412 18:59:26.358193 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:59:26.360018 kubelet[1961]: I0412 18:59:26.360002 1961 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b0a3723f-3f42-4b35-87d6-2f7b185fec22 path="/var/lib/kubelet/pods/b0a3723f-3f42-4b35-87d6-2f7b185fec22/volumes" Apr 12 18:59:26.527909 kubelet[1961]: E0412 18:59:26.527868 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 12 18:59:26.529349 env[1126]: time="2024-04-12T18:59:26.529304523Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:59:26.538434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149600064.mount: Deactivated successfully. 
Apr 12 18:59:26.541235 env[1126]: time="2024-04-12T18:59:26.541180836Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64\""
Apr 12 18:59:26.541641 env[1126]: time="2024-04-12T18:59:26.541614082Z" level=info msg="StartContainer for \"536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64\""
Apr 12 18:59:26.561640 systemd[1]: Started cri-containerd-536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64.scope.
Apr 12 18:59:26.582054 env[1126]: time="2024-04-12T18:59:26.581236939Z" level=info msg="StartContainer for \"536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64\" returns successfully"
Apr 12 18:59:26.585492 systemd[1]: cri-containerd-536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64.scope: Deactivated successfully.
Apr 12 18:59:26.604447 env[1126]: time="2024-04-12T18:59:26.604385042Z" level=info msg="shim disconnected" id=536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64
Apr 12 18:59:26.604447 env[1126]: time="2024-04-12T18:59:26.604444545Z" level=warning msg="cleaning up after shim disconnected" id=536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64 namespace=k8s.io
Apr 12 18:59:26.604585 env[1126]: time="2024-04-12T18:59:26.604453292Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:26.612169 env[1126]: time="2024-04-12T18:59:26.612071389Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4037 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:26.998463 systemd[1]: run-containerd-runc-k8s.io-536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64-runc.6wJTis.mount: Deactivated successfully.
Apr 12 18:59:26.998563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64-rootfs.mount: Deactivated successfully.
Apr 12 18:59:27.376418 kubelet[1961]: W0412 18:59:27.376324 1961 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a3723f_3f42_4b35_87d6_2f7b185fec22.slice/cri-containerd-ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb.scope WatchSource:0}: container "ee822a0341117cb0c45244cffc8cff12116af28aff8b598e4a9dcbd3d8a8b9bb" in namespace "k8s.io": not found
Apr 12 18:59:27.530581 kubelet[1961]: E0412 18:59:27.530553 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:27.537213 env[1126]: time="2024-04-12T18:59:27.537173347Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:59:27.550175 env[1126]: time="2024-04-12T18:59:27.550127925Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31\""
Apr 12 18:59:27.550586 env[1126]: time="2024-04-12T18:59:27.550547645Z" level=info msg="StartContainer for \"c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31\""
Apr 12 18:59:27.570879 systemd[1]: Started cri-containerd-c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31.scope.
Apr 12 18:59:27.594471 systemd[1]: cri-containerd-c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31.scope: Deactivated successfully.
Apr 12 18:59:27.594701 env[1126]: time="2024-04-12T18:59:27.594658072Z" level=info msg="StartContainer for \"c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31\" returns successfully"
Apr 12 18:59:27.616119 env[1126]: time="2024-04-12T18:59:27.616075390Z" level=info msg="shim disconnected" id=c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31
Apr 12 18:59:27.616119 env[1126]: time="2024-04-12T18:59:27.616118602Z" level=warning msg="cleaning up after shim disconnected" id=c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31 namespace=k8s.io
Apr 12 18:59:27.616119 env[1126]: time="2024-04-12T18:59:27.616127791Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:27.621784 env[1126]: time="2024-04-12T18:59:27.621744951Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4094 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:27.998972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31-rootfs.mount: Deactivated successfully.
Apr 12 18:59:28.405422 kubelet[1961]: E0412 18:59:28.405311 1961 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:59:28.533962 kubelet[1961]: E0412 18:59:28.533934 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:28.535962 env[1126]: time="2024-04-12T18:59:28.535916657Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:59:28.548033 env[1126]: time="2024-04-12T18:59:28.547992702Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933\""
Apr 12 18:59:28.548533 env[1126]: time="2024-04-12T18:59:28.548503636Z" level=info msg="StartContainer for \"28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933\""
Apr 12 18:59:28.563127 systemd[1]: Started cri-containerd-28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933.scope.
Apr 12 18:59:28.592001 systemd[1]: cri-containerd-28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933.scope: Deactivated successfully.
Apr 12 18:59:28.593021 env[1126]: time="2024-04-12T18:59:28.592989627Z" level=info msg="StartContainer for \"28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933\" returns successfully"
Apr 12 18:59:28.610494 env[1126]: time="2024-04-12T18:59:28.610441339Z" level=info msg="shim disconnected" id=28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933
Apr 12 18:59:28.610494 env[1126]: time="2024-04-12T18:59:28.610485734Z" level=warning msg="cleaning up after shim disconnected" id=28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933 namespace=k8s.io
Apr 12 18:59:28.610494 env[1126]: time="2024-04-12T18:59:28.610493889Z" level=info msg="cleaning up dead shim"
Apr 12 18:59:28.616105 env[1126]: time="2024-04-12T18:59:28.616076159Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:59:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4146 runtime=io.containerd.runc.v2\n"
Apr 12 18:59:28.998550 systemd[1]: run-containerd-runc-k8s.io-28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933-runc.urb8zI.mount: Deactivated successfully.
Apr 12 18:59:28.998643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933-rootfs.mount: Deactivated successfully.
Apr 12 18:59:29.537778 kubelet[1961]: E0412 18:59:29.537750 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:29.540896 env[1126]: time="2024-04-12T18:59:29.540856671Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:59:29.558018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548141537.mount: Deactivated successfully.
Apr 12 18:59:29.559524 env[1126]: time="2024-04-12T18:59:29.559479461Z" level=info msg="CreateContainer within sandbox \"e44d9566ba11918e2af27841b4af6728f4f695ca9bc220bd900a715be4c48ddc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea2f1394e9c15e37ec523af5ce2d9b0bf1908b7979bb3fcb9e15f4656ffb6403\""
Apr 12 18:59:29.561527 env[1126]: time="2024-04-12T18:59:29.560300112Z" level=info msg="StartContainer for \"ea2f1394e9c15e37ec523af5ce2d9b0bf1908b7979bb3fcb9e15f4656ffb6403\""
Apr 12 18:59:29.580128 systemd[1]: Started cri-containerd-ea2f1394e9c15e37ec523af5ce2d9b0bf1908b7979bb3fcb9e15f4656ffb6403.scope.
Apr 12 18:59:29.602192 env[1126]: time="2024-04-12T18:59:29.602154053Z" level=info msg="StartContainer for \"ea2f1394e9c15e37ec523af5ce2d9b0bf1908b7979bb3fcb9e15f4656ffb6403\" returns successfully"
Apr 12 18:59:29.820422 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 12 18:59:29.998745 systemd[1]: run-containerd-runc-k8s.io-ea2f1394e9c15e37ec523af5ce2d9b0bf1908b7979bb3fcb9e15f4656ffb6403-runc.MouqXO.mount: Deactivated successfully.
Apr 12 18:59:30.484339 kubelet[1961]: W0412 18:59:30.484224 1961 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbfb08f_e7ff_4507_a7a6_9377dba900e0.slice/cri-containerd-9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025.scope WatchSource:0}: task 9fcd537d00704aadc047c89135db0322c314540d29983a54cefeba98d2e81025 not found: not found
Apr 12 18:59:30.542021 kubelet[1961]: E0412 18:59:30.541995 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:30.572344 kubelet[1961]: I0412 18:59:30.572291 1961 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-12 18:59:30.572247981 +0000 UTC m=+92.294134634 LastTransitionTime:2024-04-12 18:59:30.572247981 +0000 UTC m=+92.294134634 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Apr 12 18:59:31.858845 kubelet[1961]: E0412 18:59:31.858816 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:32.252820 systemd-networkd[1021]: lxc_health: Link UP
Apr 12 18:59:32.263712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:59:32.262481 systemd-networkd[1021]: lxc_health: Gained carrier
Apr 12 18:59:32.284474 systemd[1]: run-containerd-runc-k8s.io-ea2f1394e9c15e37ec523af5ce2d9b0bf1908b7979bb3fcb9e15f4656ffb6403-runc.NB5axk.mount: Deactivated successfully.
Apr 12 18:59:33.590786 kubelet[1961]: W0412 18:59:33.590741 1961 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbfb08f_e7ff_4507_a7a6_9377dba900e0.slice/cri-containerd-536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64.scope WatchSource:0}: task 536bc0a5dd208e53a48b34c34dbbe42cbd31252a3d4e38dc3c8e8633ea311b64 not found: not found
Apr 12 18:59:33.860139 kubelet[1961]: E0412 18:59:33.860035 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:33.874106 kubelet[1961]: I0412 18:59:33.873738 1961 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w25wx" podStartSLOduration=8.8737085 podCreationTimestamp="2024-04-12 18:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:59:30.552345148 +0000 UTC m=+92.274231801" watchObservedRunningTime="2024-04-12 18:59:33.8737085 +0000 UTC m=+95.595595143"
Apr 12 18:59:33.875489 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Apr 12 18:59:34.358136 kubelet[1961]: E0412 18:59:34.358102 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:34.550016 kubelet[1961]: E0412 18:59:34.549987 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:36.696303 kubelet[1961]: W0412 18:59:36.696265 1961 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbfb08f_e7ff_4507_a7a6_9377dba900e0.slice/cri-containerd-c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31.scope WatchSource:0}: task c7117fb0e5e4f95a4f88fae457eaa1a71a59c882ff2b26ed8b437d7f21351f31 not found: not found
Apr 12 18:59:38.570711 sshd[3759]: pam_unix(sshd:session): session closed for user core
Apr 12 18:59:38.572982 systemd[1]: sshd@25-10.0.0.142:22-10.0.0.1:45162.service: Deactivated successfully.
Apr 12 18:59:38.573649 systemd[1]: session-26.scope: Deactivated successfully.
Apr 12 18:59:38.574168 systemd-logind[1112]: Session 26 logged out. Waiting for processes to exit.
Apr 12 18:59:38.574786 systemd-logind[1112]: Removed session 26.
Apr 12 18:59:39.358708 kubelet[1961]: E0412 18:59:39.358671 1961 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 12 18:59:39.803071 kubelet[1961]: W0412 18:59:39.803034 1961 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbfb08f_e7ff_4507_a7a6_9377dba900e0.slice/cri-containerd-28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933.scope WatchSource:0}: task 28aa410c8c05cae5f0643f3d35e54606c5d645adf3bbae8ccccb9f1161db0933 not found: not found