Jul 2 07:51:33.794742 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:51:33.794761 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:51:33.794769 kernel: BIOS-provided physical RAM map: Jul 2 07:51:33.794774 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 07:51:33.794779 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 07:51:33.794785 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 07:51:33.794791 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Jul 2 07:51:33.794797 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Jul 2 07:51:33.794803 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 2 07:51:33.794809 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 07:51:33.794814 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 2 07:51:33.794820 kernel: NX (Execute Disable) protection: active Jul 2 07:51:33.794825 kernel: SMBIOS 2.8 present. Jul 2 07:51:33.794831 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 2 07:51:33.794839 kernel: Hypervisor detected: KVM Jul 2 07:51:33.794845 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:51:33.794851 kernel: kvm-clock: cpu 0, msr 70192001, primary cpu clock Jul 2 07:51:33.794856 kernel: kvm-clock: using sched offset of 2362459205 cycles Jul 2 07:51:33.794863 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:51:33.794869 kernel: tsc: Detected 2794.748 MHz processor Jul 2 07:51:33.794875 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:51:33.794881 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:51:33.794888 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Jul 2 07:51:33.794895 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:51:33.794901 kernel: Using GB pages for direct mapping Jul 2 07:51:33.794907 kernel: ACPI: Early table checksum verification disabled Jul 2 07:51:33.794913 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Jul 2 07:51:33.794919 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:51:33.794925 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:51:33.794931 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:51:33.794937 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 2 07:51:33.794943 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:51:33.794950 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:51:33.794956 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:51:33.794962 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Jul 2 07:51:33.794968 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cfe0040-0x9cfe1a78] Jul 2 07:51:33.794974 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 2 07:51:33.794980 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Jul 2 07:51:33.794986 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Jul 2 07:51:33.794992 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Jul 2 07:51:33.795001 kernel: No NUMA configuration found Jul 2 07:51:33.795008 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Jul 2 07:51:33.795014 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Jul 2 07:51:33.795021 kernel: Zone ranges: Jul 2 07:51:33.795027 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:51:33.795033 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Jul 2 07:51:33.795041 kernel: Normal empty Jul 2 07:51:33.795047 kernel: Movable zone start for each node Jul 2 07:51:33.795053 kernel: Early memory node ranges Jul 2 07:51:33.795060 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 07:51:33.795066 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Jul 2 07:51:33.795073 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Jul 2 07:51:33.795079 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:51:33.795085 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 07:51:33.795092 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Jul 2 07:51:33.795099 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 2 07:51:33.795106 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:51:33.795112 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:51:33.795118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 07:51:33.795125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:51:33.795131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:51:33.795138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:51:33.795144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:51:33.795151 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:51:33.795158 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 07:51:33.795164 kernel: TSC deadline timer available Jul 2 07:51:33.795171 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 07:51:33.795177 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 07:51:33.795184 kernel: kvm-guest: setup PV sched yield Jul 2 07:51:33.795190 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Jul 2 07:51:33.795196 kernel: Booting paravirtualized kernel on KVM Jul 2 07:51:33.795203 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:51:33.795210 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 2 07:51:33.795216 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Jul 2 07:51:33.795223 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 2 07:51:33.795229 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 07:51:33.795236 kernel: kvm-guest: setup async PF for cpu 0 Jul 2 07:51:33.795242 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Jul 2 07:51:33.795248 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:51:33.795255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 
07:51:33.795261 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733 Jul 2 07:51:33.795268 kernel: Policy zone: DMA32 Jul 2 07:51:33.795275 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:51:33.795283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:51:33.795290 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:51:33.795297 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 07:51:33.795303 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:51:33.795310 kernel: Memory: 2436704K/2571756K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 134792K reserved, 0K cma-reserved) Jul 2 07:51:33.795316 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 07:51:33.795323 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:51:33.795329 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:51:33.795337 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:51:33.795343 kernel: rcu: RCU event tracing is enabled. Jul 2 07:51:33.795350 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 07:51:33.795357 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:51:33.795363 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:51:33.795370 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:51:33.795376 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 07:51:33.795383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 07:51:33.795389 kernel: random: crng init done Jul 2 07:51:33.795397 kernel: Console: colour VGA+ 80x25 Jul 2 07:51:33.795403 kernel: printk: console [ttyS0] enabled Jul 2 07:51:33.795410 kernel: ACPI: Core revision 20210730 Jul 2 07:51:33.795416 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 07:51:33.795423 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:51:33.795429 kernel: x2apic enabled Jul 2 07:51:33.795435 kernel: Switched APIC routing to physical x2apic. Jul 2 07:51:33.795442 kernel: kvm-guest: setup PV IPIs Jul 2 07:51:33.795448 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 07:51:33.795456 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 07:51:33.795462 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 2 07:51:33.795469 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 07:51:33.795475 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 07:51:33.795482 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 07:51:33.795488 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:51:33.795495 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:51:33.795501 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:51:33.795508 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:51:33.795519 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 07:51:33.795526 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 07:51:33.795543 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:51:33.795551 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:51:33.795558 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:51:33.795565 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:51:33.795571 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:51:33.795578 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:51:33.795585 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:51:33.795593 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:51:33.795600 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:51:33.795606 kernel: LSM: Security Framework initializing Jul 2 07:51:33.795613 kernel: SELinux: Initializing. Jul 2 07:51:33.795620 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:51:33.795627 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:51:33.795634 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 07:51:33.795642 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 07:51:33.795648 kernel: ... version: 0 Jul 2 07:51:33.795655 kernel: ... bit width: 48 Jul 2 07:51:33.795662 kernel: ... generic registers: 6 Jul 2 07:51:33.795669 kernel: ... value mask: 0000ffffffffffff Jul 2 07:51:33.795676 kernel: ... max period: 00007fffffffffff Jul 2 07:51:33.795683 kernel: ... fixed-purpose events: 0 Jul 2 07:51:33.795689 kernel: ... event mask: 000000000000003f Jul 2 07:51:33.795696 kernel: signal: max sigframe size: 1776 Jul 2 07:51:33.795703 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:51:33.795721 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:51:33.795728 kernel: x86: Booting SMP configuration: Jul 2 07:51:33.795735 kernel: .... 
node #0, CPUs: #1 Jul 2 07:51:33.795741 kernel: kvm-clock: cpu 1, msr 70192041, secondary cpu clock Jul 2 07:51:33.795748 kernel: kvm-guest: setup async PF for cpu 1 Jul 2 07:51:33.795755 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Jul 2 07:51:33.795762 kernel: #2 Jul 2 07:51:33.795769 kernel: kvm-clock: cpu 2, msr 70192081, secondary cpu clock Jul 2 07:51:33.795776 kernel: kvm-guest: setup async PF for cpu 2 Jul 2 07:51:33.795783 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Jul 2 07:51:33.795790 kernel: #3 Jul 2 07:51:33.795797 kernel: kvm-clock: cpu 3, msr 701920c1, secondary cpu clock Jul 2 07:51:33.795803 kernel: kvm-guest: setup async PF for cpu 3 Jul 2 07:51:33.795810 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Jul 2 07:51:33.795817 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 07:51:33.795824 kernel: smpboot: Max logical packages: 1 Jul 2 07:51:33.795831 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 07:51:33.795837 kernel: devtmpfs: initialized Jul 2 07:51:33.795846 kernel: x86/mm: Memory block size: 128MB Jul 2 07:51:33.795853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:51:33.795860 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 07:51:33.795866 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:51:33.795873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:51:33.795880 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:51:33.795887 kernel: audit: type=2000 audit(1719906694.351:1): state=initialized audit_enabled=0 res=1 Jul 2 07:51:33.795893 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:51:33.795900 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:51:33.795908 kernel: cpuidle: using governor menu Jul 2 07:51:33.795915 kernel: ACPI: bus type PCI registered Jul 2 07:51:33.795921 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:51:33.795928 kernel: dca service started, version 1.12.1 Jul 2 07:51:33.795935 kernel: PCI: Using configuration type 1 for base access Jul 2 07:51:33.795942 kernel: PCI: Using configuration type 1 for extended access Jul 2 07:51:33.795949 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:51:33.795956 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:51:33.795963 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:51:33.795971 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:51:33.795977 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:51:33.795984 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:51:33.795991 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:51:33.795998 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:51:33.796004 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:51:33.796011 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:51:33.796018 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:51:33.796025 kernel: ACPI: Interpreter enabled Jul 2 07:51:33.796032 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:51:33.796039 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:51:33.796046 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:51:33.796053 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 07:51:33.796060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:51:33.796172 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:51:33.796185 kernel: acpiphp: Slot [3] registered Jul 2 07:51:33.796191 kernel: acpiphp: Slot [4] registered Jul 2 07:51:33.796199 kernel: acpiphp: Slot [5] registered Jul 2 07:51:33.796206 kernel: acpiphp: Slot [6] registered Jul 2 07:51:33.796213 kernel: acpiphp: Slot [7] registered Jul 2 07:51:33.796220 kernel: acpiphp: Slot [8] registered Jul 2 07:51:33.796227 kernel: acpiphp: Slot [9] registered Jul 2 07:51:33.796233 kernel: acpiphp: Slot [10] registered Jul 2 07:51:33.796240 kernel: acpiphp: Slot [11] registered Jul 2 07:51:33.796247 kernel: acpiphp: Slot [12] registered Jul 2 07:51:33.796253 kernel: acpiphp: Slot [13] registered Jul 2 07:51:33.796260 kernel: acpiphp: Slot [14] registered Jul 2 07:51:33.796268 kernel: acpiphp: Slot [15] registered Jul 2 07:51:33.796274 kernel: acpiphp: Slot [16] registered Jul 2 07:51:33.796281 kernel: acpiphp: Slot [17] registered Jul 2 07:51:33.796288 kernel: acpiphp: Slot [18] registered Jul 2 07:51:33.796294 kernel: acpiphp: Slot [19] registered Jul 2 07:51:33.796301 kernel: acpiphp: Slot [20] registered Jul 2 07:51:33.796307 kernel: acpiphp: Slot [21] registered Jul 2 07:51:33.796314 kernel: acpiphp: Slot [22] registered Jul 2 07:51:33.796321 kernel: acpiphp: Slot [23] registered Jul 2 07:51:33.796329 kernel: acpiphp: Slot [24] registered Jul 2 07:51:33.796335 kernel: acpiphp: Slot [25] registered Jul 2 07:51:33.796342 kernel: acpiphp: Slot [26] registered Jul 2 07:51:33.796349 kernel: acpiphp: Slot [27] registered Jul 2 07:51:33.796355 kernel: acpiphp: Slot [28] registered Jul 2 07:51:33.796362 kernel: acpiphp: Slot [29] registered Jul 2 07:51:33.796369 kernel: acpiphp: Slot [30] registered Jul 2 07:51:33.796375 kernel: acpiphp: Slot [31] registered Jul 2 07:51:33.796382 kernel: PCI host bridge to bus 0000:00 Jul 2 07:51:33.796459 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:51:33.796525 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:51:33.796632 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:51:33.796693 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 07:51:33.796765 kernel: pci_bus 0000:00: 
root bus resource [mem 0x100000000-0x17fffffff window] Jul 2 07:51:33.796825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:51:33.796914 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:51:33.796994 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 07:51:33.797073 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 07:51:33.797145 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 07:51:33.797216 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 07:51:33.797286 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 07:51:33.797357 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 07:51:33.797426 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 07:51:33.797505 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:51:33.797592 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 2 07:51:33.797660 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 2 07:51:33.797745 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 07:51:33.797812 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 2 07:51:33.797880 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 2 07:51:33.797950 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 2 07:51:33.798017 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 07:51:33.798091 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:51:33.798162 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:51:33.798234 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 2 07:51:33.798303 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 2 07:51:33.798378 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:51:33.798448 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 07:51:33.798517 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 2 07:51:33.798600 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 2 07:51:33.798674 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:51:33.798754 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 07:51:33.798823 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 2 07:51:33.798892 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 2 07:51:33.798962 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 2 07:51:33.798971 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:51:33.798978 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:51:33.798985 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:51:33.798992 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:51:33.798999 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:51:33.799006 kernel: iommu: Default domain type: Translated Jul 2 07:51:33.799013 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:51:33.799081 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 07:51:33.799151 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 07:51:33.799219 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 07:51:33.799228 kernel: 
vgaarb: loaded Jul 2 07:51:33.799235 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:51:33.799242 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:51:33.799249 kernel: PTP clock support registered Jul 2 07:51:33.799256 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:51:33.799263 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:51:33.799271 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 07:51:33.799278 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Jul 2 07:51:33.799285 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 07:51:33.799292 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 07:51:33.799299 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:51:33.799305 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:51:33.799312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:51:33.799319 kernel: pnp: PnP ACPI init Jul 2 07:51:33.799404 kernel: pnp 00:02: [dma 2] Jul 2 07:51:33.799416 kernel: pnp: PnP ACPI: found 6 devices Jul 2 07:51:33.799423 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:51:33.799430 kernel: NET: Registered PF_INET protocol family Jul 2 07:51:33.799437 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:51:33.799444 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 07:51:33.799451 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:51:33.799458 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 07:51:33.799465 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 2 07:51:33.799473 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 07:51:33.799480 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:51:33.799487 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:51:33.799494 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:51:33.799500 kernel: NET: Registered PF_XDP protocol family Jul 2 07:51:33.799591 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:51:33.799654 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:51:33.799722 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:51:33.799783 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 07:51:33.799849 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 2 07:51:33.799920 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 07:51:33.799989 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:51:33.800057 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Jul 2 07:51:33.800067 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:51:33.800074 kernel: Initialise system trusted keyrings Jul 2 07:51:33.800081 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 07:51:33.800088 kernel: Key type asymmetric registered Jul 2 07:51:33.800096 kernel: Asymmetric key parser 'x509' registered Jul 2 07:51:33.800103 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:51:33.800110 kernel: io scheduler mq-deadline registered Jul 2 07:51:33.800117 kernel: io scheduler kyber registered Jul 2 07:51:33.800124 kernel: io scheduler bfq registered Jul 2 07:51:33.800131 
kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:51:33.800138 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:51:33.800145 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:51:33.800152 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:51:33.800160 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:51:33.800167 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:51:33.800174 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:51:33.800181 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:51:33.800188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:51:33.800195 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:51:33.800266 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 07:51:33.800331 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 07:51:33.800397 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:51:33 UTC (1719906693) Jul 2 07:51:33.800460 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 07:51:33.800470 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:51:33.800477 kernel: Segment Routing with IPv6 Jul 2 07:51:33.800484 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:51:33.800491 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:51:33.800498 kernel: Key type dns_resolver registered Jul 2 07:51:33.800505 kernel: IPI shorthand broadcast: enabled Jul 2 07:51:33.800512 kernel: sched_clock: Marking stable (431258947, 97141998)->(541320358, -12919413) Jul 2 07:51:33.800520 kernel: registered taskstats version 1 Jul 2 07:51:33.800527 kernel: Loading compiled-in X.509 certificates Jul 2 07:51:33.800545 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:51:33.800552 kernel: Key type .fscrypt registered Jul 2 07:51:33.800559 kernel: Key type fscrypt-provisioning registered Jul 2 07:51:33.800566 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 07:51:33.800573 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:51:33.800579 kernel: ima: No architecture policies found Jul 2 07:51:33.800587 kernel: clk: Disabling unused clocks Jul 2 07:51:33.800595 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:51:33.800602 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:51:33.800609 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:51:33.800616 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:51:33.800623 kernel: Run /init as init process Jul 2 07:51:33.800630 kernel: with arguments: Jul 2 07:51:33.800637 kernel: /init Jul 2 07:51:33.800652 kernel: with environment: Jul 2 07:51:33.800660 kernel: HOME=/ Jul 2 07:51:33.800668 kernel: TERM=linux Jul 2 07:51:33.800676 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:51:33.800685 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:51:33.800694 systemd[1]: Detected virtualization kvm. Jul 2 07:51:33.800702 systemd[1]: Detected architecture x86-64. Jul 2 07:51:33.800716 systemd[1]: Running in initrd. 
Jul 2 07:51:33.800724 systemd[1]: No hostname configured, using default hostname. Jul 2 07:51:33.800732 systemd[1]: Hostname set to <localhost>. Jul 2 07:51:33.800740 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:51:33.800748 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:51:33.800755 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:51:33.800763 systemd[1]: Reached target cryptsetup.target. Jul 2 07:51:33.800770 systemd[1]: Reached target paths.target. Jul 2 07:51:33.800777 systemd[1]: Reached target slices.target. Jul 2 07:51:33.800785 systemd[1]: Reached target swap.target. Jul 2 07:51:33.800793 systemd[1]: Reached target timers.target. Jul 2 07:51:33.800801 systemd[1]: Listening on iscsid.socket. Jul 2 07:51:33.800809 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:51:33.800816 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:51:33.800824 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:51:33.800832 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:51:33.800839 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:51:33.800847 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:51:33.800855 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:51:33.800864 systemd[1]: Reached target sockets.target. Jul 2 07:51:33.800871 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:51:33.800879 systemd[1]: Finished network-cleanup.service. Jul 2 07:51:33.800886 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:51:33.800894 systemd[1]: Starting systemd-journald.service... Jul 2 07:51:33.800903 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:51:33.800910 systemd[1]: Starting systemd-resolved.service... Jul 2 07:51:33.800918 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:51:33.800925 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:51:33.800933 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:51:33.800941 kernel: audit: type=1130 audit(1719906693.793:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.800949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:51:33.800959 systemd-journald[198]: Journal started Jul 2 07:51:33.800996 systemd-journald[198]: Runtime Journal (/run/log/journal/2d012d3a924d42d1b167cd9ea1fca648) is 6.0M, max 48.5M, 42.5M free. Jul 2 07:51:33.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.801846 systemd[1]: Started systemd-journald.service. Jul 2 07:51:33.802359 systemd-modules-load[199]: Inserted module 'overlay' Jul 2 07:51:33.852314 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:51:33.852346 kernel: audit: type=1130 audit(1719906693.834:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.852358 kernel: audit: type=1130 audit(1719906693.837:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 07:51:33.852368 kernel: audit: type=1130 audit(1719906693.841:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.852378 kernel: audit: type=1130 audit(1719906693.845:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.852388 kernel: Bridge firewalling registered Jul 2 07:51:33.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.815209 systemd-resolved[200]: Positive Trust Anchors: Jul 2 07:51:33.815217 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:51:33.815244 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:51:33.817364 systemd-resolved[200]: Defaulting to hostname 'linux'. Jul 2 07:51:33.835524 systemd[1]: Started systemd-resolved.service. Jul 2 07:51:33.838490 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:51:33.842458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:51:33.845968 systemd[1]: Reached target nss-lookup.target. Jul 2 07:51:33.850668 systemd-modules-load[199]: Inserted module 'br_netfilter' Jul 2 07:51:33.851377 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:51:33.866138 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:51:33.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.868365 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:51:33.872957 kernel: audit: type=1130 audit(1719906693.867:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:51:33.872986 kernel: SCSI subsystem initialized Jul 2 07:51:33.877089 dracut-cmdline[215]: dracut-dracut-053 Jul 2 07:51:33.879154 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:51:33.887696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:51:33.887744 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:51:33.887754 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:51:33.890349 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 2 07:51:33.891092 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:51:33.895766 kernel: audit: type=1130 audit(1719906693.891:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.894915 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:51:33.902388 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:51:33.906711 kernel: audit: type=1130 audit(1719906693.902:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:33.943567 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:51:33.959561 kernel: iscsi: registered transport (tcp) Jul 2 07:51:33.980569 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:51:33.980617 kernel: QLogic iSCSI HBA Driver Jul 2 07:51:34.001843 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:51:34.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:34.004292 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:51:34.007717 kernel: audit: type=1130 audit(1719906694.002:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:51:34.048556 kernel: raid6: avx2x4 gen() 30963 MB/s Jul 2 07:51:34.065550 kernel: raid6: avx2x4 xor() 8431 MB/s Jul 2 07:51:34.082555 kernel: raid6: avx2x2 gen() 32548 MB/s Jul 2 07:51:34.099552 kernel: raid6: avx2x2 xor() 19229 MB/s Jul 2 07:51:34.116550 kernel: raid6: avx2x1 gen() 26491 MB/s Jul 2 07:51:34.133549 kernel: raid6: avx2x1 xor() 15339 MB/s Jul 2 07:51:34.150550 kernel: raid6: sse2x4 gen() 14800 MB/s Jul 2 07:51:34.167550 kernel: raid6: sse2x4 xor() 7650 MB/s Jul 2 07:51:34.184557 kernel: raid6: sse2x2 gen() 16381 MB/s Jul 2 07:51:34.201551 kernel: raid6: sse2x2 xor() 9835 MB/s Jul 2 07:51:34.218551 kernel: raid6: sse2x1 gen() 12585 MB/s Jul 2 07:51:34.235942 kernel: raid6: sse2x1 xor() 7812 MB/s Jul 2 07:51:34.235954 kernel: raid6: using algorithm avx2x2 gen() 32548 MB/s Jul 2 07:51:34.235963 kernel: raid6: .... xor() 19229 MB/s, rmw enabled Jul 2 07:51:34.236666 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:51:34.248553 kernel: xor: automatically using best checksumming function avx Jul 2 07:51:34.336558 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:51:34.344735 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:51:34.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:34.345000 audit: BPF prog-id=7 op=LOAD Jul 2 07:51:34.345000 audit: BPF prog-id=8 op=LOAD Jul 2 07:51:34.347009 systemd[1]: Starting systemd-udevd.service... Jul 2 07:51:34.358781 systemd-udevd[397]: Using default interface naming scheme 'v252'. Jul 2 07:51:34.362569 systemd[1]: Started systemd-udevd.service. Jul 2 07:51:34.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:34.365115 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:51:34.375812 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jul 2 07:51:34.401150 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:51:34.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:34.402699 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:51:34.437501 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:51:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:34.473551 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:51:34.480732 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 07:51:34.485829 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:51:34.485852 kernel: AES CTR mode by8 optimization enabled Jul 2 07:51:34.485862 kernel: libata version 3.00 loaded. Jul 2 07:51:34.493001 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:51:34.493023 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:51:34.493137 kernel: GPT:9289727 != 19775487 Jul 2 07:51:34.493146 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 2 07:51:34.493157 kernel: scsi host0: ata_piix Jul 2 07:51:34.493248 kernel: GPT:9289727 != 19775487 Jul 2 07:51:34.493257 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:51:34.493265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:51:34.493799 kernel: scsi host1: ata_piix Jul 2 07:51:34.496663 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 07:51:34.496696 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 07:51:34.511554 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Jul 2 07:51:34.514671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:51:34.538249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:51:34.547494 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:51:34.551446 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:51:34.555333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:51:34.556985 systemd[1]: Starting disk-uuid.service... Jul 2 07:51:34.565341 disk-uuid[514]: Primary Header is updated. Jul 2 07:51:34.565341 disk-uuid[514]: Secondary Entries is updated. Jul 2 07:51:34.565341 disk-uuid[514]: Secondary Header is updated. Jul 2 07:51:34.568929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:51:34.572549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:51:34.575551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:51:34.650596 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 07:51:34.652581 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 07:51:34.681870 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 07:51:34.682154 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:51:34.699593 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 07:51:35.573232 disk-uuid[515]: The operation has completed successfully. Jul 2 07:51:35.574510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:51:35.595230 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:51:35.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.595310 systemd[1]: Finished disk-uuid.service. Jul 2 07:51:35.599441 systemd[1]: Starting verity-setup.service... Jul 2 07:51:35.611563 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 07:51:35.629986 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:51:35.632896 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:51:35.634605 systemd[1]: Finished verity-setup.service. Jul 2 07:51:35.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.692399 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:51:35.693806 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:51:35.692958 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Jul 2 07:51:35.693819 systemd[1]: Starting ignition-setup.service... Jul 2 07:51:35.695891 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:51:35.706633 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:51:35.706676 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:51:35.706686 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:51:35.714997 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:51:35.723118 systemd[1]: Finished ignition-setup.service. Jul 2 07:51:35.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.724359 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:51:35.757269 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:51:35.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.758000 audit: BPF prog-id=9 op=LOAD Jul 2 07:51:35.759718 systemd[1]: Starting systemd-networkd.service... Jul 2 07:51:35.764153 ignition[647]: Ignition 2.14.0 Jul 2 07:51:35.764166 ignition[647]: Stage: fetch-offline Jul 2 07:51:35.764284 ignition[647]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:51:35.764296 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:51:35.764431 ignition[647]: parsed url from cmdline: "" Jul 2 07:51:35.764435 ignition[647]: no config URL provided Jul 2 07:51:35.764442 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:51:35.764450 ignition[647]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:51:35.764472 ignition[647]: op(1): [started] loading QEMU firmware config module Jul 2 07:51:35.764478 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 07:51:35.768501 ignition[647]: op(1): [finished] loading QEMU firmware config module Jul 2 07:51:35.788066 systemd-networkd[707]: lo: Link UP Jul 2 07:51:35.788075 systemd-networkd[707]: lo: Gained carrier Jul 2 07:51:35.790018 systemd-networkd[707]: Enumeration completed Jul 2 07:51:35.790893 systemd[1]: Started systemd-networkd.service. Jul 2 07:51:35.792758 systemd[1]: Reached target network.target. Jul 2 07:51:35.792841 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:51:35.794417 systemd-networkd[707]: eth0: Link UP Jul 2 07:51:35.794420 systemd-networkd[707]: eth0: Gained carrier Jul 2 07:51:35.794697 systemd[1]: Starting iscsiuio.service... Jul 2 07:51:35.800555 systemd[1]: Started iscsiuio.service. Jul 2 07:51:35.802748 systemd[1]: Starting iscsid.service... Jul 2 07:51:35.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.806619 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:51:35.806619 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:51:35.806619 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:51:35.806619 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:51:35.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.817066 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:51:35.817066 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:51:35.808268 systemd[1]: Started iscsid.service. Jul 2 07:51:35.821126 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:51:35.830341 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:51:35.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.832375 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:51:35.834254 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:51:35.836184 systemd[1]: Reached target remote-fs.target. Jul 2 07:51:35.838455 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:51:35.845693 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:51:35.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.849840 ignition[647]: parsing config with SHA512: 5a6251b170efb3062be37adb704547c21f82af56f54e2e5c8ecc4b3ede9e1d81d02428793998585896025b5084bd796da8aa589c20c3befdbdcf7542bfc09319 Jul 2 07:51:35.855897 unknown[647]: fetched base config from "system" Jul 2 07:51:35.855907 unknown[647]: fetched user config from "qemu" Jul 2 07:51:35.856354 ignition[647]: fetch-offline: fetch-offline passed Jul 2 07:51:35.856396 ignition[647]: Ignition finished successfully Jul 2 07:51:35.860346 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:51:35.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.860924 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 07:51:35.861720 systemd[1]: Starting ignition-kargs.service... Jul 2 07:51:35.864593 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:51:35.872744 ignition[728]: Ignition 2.14.0 Jul 2 07:51:35.872754 ignition[728]: Stage: kargs Jul 2 07:51:35.872850 ignition[728]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:51:35.872865 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:51:35.873993 ignition[728]: kargs: kargs passed Jul 2 07:51:35.874027 ignition[728]: Ignition finished successfully Jul 2 07:51:35.878288 systemd[1]: Finished ignition-kargs.service.
Jul 2 07:51:35.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.880626 systemd[1]: Starting ignition-disks.service... Jul 2 07:51:35.888194 ignition[734]: Ignition 2.14.0 Jul 2 07:51:35.888202 ignition[734]: Stage: disks Jul 2 07:51:35.888276 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:51:35.888284 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:51:35.889273 ignition[734]: disks: disks passed Jul 2 07:51:35.889302 ignition[734]: Ignition finished successfully Jul 2 07:51:35.893255 systemd[1]: Finished ignition-disks.service. Jul 2 07:51:35.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.894881 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:51:35.895350 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:51:35.897055 systemd[1]: Reached target local-fs.target. Jul 2 07:51:35.899598 systemd[1]: Reached target sysinit.target. Jul 2 07:51:35.900073 systemd[1]: Reached target basic.target. Jul 2 07:51:35.902603 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:51:35.913613 systemd-fsck[742]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 07:51:35.919072 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:51:35.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.922014 systemd[1]: Mounting sysroot.mount... Jul 2 07:51:35.930366 systemd[1]: Mounted sysroot.mount. Jul 2 07:51:35.930835 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:51:35.931739 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:51:35.933443 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:51:35.934139 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:51:35.934174 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:51:35.934196 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:51:35.941416 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:51:35.942855 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:51:35.948373 initrd-setup-root[752]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:51:35.952419 initrd-setup-root[760]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:51:35.956199 initrd-setup-root[768]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:51:35.959784 initrd-setup-root[776]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:51:35.983709 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:51:35.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.984737 systemd[1]: Starting ignition-mount.service... Jul 2 07:51:35.986341 systemd[1]: Starting sysroot-boot.service... Jul 2 07:51:35.989767 bash[793]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 2 07:51:35.996905 ignition[794]: INFO : Ignition 2.14.0 Jul 2 07:51:35.996905 ignition[794]: INFO : Stage: mount Jul 2 07:51:35.999237 ignition[794]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:51:35.999237 ignition[794]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:51:35.999237 ignition[794]: INFO : mount: mount passed Jul 2 07:51:35.999237 ignition[794]: INFO : Ignition finished successfully Jul 2 07:51:36.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:36.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:35.999298 systemd[1]: Finished ignition-mount.service. Jul 2 07:51:36.003730 systemd[1]: Finished sysroot-boot.service. Jul 2 07:51:36.643523 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:51:36.650302 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Jul 2 07:51:36.650328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:51:36.650343 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:51:36.651882 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:51:36.654870 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:51:36.657213 systemd[1]: Starting ignition-files.service... Jul 2 07:51:36.669955 ignition[823]: INFO : Ignition 2.14.0 Jul 2 07:51:36.669955 ignition[823]: INFO : Stage: files Jul 2 07:51:36.671573 ignition[823]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:51:36.671573 ignition[823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:51:36.674615 ignition[823]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:51:36.675979 ignition[823]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:51:36.675979 ignition[823]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:51:36.679166 ignition[823]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:51:36.680598 ignition[823]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:51:36.682401 unknown[823]: wrote ssh authorized keys file for user: core Jul 2 07:51:36.683440 ignition[823]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:51:36.685035 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 07:51:36.686728 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 07:51:36.688378 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:51:36.690197 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:51:36.746003 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 07:51:36.802589 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 
07:51:36.804658 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:51:36.804658 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 07:51:37.247497 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 2 07:51:37.304702 systemd-networkd[707]: eth0: Gained IPv6LL Jul 2 07:51:37.354146 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:51:37.356027 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:51:37.357716 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:51:37.357716 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:51:37.361066 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:51:37.362722 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:51:37.364423 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:51:37.366147 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:51:37.367867 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:51:37.369637 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:51:37.371381 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:51:37.373078 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:51:37.375468 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:51:37.377868 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:51:37.379904 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 07:51:37.675397 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 2 07:51:38.160764 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:51:38.160764 ignition[823]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:51:38.165503 ignition[823]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:51:38.193747 ignition[823]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:51:38.193747 ignition[823]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:51:38.193747 ignition[823]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:51:38.193747 ignition[823]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:51:38.193747 ignition[823]: INFO : files: files passed Jul 2 07:51:38.193747 ignition[823]: INFO : Ignition finished successfully Jul 2 07:51:38.219426 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 2 07:51:38.219448 kernel: audit: type=1130 audit(1719906698.196:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.219459 kernel: audit: type=1130 audit(1719906698.207:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.219470 kernel: audit: type=1130 audit(1719906698.211:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:51:38.219479 kernel: audit: type=1131 audit(1719906698.211:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.194221 systemd[1]: Finished ignition-files.service. Jul 2 07:51:38.197099 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:51:38.202387 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:51:38.224113 initrd-setup-root-after-ignition[846]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 07:51:38.203215 systemd[1]: Starting ignition-quench.service... Jul 2 07:51:38.226462 initrd-setup-root-after-ignition[848]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:51:38.204906 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:51:38.207621 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:51:38.207682 systemd[1]: Finished ignition-quench.service. Jul 2 07:51:38.212105 systemd[1]: Reached target ignition-complete.target. Jul 2 07:51:38.239038 kernel: audit: type=1130 audit(1719906698.231:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.239055 kernel: audit: type=1131 audit(1719906698.231:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.219964 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:51:38.230226 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:51:38.230293 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:51:38.231829 systemd[1]: Reached target initrd-fs.target. Jul 2 07:51:38.239048 systemd[1]: Reached target initrd.target. Jul 2 07:51:38.239853 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
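The files stage above wrote the Helm and cilium-cli archives, several manifests, the containerd drop-in, the unit presets, and finally /sysroot/etc/.ignition-result.json. After switch-root those paths appear under /, so a plausible way to review the outcome from the running system, assuming the journal identifier and unit names shown in the log, is:

    # Summary JSON Ignition wrote at the end of the run (path taken from the log)
    cat /etc/.ignition-result.json
    # Initrd Ignition messages for this boot
    journalctl -b -t ignition
    # Presets applied by the files stage: prepare-helm enabled, coreos-metadata disabled
    systemctl is-enabled prepare-helm.service
    systemctl is-enabled coreos-metadata.service
    # containerd.service rendered together with the 10-use-cgroupfs.conf drop-in
    systemctl cat containerd.service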
Jul 2 07:51:38.240442 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:51:38.249423 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:51:38.254494 kernel: audit: type=1130 audit(1719906698.249:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.250837 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:51:38.258776 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:51:38.297802 kernel: audit: type=1131 audit(1719906698.258:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.297828 kernel: audit: type=1131 audit(1719906698.264:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.297838 kernel: audit: type=1131 audit(1719906698.264:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:51:38.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.259256 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:51:38.259428 systemd[1]: Stopped target timers.target. Jul 2 07:51:38.300226 ignition[863]: INFO : Ignition 2.14.0 Jul 2 07:51:38.300226 ignition[863]: INFO : Stage: umount Jul 2 07:51:38.300226 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:51:38.300226 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:51:38.300226 ignition[863]: INFO : umount: umount passed Jul 2 07:51:38.300226 ignition[863]: INFO : Ignition finished successfully Jul 2 07:51:38.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.259598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:51:38.308000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:51:38.259683 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:51:38.259996 systemd[1]: Stopped target initrd.target. Jul 2 07:51:38.263078 systemd[1]: Stopped target basic.target. Jul 2 07:51:38.263234 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:51:38.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.263399 systemd[1]: Stopped target ignition-diskful.target. 
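The SERVICE_START and SERVICE_STOP lines above are kernel audit records emitted as the initrd units are torn down. If the audit userspace tools happen to be installed, the same records can be pulled back out of the audit trail; the time filter here is only an example:

    # Interpret today's service start/stop audit events
    ausearch -m SERVICE_START,SERVICE_STOP --start today -i
    # Current audit status (enabled flag, backlog, lost records)
    auditctl -s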
Jul 2 07:51:38.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.263573 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:51:38.263742 systemd[1]: Stopped target remote-fs.target. Jul 2 07:51:38.263903 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:51:38.264076 systemd[1]: Stopped target sysinit.target. Jul 2 07:51:38.264241 systemd[1]: Stopped target local-fs.target. Jul 2 07:51:38.264405 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:51:38.264593 systemd[1]: Stopped target swap.target. Jul 2 07:51:38.264718 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:51:38.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.264799 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:51:38.264960 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:51:38.268036 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:51:38.268116 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:51:38.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.268246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:51:38.268326 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:51:38.271418 systemd[1]: Stopped target paths.target. Jul 2 07:51:38.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.271491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:51:38.279569 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:51:38.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.279810 systemd[1]: Stopped target slices.target. Jul 2 07:51:38.279962 systemd[1]: Stopped target sockets.target. Jul 2 07:51:38.280117 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:51:38.280179 systemd[1]: Closed iscsid.socket. Jul 2 07:51:38.280311 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:51:38.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:51:38.280393 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:51:38.280820 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:51:38.280899 systemd[1]: Stopped ignition-files.service. Jul 2 07:51:38.281765 systemd[1]: Stopping ignition-mount.service... Jul 2 07:51:38.282235 systemd[1]: Stopping iscsiuio.service... Jul 2 07:51:38.282396 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:51:38.282504 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:51:38.283429 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:51:38.283768 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:51:38.283880 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:51:38.284047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:51:38.284149 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:51:38.286942 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:51:38.287012 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:51:38.287814 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:51:38.287892 systemd[1]: Stopped iscsiuio.service. Jul 2 07:51:38.288290 systemd[1]: Stopped target network.target. Jul 2 07:51:38.288405 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:51:38.288428 systemd[1]: Closed iscsiuio.socket. Jul 2 07:51:38.288806 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:51:38.289024 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:51:38.291303 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:51:38.291366 systemd[1]: Stopped ignition-mount.service. Jul 2 07:51:38.291497 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:51:38.291527 systemd[1]: Stopped ignition-disks.service. Jul 2 07:51:38.291645 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:51:38.291673 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:51:38.291832 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:51:38.291856 systemd[1]: Stopped ignition-setup.service. Jul 2 07:51:38.297286 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:51:38.301253 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:51:38.301327 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:51:38.301604 systemd-networkd[707]: eth0: DHCPv6 lease lost Jul 2 07:51:38.372000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:51:38.305978 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:51:38.306047 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:51:38.309165 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:51:38.309188 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:51:38.311155 systemd[1]: Stopping network-cleanup.service... Jul 2 07:51:38.312008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:51:38.312043 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:51:38.313707 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:51:38.313737 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:51:38.315339 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:51:38.315368 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:51:38.317311 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:51:38.321575 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
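The teardown above stops the initrd's systemd-networkd and systemd-resolved instances, and eth0 loses its DHCPv6 lease in the process; the host's own networkd takes over after switch-root. Assuming the interface keeps the name eth0, its state can be rechecked with:

    # Link and address state of eth0 under the post-switch-root networkd
    networkctl status eth0
    # networkd messages for this boot, including the lease handling noted above
    journalctl -b -u systemd-networkd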
Jul 2 07:51:38.323416 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:51:38.323490 systemd[1]: Stopped network-cleanup.service. Jul 2 07:51:38.327474 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:51:38.327628 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:51:38.330127 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:51:38.330157 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:51:38.331954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:51:38.331978 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:51:38.333575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:51:38.333613 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:51:38.335127 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:51:38.335156 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:51:38.336977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:51:38.337005 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:51:38.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.339176 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:51:38.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:38.340258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:51:38.340293 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:51:38.343692 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:51:38.343755 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:51:38.405000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:51:38.405000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:51:38.405000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:51:38.396611 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:51:38.396681 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:51:38.398432 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:51:38.399993 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:51:38.400026 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:51:38.409000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:51:38.409000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:51:38.401013 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:51:38.405903 systemd[1]: Switching root. Jul 2 07:51:38.426084 iscsid[714]: iscsid shutting down. Jul 2 07:51:38.426816 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 2 07:51:38.426861 systemd-journald[198]: Journal stopped Jul 2 07:51:41.091428 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:51:41.091472 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 07:51:41.091483 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:51:41.091493 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:51:41.091502 kernel: SELinux: policy capability open_perms=1 Jul 2 07:51:41.091526 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:51:41.091554 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:51:41.091564 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:51:41.091573 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:51:41.091582 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:51:41.091592 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:51:41.091602 systemd[1]: Successfully loaded SELinux policy in 37.435ms. Jul 2 07:51:41.091622 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.429ms. Jul 2 07:51:41.091636 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:51:41.091647 systemd[1]: Detected virtualization kvm. Jul 2 07:51:41.091659 systemd[1]: Detected architecture x86-64. Jul 2 07:51:41.091671 systemd[1]: Detected first boot. Jul 2 07:51:41.091681 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:51:41.091693 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:51:41.091703 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:51:41.091714 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:51:41.091726 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:51:41.091738 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:51:41.091749 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:51:41.091759 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:51:41.091769 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:51:41.091781 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:51:41.091791 systemd[1]: Created slice system-getty.slice. Jul 2 07:51:41.091802 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:51:41.091812 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:51:41.091822 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:51:41.091833 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:51:41.091843 systemd[1]: Created slice user.slice. Jul 2 07:51:41.091853 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:51:41.091862 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:51:41.091874 systemd[1]: Set up automount boot.automount. Jul 2 07:51:41.091884 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:51:41.091895 systemd[1]: Reached target integritysetup.target. Jul 2 07:51:41.091906 systemd[1]: Reached target remote-cryptsetup.target. 
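The messages above show the SELinux policy being loaded on first boot, with a few unknown classes allowed and one unmappable context left unmapped. Whether the policy ended up permissive or enforcing can be confirmed afterwards, assuming the SELinux utilities are shipped on the image:

    # Mode, policy name and status
    sestatus
    getenforce
    # Re-read the kernel's SELinux policy capability lines from the ring buffer
    dmesg | grep -i selinux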
Jul 2 07:51:41.091917 systemd[1]: Reached target remote-fs.target. Jul 2 07:51:41.091928 systemd[1]: Reached target slices.target. Jul 2 07:51:41.091938 systemd[1]: Reached target swap.target. Jul 2 07:51:41.091948 systemd[1]: Reached target torcx.target. Jul 2 07:51:41.091959 systemd[1]: Reached target veritysetup.target. Jul 2 07:51:41.091969 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:51:41.091980 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:51:41.091990 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:51:41.092000 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:51:41.092010 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:51:41.092021 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:51:41.092032 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:51:41.092042 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:51:41.092052 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:51:41.092064 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:51:41.092074 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:51:41.092086 systemd[1]: Mounting media.mount... Jul 2 07:51:41.092097 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:41.092107 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:51:41.092119 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:51:41.092129 systemd[1]: Mounting tmp.mount... Jul 2 07:51:41.092139 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:51:41.092151 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:51:41.092162 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:51:41.092171 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:51:41.092182 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:51:41.092193 systemd[1]: Starting modprobe@drm.service... Jul 2 07:51:41.092203 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:51:41.092213 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:51:41.092223 systemd[1]: Starting modprobe@loop.service... Jul 2 07:51:41.092233 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:51:41.092245 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 07:51:41.092255 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 07:51:41.092266 systemd[1]: Starting systemd-journald.service... Jul 2 07:51:41.092275 kernel: fuse: init (API version 7.34) Jul 2 07:51:41.092285 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:51:41.092295 kernel: loop: module loaded Jul 2 07:51:41.092306 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:51:41.092316 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:51:41.092327 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:51:41.092339 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:41.092350 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:51:41.092360 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:51:41.092370 systemd[1]: Mounted media.mount. Jul 2 07:51:41.092380 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:51:41.092390 systemd[1]: Mounted sys-kernel-tracing.mount. 
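Several modprobe@ template instances are started above; each one simply loads the module named by its instance, and fuse and loop show up as loaded a few lines later. A quick way to confirm both the unit results and the modules, using the instance names from the log, is:

    # Unit results for the template instances started above
    systemctl status modprobe@fuse.service modprobe@loop.service
    # The corresponding kernel modules
    lsmod | grep -E '^(fuse|loop)'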
Jul 2 07:51:41.092400 systemd[1]: Mounted tmp.mount. Jul 2 07:51:41.092410 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:51:41.092423 systemd-journald[1010]: Journal started Jul 2 07:51:41.092462 systemd-journald[1010]: Runtime Journal (/run/log/journal/2d012d3a924d42d1b167cd9ea1fca648) is 6.0M, max 48.5M, 42.5M free. Jul 2 07:51:41.003000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:51:41.003000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 07:51:41.089000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:51:41.089000 audit[1010]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff3a5b8200 a2=4000 a3=7fff3a5b829c items=0 ppid=1 pid=1010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:51:41.089000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:51:41.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.094300 systemd[1]: Started systemd-journald.service. Jul 2 07:51:41.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.095962 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:51:41.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.096312 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:51:41.097320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:51:41.097584 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:51:41.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.098574 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:51:41.098802 systemd[1]: Finished modprobe@drm.service. Jul 2 07:51:41.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:51:41.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.099981 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:51:41.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.100951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:51:41.101181 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:51:41.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.102195 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:51:41.102409 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:51:41.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.103373 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:51:41.103771 systemd[1]: Finished modprobe@loop.service. Jul 2 07:51:41.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.104985 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:51:41.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.106373 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:51:41.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.107804 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:51:41.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.108913 systemd[1]: Reached target network-pre.target. Jul 2 07:51:41.110797 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Jul 2 07:51:41.112504 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:51:41.113333 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:51:41.114526 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:51:41.118595 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:51:41.119472 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:51:41.120468 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:51:41.121287 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:51:41.122131 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:51:41.124212 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:51:41.127476 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:51:41.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.128431 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:51:41.129373 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:51:41.134137 systemd-journald[1010]: Time spent on flushing to /var/log/journal/2d012d3a924d42d1b167cd9ea1fca648 is 13.650ms for 1039 entries. Jul 2 07:51:41.134137 systemd-journald[1010]: System Journal (/var/log/journal/2d012d3a924d42d1b167cd9ea1fca648) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:51:41.165421 systemd-journald[1010]: Received client request to flush runtime journal. Jul 2 07:51:41.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.131382 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:51:41.165867 udevadm[1047]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 07:51:41.132942 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:51:41.134122 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:51:41.137585 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:51:41.140326 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:51:41.142315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:51:41.158986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:51:41.166247 systemd[1]: Finished systemd-journal-flush.service. 
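journald reports the runtime journal size above and then flushes it to the persistent location under /var/log/journal once systemd-journal-flush.service runs. Disk usage against the limits quoted in the log can be checked with (the machine-id directory name is the one from the log):

    # Combined disk usage of runtime and persistent journals
    journalctl --disk-usage
    # Persistent journal files created by the flush
    ls -lh /var/log/journal/2d012d3a924d42d1b167cd9ea1fca648/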
Jul 2 07:51:41.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.533351 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:51:41.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.535306 systemd[1]: Starting systemd-udevd.service... Jul 2 07:51:41.551191 systemd-udevd[1060]: Using default interface naming scheme 'v252'. Jul 2 07:51:41.562801 systemd[1]: Started systemd-udevd.service. Jul 2 07:51:41.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.565859 systemd[1]: Starting systemd-networkd.service... Jul 2 07:51:41.578758 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:51:41.584592 systemd[1]: Found device dev-ttyS0.device. Jul 2 07:51:41.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.609908 systemd[1]: Started systemd-userdbd.service. Jul 2 07:51:41.615557 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:51:41.621560 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:51:41.623000 audit[1062]: AVC avc: denied { confidentiality } for pid=1062 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:51:41.623000 audit[1062]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564f493a1190 a1=3207c a2=7f247710cbc5 a3=5 items=108 ppid=1060 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:51:41.623000 audit: CWD cwd="/" Jul 2 07:51:41.623000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=1 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=2 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=3 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=4 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=5 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=6 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=7 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=8 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=9 name=(null) inode=15398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=10 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=11 name=(null) inode=15399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=12 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=13 name=(null) inode=15400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=14 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=15 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=16 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=17 name=(null) inode=15402 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=18 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=19 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=20 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=21 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: 
PATH item=22 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=23 name=(null) inode=15405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=24 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=25 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=26 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=27 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=28 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=29 name=(null) inode=15408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=30 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=31 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=32 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=33 name=(null) inode=15410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=34 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=35 name=(null) inode=15411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=36 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=37 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=38 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=39 name=(null) inode=15413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=40 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=41 name=(null) inode=15414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=42 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=43 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=44 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=45 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=46 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=47 name=(null) inode=15417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=48 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=49 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=50 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=51 name=(null) inode=15419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=52 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=53 name=(null) inode=15420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=55 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=56 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=57 name=(null) inode=15422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=58 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=59 name=(null) inode=15423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=60 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=61 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=62 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=63 name=(null) inode=15425 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=64 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=65 name=(null) inode=15426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=66 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=67 name=(null) inode=15427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=68 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=69 name=(null) inode=15428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=70 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=71 name=(null) inode=15429 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=72 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=73 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=74 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=75 name=(null) inode=15431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=76 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=77 name=(null) inode=15432 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=78 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=79 name=(null) inode=15433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=80 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=81 name=(null) inode=15434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=82 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=83 name=(null) inode=15435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=84 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=85 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=86 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=87 name=(null) inode=15437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=88 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=89 name=(null) inode=15438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=90 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=91 name=(null) inode=15439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=92 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=93 name=(null) inode=15440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=94 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=95 name=(null) inode=15441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=96 name=(null) inode=15421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=97 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=98 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=99 name=(null) inode=15443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=100 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=101 name=(null) inode=15444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=102 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=103 name=(null) inode=15445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH 
item=104 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=105 name=(null) inode=15446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=106 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PATH item=107 name=(null) inode=15447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:51:41.623000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:51:41.654554 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 07:51:41.656355 systemd-networkd[1072]: lo: Link UP Jul 2 07:51:41.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.656364 systemd-networkd[1072]: lo: Gained carrier Jul 2 07:51:41.656795 systemd-networkd[1072]: Enumeration completed Jul 2 07:51:41.656869 systemd-networkd[1072]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:51:41.657767 systemd-networkd[1072]: eth0: Link UP Jul 2 07:51:41.657770 systemd-networkd[1072]: eth0: Gained carrier Jul 2 07:51:41.659287 systemd[1]: Started systemd-networkd.service. Jul 2 07:51:41.673696 systemd-networkd[1072]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:51:41.678655 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:51:41.684214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:51:41.746557 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:51:41.758113 kernel: kvm: Nested Virtualization enabled Jul 2 07:51:41.758167 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:51:41.758182 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:51:41.758807 kernel: SVM: Virtual GIF supported Jul 2 07:51:41.775571 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:51:41.797027 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:51:41.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.799517 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:51:41.807688 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:51:41.836479 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:51:41.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.837540 systemd[1]: Reached target cryptsetup.target. Jul 2 07:51:41.839368 systemd[1]: Starting lvm2-activation.service... Jul 2 07:51:41.842437 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 2 07:51:41.868081 systemd[1]: Finished lvm2-activation.service. Jul 2 07:51:41.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.869008 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:51:41.869874 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:51:41.869893 systemd[1]: Reached target local-fs.target. Jul 2 07:51:41.870714 systemd[1]: Reached target machines.target. Jul 2 07:51:41.872386 systemd[1]: Starting ldconfig.service... Jul 2 07:51:41.873337 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:51:41.873371 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:41.874236 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:51:41.875946 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:51:41.877833 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:51:41.879829 systemd[1]: Starting systemd-sysext.service... Jul 2 07:51:41.881018 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1102 (bootctl) Jul 2 07:51:41.881921 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:51:41.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.888797 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:51:41.890036 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:51:41.892389 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:51:41.892596 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:51:41.901546 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 07:51:41.919598 systemd-fsck[1113]: fsck.fat 4.2 (2021-01-31) Jul 2 07:51:41.919598 systemd-fsck[1113]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 07:51:41.921011 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:51:41.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:41.924452 systemd[1]: Mounting boot.mount... Jul 2 07:51:41.935313 systemd[1]: Mounted boot.mount. Jul 2 07:51:42.156580 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:51:42.156925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:51:42.157513 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:51:42.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.160660 systemd[1]: Finished systemd-boot-update.service. 
Jul 2 07:51:42.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.174561 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 07:51:42.178927 (sd-sysext)[1123]: Using extensions 'kubernetes'. Jul 2 07:51:42.179210 (sd-sysext)[1123]: Merged extensions into '/usr'. Jul 2 07:51:42.195032 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.196400 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:51:42.197669 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.198724 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:51:42.200526 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:51:42.202611 systemd[1]: Starting modprobe@loop.service... Jul 2 07:51:42.203789 ldconfig[1101]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:51:42.203399 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.203499 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:42.203603 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.206177 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:51:42.207375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:51:42.207551 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:51:42.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.208861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:51:42.208985 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:51:42.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.210420 systemd[1]: Finished ldconfig.service. Jul 2 07:51:42.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.211793 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:51:42.212003 systemd[1]: Finished modprobe@loop.service. 
Jul 2 07:51:42.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.213679 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:51:42.213776 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.215116 systemd[1]: Finished systemd-sysext.service. Jul 2 07:51:42.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.217463 systemd[1]: Starting ensure-sysext.service... Jul 2 07:51:42.219623 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:51:42.224359 systemd[1]: Reloading. Jul 2 07:51:42.228238 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:51:42.228852 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:51:42.230192 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:51:42.278122 /usr/lib/systemd/system-generators/torcx-generator[1158]: time="2024-07-02T07:51:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:51:42.278161 /usr/lib/systemd/system-generators/torcx-generator[1158]: time="2024-07-02T07:51:42Z" level=info msg="torcx already run" Jul 2 07:51:42.344374 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:51:42.344390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:51:42.360896 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:51:42.413981 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:51:42.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.417462 systemd[1]: Starting audit-rules.service... Jul 2 07:51:42.419204 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:51:42.421064 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:51:42.423319 systemd[1]: Starting systemd-resolved.service... Jul 2 07:51:42.425300 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:51:42.427440 systemd[1]: Starting systemd-update-utmp.service... 
Jul 2 07:51:42.429627 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:51:42.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.431000 audit[1219]: SYSTEM_BOOT pid=1219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.435957 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.436179 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.437257 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:51:42.439002 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:51:42.440858 systemd[1]: Starting modprobe@loop.service... Jul 2 07:51:42.441694 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.441892 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:42.442044 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:51:42.442153 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.443887 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:51:42.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:51:42.445509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:51:42.445701 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:51:42.445000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:51:42.445000 audit[1234]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd44158070 a2=420 a3=0 items=0 ppid=1207 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:51:42.446146 augenrules[1234]: No rules Jul 2 07:51:42.445000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:51:42.447150 systemd[1]: Finished audit-rules.service. Jul 2 07:51:42.448421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:51:42.448563 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:51:42.449988 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:51:42.450131 systemd[1]: Finished modprobe@loop.service. Jul 2 07:51:42.452297 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:51:42.455832 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.456049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 07:51:42.457743 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:51:42.459450 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:51:42.461461 systemd[1]: Starting modprobe@loop.service... Jul 2 07:51:42.462299 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.462403 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:42.463722 systemd[1]: Starting systemd-update-done.service... Jul 2 07:51:42.464772 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:51:42.464866 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.465955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:51:42.466103 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:51:42.467528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:51:42.467685 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:51:42.468950 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:51:42.469156 systemd[1]: Finished modprobe@loop.service. Jul 2 07:51:42.470421 systemd[1]: Finished systemd-update-done.service. Jul 2 07:51:42.471712 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:51:42.471804 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.474378 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.474826 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.475757 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:51:42.477336 systemd[1]: Starting modprobe@drm.service... Jul 2 07:51:42.479028 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:51:42.480777 systemd[1]: Starting modprobe@loop.service... Jul 2 07:51:42.481792 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.481918 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:42.483612 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:51:42.484666 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:51:42.484784 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:51:42.486023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:51:42.486185 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:51:42.487569 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:51:42.487705 systemd[1]: Finished modprobe@drm.service. Jul 2 07:51:42.489050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:51:42.489220 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:51:42.490804 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 2 07:51:42.494062 systemd[1]: Finished modprobe@loop.service. Jul 2 07:51:42.495857 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:51:42.495983 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:51:42.497237 systemd[1]: Finished ensure-sysext.service. Jul 2 07:51:42.504365 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:51:42.505167 systemd-resolved[1212]: Positive Trust Anchors: Jul 2 07:51:42.505431 systemd-resolved[1212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:51:42.505469 systemd-resolved[1212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:51:42.505592 systemd[1]: Reached target time-set.target. Jul 2 07:51:43.795483 systemd-timesyncd[1214]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:51:43.795533 systemd-timesyncd[1214]: Initial clock synchronization to Tue 2024-07-02 07:51:43.795410 UTC. Jul 2 07:51:43.802538 systemd-resolved[1212]: Defaulting to hostname 'linux'. Jul 2 07:51:43.803846 systemd[1]: Started systemd-resolved.service. Jul 2 07:51:43.804748 systemd[1]: Reached target network.target. Jul 2 07:51:43.805599 systemd[1]: Reached target nss-lookup.target. Jul 2 07:51:43.806645 systemd[1]: Reached target sysinit.target. Jul 2 07:51:43.807874 systemd[1]: Started motdgen.path. Jul 2 07:51:43.808804 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:51:43.810458 systemd[1]: Started logrotate.timer. Jul 2 07:51:43.811899 systemd[1]: Started mdadm.timer. Jul 2 07:51:43.812831 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:51:43.814529 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:51:43.814560 systemd[1]: Reached target paths.target. Jul 2 07:51:43.815527 systemd[1]: Reached target timers.target. Jul 2 07:51:43.816833 systemd[1]: Listening on dbus.socket. Jul 2 07:51:43.818792 systemd[1]: Starting docker.socket... Jul 2 07:51:43.820372 systemd[1]: Listening on sshd.socket. Jul 2 07:51:43.821277 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:43.821520 systemd[1]: Listening on docker.socket. Jul 2 07:51:43.822377 systemd[1]: Reached target sockets.target. Jul 2 07:51:43.823255 systemd[1]: Reached target basic.target. Jul 2 07:51:43.824204 systemd[1]: System is tainted: cgroupsv1 Jul 2 07:51:43.824242 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:51:43.824259 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:51:43.825118 systemd[1]: Starting containerd.service... Jul 2 07:51:43.826790 systemd[1]: Starting dbus.service... Jul 2 07:51:43.828463 systemd[1]: Starting enable-oem-cloudinit.service... 
Jul 2 07:51:43.830423 systemd[1]: Starting extend-filesystems.service... Jul 2 07:51:43.831911 jq[1270]: false Jul 2 07:51:43.831378 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:51:43.832615 systemd[1]: Starting motdgen.service... Jul 2 07:51:43.834696 systemd[1]: Starting prepare-helm.service... Jul 2 07:51:43.837053 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:51:43.839154 systemd[1]: Starting sshd-keygen.service... Jul 2 07:51:43.842095 systemd[1]: Starting systemd-logind.service... Jul 2 07:51:43.843072 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:51:43.843136 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:51:43.844632 systemd[1]: Starting update-engine.service... Jul 2 07:51:43.848057 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:51:43.850546 jq[1290]: true Jul 2 07:51:43.850808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:51:43.852686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:51:43.853713 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:51:43.853926 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:51:43.858115 systemd[1]: Started dbus.service. Jul 2 07:51:43.856154 dbus-daemon[1269]: [system] SELinux support is enabled Jul 2 07:51:43.861444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:51:43.861477 systemd[1]: Reached target system-config.target. Jul 2 07:51:43.862695 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:51:43.862715 systemd[1]: Reached target user-config.target. Jul 2 07:51:43.864174 jq[1295]: true Jul 2 07:51:43.864915 tar[1294]: linux-amd64/helm Jul 2 07:51:43.867593 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:51:43.867786 systemd[1]: Finished motdgen.service. Jul 2 07:51:43.872583 extend-filesystems[1271]: Found loop1 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found sr0 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda1 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda2 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda3 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found usr Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda4 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda6 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda7 Jul 2 07:51:43.873575 extend-filesystems[1271]: Found vda9 Jul 2 07:51:43.873575 extend-filesystems[1271]: Checking size of /dev/vda9 Jul 2 07:51:43.900515 env[1298]: time="2024-07-02T07:51:43.885758801Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:51:43.909402 update_engine[1286]: I0702 07:51:43.909273 1286 main.cc:92] Flatcar Update Engine starting Jul 2 07:51:43.912074 systemd[1]: Started update-engine.service. 
Jul 2 07:51:43.916198 env[1298]: time="2024-07-02T07:51:43.914959653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:51:43.916249 update_engine[1286]: I0702 07:51:43.912139 1286 update_check_scheduler.cc:74] Next update check in 4m3s Jul 2 07:51:43.924250 systemd[1]: Started locksmithd.service. Jul 2 07:51:43.928103 systemd[1]: Created slice system-sshd.slice. Jul 2 07:51:43.939646 env[1298]: time="2024-07-02T07:51:43.930809538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:51:43.941219 systemd-logind[1283]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:51:43.941247 systemd-logind[1283]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:51:43.942463 env[1298]: time="2024-07-02T07:51:43.942421444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:51:43.942507 env[1298]: time="2024-07-02T07:51:43.942465166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:51:43.942735 env[1298]: time="2024-07-02T07:51:43.942708573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:51:43.942778 env[1298]: time="2024-07-02T07:51:43.942734702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:51:43.942778 env[1298]: time="2024-07-02T07:51:43.942749800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:51:43.942778 env[1298]: time="2024-07-02T07:51:43.942761001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:51:43.942863 env[1298]: time="2024-07-02T07:51:43.942841281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:51:43.943138 env[1298]: time="2024-07-02T07:51:43.943115416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:51:43.944190 systemd-logind[1283]: New seat seat0. Jul 2 07:51:43.951219 systemd[1]: Started systemd-logind.service. Jul 2 07:51:43.952096 env[1298]: time="2024-07-02T07:51:43.952060110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:51:43.952141 env[1298]: time="2024-07-02T07:51:43.952096959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 07:51:43.952196 env[1298]: time="2024-07-02T07:51:43.952173913Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:51:43.952196 env[1298]: time="2024-07-02T07:51:43.952192679Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:51:43.963581 extend-filesystems[1271]: Resized partition /dev/vda9 Jul 2 07:51:43.983213 extend-filesystems[1334]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:51:44.044066 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:51:44.099227 systemd-networkd[1072]: eth0: Gained IPv6LL Jul 2 07:51:44.101430 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:51:44.105768 systemd[1]: Reached target network-online.target. Jul 2 07:51:44.122002 systemd[1]: Starting kubelet.service... Jul 2 07:51:44.185150 locksmithd[1323]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:51:44.201071 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:51:44.221206 extend-filesystems[1334]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:51:44.221206 extend-filesystems[1334]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:51:44.221206 extend-filesystems[1334]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:51:44.232702 extend-filesystems[1271]: Resized filesystem in /dev/vda9 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231354677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231435679Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231451348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231505840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231526369Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231565342Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231581362Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231598214Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231698191Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231716726Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231731824Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231747514Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.231937901Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:51:44.235799 env[1298]: time="2024-07-02T07:51:44.232074236Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:51:44.236238 bash[1327]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:51:44.223389 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232654815Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232704288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232718154Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232766665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232779840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232791692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232802302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232813292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232824744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232834943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232845232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232858036Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.232991016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.233004100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.236409 env[1298]: time="2024-07-02T07:51:44.233014961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.225991 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:51:44.236885 env[1298]: time="2024-07-02T07:51:44.233024989Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 2 07:51:44.236885 env[1298]: time="2024-07-02T07:51:44.233055006Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:51:44.236885 env[1298]: time="2024-07-02T07:51:44.233067389Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:51:44.236885 env[1298]: time="2024-07-02T07:51:44.233084471Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:51:44.236885 env[1298]: time="2024-07-02T07:51:44.233119487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:51:44.226251 systemd[1]: Finished extend-filesystems.service. Jul 2 07:51:44.234164 systemd[1]: Started containerd.service. Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.233288624Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.233336143Z" level=info msg="Connect containerd service" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.233368494Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.233795064Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no 
network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.233989628Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.234019394Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.234074187Z" level=info msg="containerd successfully booted in 0.348906s" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.236348482Z" level=info msg="Start subscribing containerd event" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.236385341Z" level=info msg="Start recovering state" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.236430396Z" level=info msg="Start event monitor" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.236452167Z" level=info msg="Start snapshots syncer" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.236459681Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:51:44.237142 env[1298]: time="2024-07-02T07:51:44.236465772Z" level=info msg="Start streaming server" Jul 2 07:51:44.269119 sshd_keygen[1296]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:51:44.286693 systemd[1]: Finished sshd-keygen.service. Jul 2 07:51:44.289376 systemd[1]: Starting issuegen.service... Jul 2 07:51:44.291195 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:56098.service. Jul 2 07:51:44.296305 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:51:44.296488 systemd[1]: Finished issuegen.service. Jul 2 07:51:44.298411 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:51:44.305736 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:51:44.307998 systemd[1]: Started getty@tty1.service. Jul 2 07:51:44.309880 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:51:44.310855 systemd[1]: Reached target getty.target. Jul 2 07:51:44.316251 tar[1294]: linux-amd64/LICENSE Jul 2 07:51:44.316344 tar[1294]: linux-amd64/README.md Jul 2 07:51:44.320149 systemd[1]: Finished prepare-helm.service. Jul 2 07:51:44.340206 sshd[1355]: Accepted publickey for core from 10.0.0.1 port 56098 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:44.341635 sshd[1355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:44.348726 systemd[1]: Created slice user-500.slice. Jul 2 07:51:44.350402 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:51:44.353079 systemd-logind[1283]: New session 1 of user core. Jul 2 07:51:44.358603 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:51:44.360848 systemd[1]: Starting user@500.service... Jul 2 07:51:44.364124 (systemd)[1370]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:44.428594 systemd[1370]: Queued start job for default target default.target. Jul 2 07:51:44.428800 systemd[1370]: Reached target paths.target. Jul 2 07:51:44.428816 systemd[1370]: Reached target sockets.target. Jul 2 07:51:44.428829 systemd[1370]: Reached target timers.target. Jul 2 07:51:44.428840 systemd[1370]: Reached target basic.target. Jul 2 07:51:44.428879 systemd[1370]: Reached target default.target. Jul 2 07:51:44.428899 systemd[1370]: Startup finished in 60ms. Jul 2 07:51:44.429249 systemd[1]: Started user@500.service. Jul 2 07:51:44.431193 systemd[1]: Started session-1.scope. 
Jul 2 07:51:44.486008 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:55048.service. Jul 2 07:51:44.525565 sshd[1379]: Accepted publickey for core from 10.0.0.1 port 55048 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:44.527019 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:44.530742 systemd-logind[1283]: New session 2 of user core. Jul 2 07:51:44.531433 systemd[1]: Started session-2.scope. Jul 2 07:51:44.586997 sshd[1379]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:44.589218 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:55054.service. Jul 2 07:51:44.591064 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:55048.service: Deactivated successfully. Jul 2 07:51:44.592117 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:51:44.592665 systemd-logind[1283]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:51:44.593943 systemd-logind[1283]: Removed session 2. Jul 2 07:51:44.626848 sshd[1384]: Accepted publickey for core from 10.0.0.1 port 55054 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:44.628060 sshd[1384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:44.631443 systemd-logind[1283]: New session 3 of user core. Jul 2 07:51:44.632138 systemd[1]: Started session-3.scope. Jul 2 07:51:44.687127 sshd[1384]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:44.689471 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:55054.service: Deactivated successfully. Jul 2 07:51:44.690497 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:51:44.690628 systemd-logind[1283]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:51:44.691703 systemd-logind[1283]: Removed session 3. Jul 2 07:51:44.794260 systemd[1]: Started kubelet.service. Jul 2 07:51:44.795507 systemd[1]: Reached target multi-user.target. Jul 2 07:51:44.797511 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:51:44.803896 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:51:44.804131 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:51:44.808140 systemd[1]: Startup finished in 5.389s (kernel) + 5.053s (userspace) = 10.443s. Jul 2 07:51:45.290953 kubelet[1398]: E0702 07:51:45.290865 1398 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:51:45.293015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:51:45.293159 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:51:54.690413 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:38590.service. Jul 2 07:51:54.728286 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 38590 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:54.729238 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:54.732428 systemd-logind[1283]: New session 4 of user core. Jul 2 07:51:54.733189 systemd[1]: Started session-4.scope. Jul 2 07:51:54.785408 sshd[1409]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:54.787730 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:38602.service. 
Jul 2 07:51:54.788180 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:38590.service: Deactivated successfully. Jul 2 07:51:54.788993 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:51:54.789008 systemd-logind[1283]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:51:54.789895 systemd-logind[1283]: Removed session 4. Jul 2 07:51:54.825272 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 38602 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:54.826564 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:54.830196 systemd-logind[1283]: New session 5 of user core. Jul 2 07:51:54.831123 systemd[1]: Started session-5.scope. Jul 2 07:51:54.880353 sshd[1415]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:54.883105 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:38606.service. Jul 2 07:51:54.883641 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:38602.service: Deactivated successfully. Jul 2 07:51:54.884524 systemd-logind[1283]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:51:54.884618 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:51:54.885549 systemd-logind[1283]: Removed session 5. Jul 2 07:51:54.921278 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 38606 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:54.922399 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:54.925628 systemd-logind[1283]: New session 6 of user core. Jul 2 07:51:54.926341 systemd[1]: Started session-6.scope. Jul 2 07:51:54.981197 sshd[1422]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:54.983241 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:38614.service. Jul 2 07:51:54.984129 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:38606.service: Deactivated successfully. Jul 2 07:51:54.985085 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:51:54.985972 systemd-logind[1283]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:51:54.986921 systemd-logind[1283]: Removed session 6. Jul 2 07:51:55.020159 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 38614 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:51:55.020950 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:51:55.024222 systemd-logind[1283]: New session 7 of user core. Jul 2 07:51:55.024986 systemd[1]: Started session-7.scope. Jul 2 07:51:55.078073 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:51:55.078261 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:51:55.097340 systemd[1]: Starting docker.service... 
Jul 2 07:51:55.130483 env[1445]: time="2024-07-02T07:51:55.130427499Z" level=info msg="Starting up" Jul 2 07:51:55.131848 env[1445]: time="2024-07-02T07:51:55.131811615Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:51:55.131848 env[1445]: time="2024-07-02T07:51:55.131838034Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:51:55.131923 env[1445]: time="2024-07-02T07:51:55.131861659Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:51:55.131923 env[1445]: time="2024-07-02T07:51:55.131870966Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:51:55.135879 env[1445]: time="2024-07-02T07:51:55.135830732Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:51:55.135879 env[1445]: time="2024-07-02T07:51:55.135863163Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:51:55.136048 env[1445]: time="2024-07-02T07:51:55.135887619Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:51:55.136048 env[1445]: time="2024-07-02T07:51:55.135903428Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:51:55.456775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:51:55.457109 systemd[1]: Stopped kubelet.service. Jul 2 07:51:55.458373 systemd[1]: Starting kubelet.service... Jul 2 07:51:55.759768 systemd[1]: Started kubelet.service. Jul 2 07:51:55.814161 kubelet[1463]: E0702 07:51:55.814102 1463 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:51:55.817225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:51:55.817357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:51:56.102751 env[1445]: time="2024-07-02T07:51:56.102595751Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 2 07:51:56.102751 env[1445]: time="2024-07-02T07:51:56.102620348Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 2 07:51:56.102751 env[1445]: time="2024-07-02T07:51:56.102738519Z" level=info msg="Loading containers: start." Jul 2 07:51:56.206065 kernel: Initializing XFRM netlink socket Jul 2 07:51:56.232304 env[1445]: time="2024-07-02T07:51:56.232264667Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:51:56.277782 systemd-networkd[1072]: docker0: Link UP Jul 2 07:51:56.286226 env[1445]: time="2024-07-02T07:51:56.286193353Z" level=info msg="Loading containers: done." 
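The resolver lines dockerd prints above ("parsed scheme: \"unix\"", "ClientConn switching balancer to \"pick_first\"") come from it dialing its embedded containerd over a local unix socket with gRPC. A minimal sketch of that kind of dial, assuming the stock google.golang.org/grpc client and an illustrative socket path:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dialing a "unix://" target produces the same resolver/balancer log lines
	// seen above: the unix scheme is resolved, then pick_first is selected.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx,
		"unix:///run/containerd/containerd.sock", // assumed socket path for illustration
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(), // wait until the socket is actually reachable
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
	log.Printf("connected, state=%s", conn.GetState())
}
```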
Jul 2 07:51:56.297540 env[1445]: time="2024-07-02T07:51:56.297495278Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:51:56.297697 env[1445]: time="2024-07-02T07:51:56.297665016Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:51:56.297764 env[1445]: time="2024-07-02T07:51:56.297740427Z" level=info msg="Daemon has completed initialization" Jul 2 07:51:56.311852 systemd[1]: Started docker.service. Jul 2 07:51:56.317663 env[1445]: time="2024-07-02T07:51:56.317604630Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:51:56.918965 env[1298]: time="2024-07-02T07:51:56.918914921Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:51:58.227915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148166336.mount: Deactivated successfully. Jul 2 07:51:59.943955 env[1298]: time="2024-07-02T07:51:59.943896091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:59.946132 env[1298]: time="2024-07-02T07:51:59.946081199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:59.947607 env[1298]: time="2024-07-02T07:51:59.947583927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:59.949259 env[1298]: time="2024-07-02T07:51:59.949211419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:59.949829 env[1298]: time="2024-07-02T07:51:59.949795444Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 07:51:59.958688 env[1298]: time="2024-07-02T07:51:59.958661912Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 07:52:02.459683 env[1298]: time="2024-07-02T07:52:02.459611635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:02.517674 env[1298]: time="2024-07-02T07:52:02.517649627Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:02.554154 env[1298]: time="2024-07-02T07:52:02.554117417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:02.596909 env[1298]: time="2024-07-02T07:52:02.596868862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 07:52:02.597608 env[1298]: time="2024-07-02T07:52:02.597563424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 07:52:02.609134 env[1298]: time="2024-07-02T07:52:02.609083639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 07:52:04.351610 env[1298]: time="2024-07-02T07:52:04.351556808Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:04.353555 env[1298]: time="2024-07-02T07:52:04.353507977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:04.355086 env[1298]: time="2024-07-02T07:52:04.355064346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:04.356726 env[1298]: time="2024-07-02T07:52:04.356689083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:04.357357 env[1298]: time="2024-07-02T07:52:04.357323603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 07:52:04.365775 env[1298]: time="2024-07-02T07:52:04.365739806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:52:05.728618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383041912.mount: Deactivated successfully. Jul 2 07:52:05.956757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:52:05.956946 systemd[1]: Stopped kubelet.service. Jul 2 07:52:05.958243 systemd[1]: Starting kubelet.service... Jul 2 07:52:06.030766 systemd[1]: Started kubelet.service. Jul 2 07:52:06.072782 kubelet[1627]: E0702 07:52:06.072713 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:52:06.075122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:52:06.075283 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 07:52:07.760521 env[1298]: time="2024-07-02T07:52:07.760455650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:07.859524 env[1298]: time="2024-07-02T07:52:07.859481519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:07.901162 env[1298]: time="2024-07-02T07:52:07.901122070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:07.923792 env[1298]: time="2024-07-02T07:52:07.923753191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:07.924051 env[1298]: time="2024-07-02T07:52:07.924007648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:52:07.931700 env[1298]: time="2024-07-02T07:52:07.931664197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:52:08.821267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585257038.mount: Deactivated successfully. Jul 2 07:52:08.825937 env[1298]: time="2024-07-02T07:52:08.825902684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:08.827455 env[1298]: time="2024-07-02T07:52:08.827428936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:08.828739 env[1298]: time="2024-07-02T07:52:08.828687125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:08.829899 env[1298]: time="2024-07-02T07:52:08.829871246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:08.830364 env[1298]: time="2024-07-02T07:52:08.830333573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:52:08.838094 env[1298]: time="2024-07-02T07:52:08.838063479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:52:09.374132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377137826.mount: Deactivated successfully. 
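Each PullImage / ImageCreate pair above is containerd resolving a registry.k8s.io reference and returning the digest-pinned image. A rough equivalent using the containerd Go client, assuming the same socket and the "k8s.io" namespace (this is a sketch, not the exact code path the CRI plugin takes):

```go
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the socket containerd is serving on in the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images referenced above; on success the
	// client reports the resolved name and digest, as the log does.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```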
Jul 2 07:52:12.846420 env[1298]: time="2024-07-02T07:52:12.846355728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:12.964506 env[1298]: time="2024-07-02T07:52:12.964445859Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:13.020655 env[1298]: time="2024-07-02T07:52:13.020601351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:13.072593 env[1298]: time="2024-07-02T07:52:13.072546547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:13.073450 env[1298]: time="2024-07-02T07:52:13.073426337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:52:13.082295 env[1298]: time="2024-07-02T07:52:13.082252589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 07:52:14.600069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231714335.mount: Deactivated successfully. Jul 2 07:52:16.206727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 07:52:16.206903 systemd[1]: Stopped kubelet.service. Jul 2 07:52:16.208189 systemd[1]: Starting kubelet.service... Jul 2 07:52:16.275910 systemd[1]: Started kubelet.service. Jul 2 07:52:16.525569 kubelet[1665]: E0702 07:52:16.525443 1665 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:52:16.527569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:52:16.527703 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
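The kubelet keeps exiting with the same error because /var/lib/kubelet/config.yaml has not been written yet (kubeadm normally creates it during init/join), so systemd restarts the unit on a growing counter. A minimal sketch of the load step that is failing, assuming the published k8s.io/kubelet config types and sigs.k8s.io/yaml:

```go
package main

import (
	"fmt"
	"log"
	"os"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// This is the read that fails above with "no such file or directory";
	// once kubeadm writes the file, the kubelet can decode it and start.
	data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
	if err != nil {
		log.Fatalf("failed to read kubelet config file: %v", err)
	}

	var cfg kubeletconfig.KubeletConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatalf("failed to decode kubelet config: %v", err)
	}
	fmt.Println("staticPodPath:", cfg.StaticPodPath) // e.g. /etc/kubernetes/manifests
}
```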
Jul 2 07:52:16.781955 env[1298]: time="2024-07-02T07:52:16.781824886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:16.792264 env[1298]: time="2024-07-02T07:52:16.792219812Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:16.801508 env[1298]: time="2024-07-02T07:52:16.801454779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:16.803633 env[1298]: time="2024-07-02T07:52:16.803595512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:16.804112 env[1298]: time="2024-07-02T07:52:16.804077972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 07:52:19.193610 systemd[1]: Stopped kubelet.service. Jul 2 07:52:19.195586 systemd[1]: Starting kubelet.service... Jul 2 07:52:19.208511 systemd[1]: Reloading. Jul 2 07:52:19.271120 /usr/lib/systemd/system-generators/torcx-generator[1773]: time="2024-07-02T07:52:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:52:19.271143 /usr/lib/systemd/system-generators/torcx-generator[1773]: time="2024-07-02T07:52:19Z" level=info msg="torcx already run" Jul 2 07:52:19.522826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:52:19.522842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:52:19.539318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:52:19.608249 systemd[1]: Started kubelet.service. Jul 2 07:52:19.609540 systemd[1]: Stopping kubelet.service... Jul 2 07:52:19.609821 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:52:19.610042 systemd[1]: Stopped kubelet.service. Jul 2 07:52:19.611322 systemd[1]: Starting kubelet.service... Jul 2 07:52:19.682505 systemd[1]: Started kubelet.service. Jul 2 07:52:19.721423 kubelet[1836]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:52:19.721423 kubelet[1836]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:52:19.721423 kubelet[1836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:52:19.721800 kubelet[1836]: I0702 07:52:19.721470 1836 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:52:19.927242 kubelet[1836]: I0702 07:52:19.927136 1836 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:52:19.927242 kubelet[1836]: I0702 07:52:19.927168 1836 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:52:19.927697 kubelet[1836]: I0702 07:52:19.927658 1836 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:52:19.940827 kubelet[1836]: E0702 07:52:19.940800 1836 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.941294 kubelet[1836]: I0702 07:52:19.941254 1836 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:52:19.953382 kubelet[1836]: I0702 07:52:19.953358 1836 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:52:19.955012 kubelet[1836]: I0702 07:52:19.954991 1836 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:52:19.955221 kubelet[1836]: I0702 07:52:19.955200 1836 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:52:19.955221 kubelet[1836]: I0702 07:52:19.955219 1836 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:52:19.955326 kubelet[1836]: I0702 07:52:19.955226 1836 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:52:19.955725 
kubelet[1836]: I0702 07:52:19.955706 1836 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:52:19.956746 kubelet[1836]: I0702 07:52:19.956728 1836 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:52:19.956746 kubelet[1836]: I0702 07:52:19.956746 1836 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:52:19.956798 kubelet[1836]: I0702 07:52:19.956767 1836 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:52:19.956798 kubelet[1836]: I0702 07:52:19.956782 1836 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:52:19.960445 kubelet[1836]: I0702 07:52:19.960420 1836 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:52:19.963680 kubelet[1836]: W0702 07:52:19.963635 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.963792 kubelet[1836]: E0702 07:52:19.963693 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.970088 kubelet[1836]: W0702 07:52:19.970054 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.970139 kubelet[1836]: E0702 07:52:19.970090 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.971344 kubelet[1836]: W0702 07:52:19.971323 1836 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:52:19.972735 kubelet[1836]: I0702 07:52:19.971757 1836 server.go:1232] "Started kubelet" Jul 2 07:52:19.972735 kubelet[1836]: I0702 07:52:19.972117 1836 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:52:19.972735 kubelet[1836]: I0702 07:52:19.972401 1836 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:52:19.972735 kubelet[1836]: I0702 07:52:19.972437 1836 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:52:19.973289 kubelet[1836]: I0702 07:52:19.973266 1836 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:52:19.974422 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:52:19.982436 kubelet[1836]: E0702 07:52:19.982419 1836 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:52:19.982531 kubelet[1836]: E0702 07:52:19.982516 1836 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:52:19.982899 kubelet[1836]: I0702 07:52:19.982887 1836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:52:19.983521 kubelet[1836]: E0702 07:52:19.982715 1836 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de56114ad0adf1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 52, 19, 971730929, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 52, 19, 971730929, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.125:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.125:6443: connect: connection refused'(may retry after sleeping) Jul 2 07:52:19.984440 kubelet[1836]: I0702 07:52:19.984243 1836 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:52:19.984440 kubelet[1836]: I0702 07:52:19.984353 1836 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:52:19.984440 kubelet[1836]: I0702 07:52:19.984394 1836 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:52:19.984734 kubelet[1836]: W0702 07:52:19.984639 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.984734 kubelet[1836]: E0702 07:52:19.984683 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:19.986026 kubelet[1836]: E0702 07:52:19.985996 1836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms" Jul 2 07:52:19.997702 kubelet[1836]: I0702 07:52:19.997678 1836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:52:19.999776 kubelet[1836]: I0702 07:52:19.999757 1836 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:52:19.999855 kubelet[1836]: I0702 07:52:19.999840 1836 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:52:19.999959 kubelet[1836]: I0702 07:52:19.999945 1836 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:52:20.000198 kubelet[1836]: E0702 07:52:20.000159 1836 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:52:20.001243 kubelet[1836]: W0702 07:52:20.001209 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:20.001302 kubelet[1836]: E0702 07:52:20.001249 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:20.013398 kubelet[1836]: I0702 07:52:20.013380 1836 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:52:20.013398 kubelet[1836]: I0702 07:52:20.013395 1836 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:52:20.013472 kubelet[1836]: I0702 07:52:20.013410 1836 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:52:20.085630 kubelet[1836]: I0702 07:52:20.085603 1836 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:52:20.085917 kubelet[1836]: E0702 07:52:20.085890 1836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jul 2 07:52:20.100983 kubelet[1836]: E0702 07:52:20.100957 1836 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:52:20.186626 kubelet[1836]: E0702 07:52:20.186552 1836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms" Jul 2 07:52:20.287729 kubelet[1836]: I0702 07:52:20.287707 1836 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:52:20.287930 kubelet[1836]: E0702 07:52:20.287915 1836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jul 2 07:52:20.302091 kubelet[1836]: E0702 07:52:20.302056 1836 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:52:20.431336 kubelet[1836]: I0702 07:52:20.431303 1836 policy_none.go:49] "None policy: Start" Jul 2 07:52:20.431855 kubelet[1836]: I0702 07:52:20.431839 1836 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:52:20.431855 kubelet[1836]: I0702 07:52:20.431858 1836 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:52:20.438550 kubelet[1836]: I0702 07:52:20.438494 1836 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:52:20.438674 
kubelet[1836]: I0702 07:52:20.438654 1836 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:52:20.439135 kubelet[1836]: E0702 07:52:20.439101 1836 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 07:52:20.587997 kubelet[1836]: E0702 07:52:20.587963 1836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" Jul 2 07:52:20.689287 kubelet[1836]: I0702 07:52:20.689097 1836 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:52:20.689490 kubelet[1836]: E0702 07:52:20.689467 1836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jul 2 07:52:20.702720 kubelet[1836]: I0702 07:52:20.702691 1836 topology_manager.go:215] "Topology Admit Handler" podUID="b32486e2011d762e275cece35eceac56" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:52:20.703314 kubelet[1836]: I0702 07:52:20.703292 1836 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:52:20.703810 kubelet[1836]: I0702 07:52:20.703793 1836 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:52:20.788545 kubelet[1836]: I0702 07:52:20.788521 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:20.788545 kubelet[1836]: I0702 07:52:20.788553 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:52:20.788832 kubelet[1836]: I0702 07:52:20.788579 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b32486e2011d762e275cece35eceac56-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b32486e2011d762e275cece35eceac56\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:20.788832 kubelet[1836]: I0702 07:52:20.788597 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b32486e2011d762e275cece35eceac56-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b32486e2011d762e275cece35eceac56\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:20.788832 kubelet[1836]: I0702 07:52:20.788677 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:20.788832 kubelet[1836]: I0702 07:52:20.788704 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:20.788832 kubelet[1836]: I0702 07:52:20.788736 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:20.788950 kubelet[1836]: I0702 07:52:20.788762 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b32486e2011d762e275cece35eceac56-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b32486e2011d762e275cece35eceac56\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:20.788950 kubelet[1836]: I0702 07:52:20.788792 1836 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:20.909338 kubelet[1836]: W0702 07:52:20.909295 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:20.909338 kubelet[1836]: E0702 07:52:20.909338 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.009374 kubelet[1836]: E0702 07:52:21.009329 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:21.009374 kubelet[1836]: E0702 07:52:21.009328 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:21.009576 kubelet[1836]: E0702 07:52:21.009474 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:21.010060 env[1298]: time="2024-07-02T07:52:21.009816478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:21.010060 env[1298]: time="2024-07-02T07:52:21.009905488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b32486e2011d762e275cece35eceac56,Namespace:kube-system,Attempt:0,}" Jul 2 
07:52:21.010060 env[1298]: time="2024-07-02T07:52:21.009972607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:21.197141 kubelet[1836]: W0702 07:52:21.197085 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.197141 kubelet[1836]: E0702 07:52:21.197139 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.377942 kubelet[1836]: W0702 07:52:21.377816 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.377942 kubelet[1836]: E0702 07:52:21.377873 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.389275 kubelet[1836]: E0702 07:52:21.389244 1836 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="1.6s" Jul 2 07:52:21.399676 kubelet[1836]: W0702 07:52:21.399631 1836 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.399676 kubelet[1836]: E0702 07:52:21.399669 1836 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:21.490929 kubelet[1836]: I0702 07:52:21.490910 1836 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:52:21.491151 kubelet[1836]: E0702 07:52:21.491125 1836 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jul 2 07:52:22.027290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528814399.mount: Deactivated successfully. 
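The reflector, lease, and node-registration failures above all share one cause: nothing is listening on 10.0.0.125:6443 until the static kube-apiserver pod whose sandbox was just requested comes up. A small client-go sketch against that endpoint fails in exactly the same way until then (host and TLS settings are assumed for illustration only):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same endpoint the kubelet retries above; while the apiserver is down
	// every call returns "dial tcp 10.0.0.125:6443: connect: connection refused".
	cfg := &rest.Config{
		Host:            "https://10.0.0.125:6443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // skip verification in this sketch only
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list nodes: %v", err)
	}
	log.Printf("cluster reports %d node(s)", len(nodes.Items))
}
```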
Jul 2 07:52:22.030669 env[1298]: time="2024-07-02T07:52:22.030632319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.034204 env[1298]: time="2024-07-02T07:52:22.034182660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.035105 env[1298]: time="2024-07-02T07:52:22.035078152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.036844 env[1298]: time="2024-07-02T07:52:22.036800557Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.038487 env[1298]: time="2024-07-02T07:52:22.038437247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.039565 env[1298]: time="2024-07-02T07:52:22.039534957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.040816 env[1298]: time="2024-07-02T07:52:22.040788946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.041934 env[1298]: time="2024-07-02T07:52:22.041907195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.043272 env[1298]: time="2024-07-02T07:52:22.043246005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.044889 env[1298]: time="2024-07-02T07:52:22.044869451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.046141 env[1298]: time="2024-07-02T07:52:22.046114401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.046837 env[1298]: time="2024-07-02T07:52:22.046815904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:22.053666 kubelet[1836]: E0702 07:52:22.053637 1836 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.125:6443: connect: connection refused Jul 2 07:52:22.061716 env[1298]: 
time="2024-07-02T07:52:22.061649005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:22.061849 env[1298]: time="2024-07-02T07:52:22.061728147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:22.061849 env[1298]: time="2024-07-02T07:52:22.061753786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:22.062049 env[1298]: time="2024-07-02T07:52:22.061984176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f9ad5179ffd0adb77aac43511343a8d00079c8360dff7216be468ff24b59ef1 pid=1879 runtime=io.containerd.runc.v2 Jul 2 07:52:22.083945 env[1298]: time="2024-07-02T07:52:22.083880628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:22.083945 env[1298]: time="2024-07-02T07:52:22.083944189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:22.084162 env[1298]: time="2024-07-02T07:52:22.083964047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:22.084296 env[1298]: time="2024-07-02T07:52:22.084244444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d07536578a6c6fb0a88e64333a1a3610b736c4a037b327723835a8012c960e28 pid=1916 runtime=io.containerd.runc.v2 Jul 2 07:52:22.086210 env[1298]: time="2024-07-02T07:52:22.086159837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:22.086397 env[1298]: time="2024-07-02T07:52:22.086360652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:22.086504 env[1298]: time="2024-07-02T07:52:22.086384097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:22.086710 env[1298]: time="2024-07-02T07:52:22.086679762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d14f958f1ba6ff0b0e50db15b5dee951abc8325c1767b419581d4893a804991b pid=1915 runtime=io.containerd.runc.v2 Jul 2 07:52:22.115555 env[1298]: time="2024-07-02T07:52:22.115506299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f9ad5179ffd0adb77aac43511343a8d00079c8360dff7216be468ff24b59ef1\"" Jul 2 07:52:22.116389 kubelet[1836]: E0702 07:52:22.116368 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:22.118391 env[1298]: time="2024-07-02T07:52:22.118365518Z" level=info msg="CreateContainer within sandbox \"6f9ad5179ffd0adb77aac43511343a8d00079c8360dff7216be468ff24b59ef1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:52:22.132845 env[1298]: time="2024-07-02T07:52:22.132790799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07536578a6c6fb0a88e64333a1a3610b736c4a037b327723835a8012c960e28\"" Jul 2 07:52:22.133469 kubelet[1836]: E0702 07:52:22.133429 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:22.135370 env[1298]: time="2024-07-02T07:52:22.135337610Z" level=info msg="CreateContainer within sandbox \"d07536578a6c6fb0a88e64333a1a3610b736c4a037b327723835a8012c960e28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:52:22.137725 env[1298]: time="2024-07-02T07:52:22.137685261Z" level=info msg="CreateContainer within sandbox \"6f9ad5179ffd0adb77aac43511343a8d00079c8360dff7216be468ff24b59ef1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09da326cf0eb10f3bff1c1b0af007febc1a59c7c8ce973d1ff7918e4a7c28495\"" Jul 2 07:52:22.138065 env[1298]: time="2024-07-02T07:52:22.138045510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b32486e2011d762e275cece35eceac56,Namespace:kube-system,Attempt:0,} returns sandbox id \"d14f958f1ba6ff0b0e50db15b5dee951abc8325c1767b419581d4893a804991b\"" Jul 2 07:52:22.138511 env[1298]: time="2024-07-02T07:52:22.138461486Z" level=info msg="StartContainer for \"09da326cf0eb10f3bff1c1b0af007febc1a59c7c8ce973d1ff7918e4a7c28495\"" Jul 2 07:52:22.138588 kubelet[1836]: E0702 07:52:22.138562 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:22.140360 env[1298]: time="2024-07-02T07:52:22.140333326Z" level=info msg="CreateContainer within sandbox \"d14f958f1ba6ff0b0e50db15b5dee951abc8325c1767b419581d4893a804991b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:52:22.149980 env[1298]: time="2024-07-02T07:52:22.149943979Z" level=info msg="CreateContainer within sandbox \"d07536578a6c6fb0a88e64333a1a3610b736c4a037b327723835a8012c960e28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container 
id \"23afa9ad0092e056623ed9736f5791f88e6c400e5a94e6c1f698912e19761122\"" Jul 2 07:52:22.150457 env[1298]: time="2024-07-02T07:52:22.150419929Z" level=info msg="StartContainer for \"23afa9ad0092e056623ed9736f5791f88e6c400e5a94e6c1f698912e19761122\"" Jul 2 07:52:22.160095 env[1298]: time="2024-07-02T07:52:22.160014070Z" level=info msg="CreateContainer within sandbox \"d14f958f1ba6ff0b0e50db15b5dee951abc8325c1767b419581d4893a804991b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb22ea7c410d21a8283e83bdcf8653ea2abe3808b91811617d0a7320b5c3413e\"" Jul 2 07:52:22.162258 env[1298]: time="2024-07-02T07:52:22.162227634Z" level=info msg="StartContainer for \"bb22ea7c410d21a8283e83bdcf8653ea2abe3808b91811617d0a7320b5c3413e\"" Jul 2 07:52:22.198992 env[1298]: time="2024-07-02T07:52:22.198950534Z" level=info msg="StartContainer for \"09da326cf0eb10f3bff1c1b0af007febc1a59c7c8ce973d1ff7918e4a7c28495\" returns successfully" Jul 2 07:52:22.219264 env[1298]: time="2024-07-02T07:52:22.217459156Z" level=info msg="StartContainer for \"23afa9ad0092e056623ed9736f5791f88e6c400e5a94e6c1f698912e19761122\" returns successfully" Jul 2 07:52:22.242021 env[1298]: time="2024-07-02T07:52:22.241961642Z" level=info msg="StartContainer for \"bb22ea7c410d21a8283e83bdcf8653ea2abe3808b91811617d0a7320b5c3413e\" returns successfully" Jul 2 07:52:23.008806 kubelet[1836]: E0702 07:52:23.008773 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:23.010790 kubelet[1836]: E0702 07:52:23.010771 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:23.012625 kubelet[1836]: E0702 07:52:23.012606 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:23.092426 kubelet[1836]: I0702 07:52:23.092396 1836 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:52:23.604574 kubelet[1836]: E0702 07:52:23.604523 1836 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 07:52:23.711668 kubelet[1836]: I0702 07:52:23.711632 1836 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 07:52:23.961807 kubelet[1836]: I0702 07:52:23.961785 1836 apiserver.go:52] "Watching apiserver" Jul 2 07:52:23.984552 kubelet[1836]: I0702 07:52:23.984520 1836 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:52:24.115186 kubelet[1836]: E0702 07:52:24.115159 1836 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:24.115579 kubelet[1836]: E0702 07:52:24.115533 1836 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:24.115579 kubelet[1836]: E0702 07:52:24.115565 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:24.116061 kubelet[1836]: E0702 07:52:24.115939 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:25.017286 kubelet[1836]: E0702 07:52:25.017224 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:26.015440 kubelet[1836]: E0702 07:52:26.015408 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:26.257319 systemd[1]: Reloading. Jul 2 07:52:26.321940 /usr/lib/systemd/system-generators/torcx-generator[2141]: time="2024-07-02T07:52:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:52:26.321964 /usr/lib/systemd/system-generators/torcx-generator[2141]: time="2024-07-02T07:52:26Z" level=info msg="torcx already run" Jul 2 07:52:26.380200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:52:26.380214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:52:26.397202 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:52:26.472623 kubelet[1836]: I0702 07:52:26.472591 1836 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:52:26.472667 systemd[1]: Stopping kubelet.service... Jul 2 07:52:26.491403 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:52:26.491677 systemd[1]: Stopped kubelet.service. Jul 2 07:52:26.493145 systemd[1]: Starting kubelet.service... Jul 2 07:52:26.567202 systemd[1]: Started kubelet.service. Jul 2 07:52:26.612896 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:52:26.612896 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:52:26.612896 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 07:52:26.612896 kubelet[2196]: I0702 07:52:26.612860 2196 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:52:26.616527 kubelet[2196]: I0702 07:52:26.616491 2196 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:52:26.616527 kubelet[2196]: I0702 07:52:26.616521 2196 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:52:26.616749 kubelet[2196]: I0702 07:52:26.616729 2196 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:52:26.618039 kubelet[2196]: I0702 07:52:26.618010 2196 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:52:26.618907 kubelet[2196]: I0702 07:52:26.618869 2196 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:52:26.625442 kubelet[2196]: I0702 07:52:26.625419 2196 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:52:26.625763 kubelet[2196]: I0702 07:52:26.625738 2196 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:52:26.625903 kubelet[2196]: I0702 07:52:26.625880 2196 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:52:26.625903 kubelet[2196]: I0702 07:52:26.625897 2196 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:52:26.625903 kubelet[2196]: I0702 07:52:26.625905 2196 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:52:26.626093 kubelet[2196]: I0702 07:52:26.625943 2196 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:52:26.626093 kubelet[2196]: I0702 07:52:26.626015 2196 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:52:26.626093 kubelet[2196]: I0702 07:52:26.626025 2196 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:52:26.626093 kubelet[2196]: I0702 07:52:26.626061 2196 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:52:26.626093 
kubelet[2196]: I0702 07:52:26.626074 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:52:26.629010 kubelet[2196]: I0702 07:52:26.628977 2196 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:52:26.629721 kubelet[2196]: I0702 07:52:26.629699 2196 server.go:1232] "Started kubelet" Jul 2 07:52:26.632518 kubelet[2196]: I0702 07:52:26.632502 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:52:26.636811 kubelet[2196]: E0702 07:52:26.636160 2196 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:52:26.636811 kubelet[2196]: E0702 07:52:26.636189 2196 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:52:26.642199 kubelet[2196]: I0702 07:52:26.640451 2196 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:52:26.642199 kubelet[2196]: I0702 07:52:26.640534 2196 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:52:26.642199 kubelet[2196]: I0702 07:52:26.640551 2196 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:52:26.642199 kubelet[2196]: I0702 07:52:26.640629 2196 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:52:26.642199 kubelet[2196]: I0702 07:52:26.641200 2196 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:52:26.641884 sudo[2215]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:52:26.642080 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:52:26.644666 kubelet[2196]: I0702 07:52:26.644646 2196 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:52:26.646824 kubelet[2196]: I0702 07:52:26.646796 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:52:26.647559 kubelet[2196]: I0702 07:52:26.647545 2196 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:52:26.647643 kubelet[2196]: I0702 07:52:26.647629 2196 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:52:26.647731 kubelet[2196]: I0702 07:52:26.647717 2196 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:52:26.647860 kubelet[2196]: E0702 07:52:26.647848 2196 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:52:26.655142 kubelet[2196]: I0702 07:52:26.654937 2196 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:52:26.708151 kubelet[2196]: I0702 07:52:26.708127 2196 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:52:26.708338 kubelet[2196]: I0702 07:52:26.708325 2196 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:52:26.708435 kubelet[2196]: I0702 07:52:26.708422 2196 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:52:26.708662 kubelet[2196]: I0702 07:52:26.708652 2196 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:52:26.708765 kubelet[2196]: I0702 07:52:26.708753 2196 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:52:26.708853 kubelet[2196]: I0702 07:52:26.708840 2196 policy_none.go:49] "None policy: Start" Jul 2 07:52:26.709523 kubelet[2196]: I0702 07:52:26.709511 2196 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:52:26.709620 kubelet[2196]: I0702 07:52:26.709608 2196 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:52:26.709860 kubelet[2196]: I0702 07:52:26.709850 2196 state_mem.go:75] "Updated machine memory state" Jul 2 07:52:26.710891 kubelet[2196]: I0702 07:52:26.710879 2196 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:52:26.711161 kubelet[2196]: I0702 07:52:26.711151 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:52:26.743928 kubelet[2196]: I0702 07:52:26.743911 2196 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:52:26.748131 kubelet[2196]: I0702 07:52:26.748118 2196 topology_manager.go:215] "Topology Admit Handler" podUID="b32486e2011d762e275cece35eceac56" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:52:26.750704 kubelet[2196]: I0702 07:52:26.749170 2196 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:52:26.750704 kubelet[2196]: I0702 07:52:26.749296 2196 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:52:26.754377 kubelet[2196]: E0702 07:52:26.754333 2196 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:26.754694 kubelet[2196]: I0702 07:52:26.754529 2196 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 07:52:26.754694 kubelet[2196]: I0702 07:52:26.754579 2196 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 07:52:26.942411 kubelet[2196]: I0702 07:52:26.942309 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b32486e2011d762e275cece35eceac56-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b32486e2011d762e275cece35eceac56\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:26.942411 kubelet[2196]: I0702 07:52:26.942345 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b32486e2011d762e275cece35eceac56-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b32486e2011d762e275cece35eceac56\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:26.942411 kubelet[2196]: I0702 07:52:26.942365 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b32486e2011d762e275cece35eceac56-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b32486e2011d762e275cece35eceac56\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:26.942411 kubelet[2196]: I0702 07:52:26.942381 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:26.942411 kubelet[2196]: I0702 07:52:26.942397 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:26.942663 kubelet[2196]: I0702 07:52:26.942414 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:26.942663 kubelet[2196]: I0702 07:52:26.942430 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:26.942663 kubelet[2196]: I0702 07:52:26.942449 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:52:26.942663 kubelet[2196]: I0702 07:52:26.942466 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:52:27.055890 kubelet[2196]: E0702 07:52:27.055847 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:27.056084 kubelet[2196]: E0702 07:52:27.056063 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:27.056425 kubelet[2196]: E0702 07:52:27.056404 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:27.100793 sudo[2215]: pam_unix(sudo:session): session closed for user root Jul 2 07:52:27.626973 kubelet[2196]: I0702 07:52:27.626928 2196 apiserver.go:52] "Watching apiserver" Jul 2 07:52:27.641561 kubelet[2196]: I0702 07:52:27.641530 2196 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:52:27.661361 kubelet[2196]: E0702 07:52:27.661347 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:27.787399 kubelet[2196]: E0702 07:52:27.787359 2196 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 07:52:27.787687 kubelet[2196]: E0702 07:52:27.787662 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:27.788049 kubelet[2196]: E0702 07:52:27.788011 2196 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:52:27.788859 kubelet[2196]: E0702 07:52:27.788835 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:27.805318 kubelet[2196]: I0702 07:52:27.805292 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.805242668 podCreationTimestamp="2024-07-02 07:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:52:27.78688298 +0000 UTC m=+1.213682981" watchObservedRunningTime="2024-07-02 07:52:27.805242668 +0000 UTC m=+1.232042669" Jul 2 07:52:27.872654 kubelet[2196]: I0702 07:52:27.872623 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.872565394 podCreationTimestamp="2024-07-02 07:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:52:27.872134153 +0000 UTC m=+1.298934154" watchObservedRunningTime="2024-07-02 07:52:27.872565394 +0000 UTC m=+1.299365385" Jul 2 07:52:27.872788 kubelet[2196]: I0702 07:52:27.872768 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.872753722 podCreationTimestamp="2024-07-02 07:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:52:27.805486421 +0000 UTC m=+1.232286412" 
watchObservedRunningTime="2024-07-02 07:52:27.872753722 +0000 UTC m=+1.299553723" Jul 2 07:52:28.333931 sudo[1434]: pam_unix(sudo:session): session closed for user root Jul 2 07:52:28.335087 sshd[1428]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:28.337405 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:38614.service: Deactivated successfully. Jul 2 07:52:28.338442 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:52:28.338919 systemd-logind[1283]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:52:28.339643 systemd-logind[1283]: Removed session 7. Jul 2 07:52:28.662195 kubelet[2196]: E0702 07:52:28.662095 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:28.662195 kubelet[2196]: E0702 07:52:28.662159 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:28.745106 kubelet[2196]: E0702 07:52:28.745081 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:29.366198 update_engine[1286]: I0702 07:52:29.366156 1286 update_attempter.cc:509] Updating boot flags... Jul 2 07:52:29.663354 kubelet[2196]: E0702 07:52:29.663161 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:35.010379 kubelet[2196]: E0702 07:52:35.010338 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:35.671283 kubelet[2196]: E0702 07:52:35.671255 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:38.750065 kubelet[2196]: E0702 07:52:38.750009 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:39.218335 kubelet[2196]: I0702 07:52:39.218293 2196 topology_manager.go:215] "Topology Admit Handler" podUID="adcf93c0-850a-4d3d-a10a-dfe587e5814e" podNamespace="kube-system" podName="kube-proxy-cb9mh" Jul 2 07:52:39.222289 kubelet[2196]: I0702 07:52:39.222243 2196 topology_manager.go:215] "Topology Admit Handler" podUID="f4168adb-880d-4d4c-95e3-86d844793795" podNamespace="kube-system" podName="cilium-dzsdw" Jul 2 07:52:39.263679 kubelet[2196]: E0702 07:52:39.263634 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:39.289000 kubelet[2196]: I0702 07:52:39.288973 2196 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:52:39.289370 env[1298]: time="2024-07-02T07:52:39.289324440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 07:52:39.289618 kubelet[2196]: I0702 07:52:39.289473 2196 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:52:39.327677 kubelet[2196]: I0702 07:52:39.327650 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-hubble-tls\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327790 kubelet[2196]: I0702 07:52:39.327687 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-lib-modules\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327790 kubelet[2196]: I0702 07:52:39.327705 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-net\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327790 kubelet[2196]: I0702 07:52:39.327725 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflwt\" (UniqueName: \"kubernetes.io/projected/adcf93c0-850a-4d3d-a10a-dfe587e5814e-kube-api-access-bflwt\") pod \"kube-proxy-cb9mh\" (UID: \"adcf93c0-850a-4d3d-a10a-dfe587e5814e\") " pod="kube-system/kube-proxy-cb9mh" Jul 2 07:52:39.327790 kubelet[2196]: I0702 07:52:39.327742 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adcf93c0-850a-4d3d-a10a-dfe587e5814e-kube-proxy\") pod \"kube-proxy-cb9mh\" (UID: \"adcf93c0-850a-4d3d-a10a-dfe587e5814e\") " pod="kube-system/kube-proxy-cb9mh" Jul 2 07:52:39.327790 kubelet[2196]: I0702 07:52:39.327763 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-etc-cni-netd\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327902 kubelet[2196]: I0702 07:52:39.327782 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4168adb-880d-4d4c-95e3-86d844793795-clustermesh-secrets\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327902 kubelet[2196]: I0702 07:52:39.327823 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-kernel\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327902 kubelet[2196]: I0702 07:52:39.327879 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-hostproc\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 
07:52:39.327902 kubelet[2196]: I0702 07:52:39.327903 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-bpf-maps\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327990 kubelet[2196]: I0702 07:52:39.327923 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-cgroup\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327990 kubelet[2196]: I0702 07:52:39.327949 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cni-path\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327990 kubelet[2196]: I0702 07:52:39.327974 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4168adb-880d-4d4c-95e3-86d844793795-cilium-config-path\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.327990 kubelet[2196]: I0702 07:52:39.327991 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmz4d\" (UniqueName: \"kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-kube-api-access-lmz4d\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.328099 kubelet[2196]: I0702 07:52:39.328015 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adcf93c0-850a-4d3d-a10a-dfe587e5814e-lib-modules\") pod \"kube-proxy-cb9mh\" (UID: \"adcf93c0-850a-4d3d-a10a-dfe587e5814e\") " pod="kube-system/kube-proxy-cb9mh" Jul 2 07:52:39.328099 kubelet[2196]: I0702 07:52:39.328059 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-xtables-lock\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.328099 kubelet[2196]: I0702 07:52:39.328097 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adcf93c0-850a-4d3d-a10a-dfe587e5814e-xtables-lock\") pod \"kube-proxy-cb9mh\" (UID: \"adcf93c0-850a-4d3d-a10a-dfe587e5814e\") " pod="kube-system/kube-proxy-cb9mh" Jul 2 07:52:39.328178 kubelet[2196]: I0702 07:52:39.328118 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-run\") pod \"cilium-dzsdw\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " pod="kube-system/cilium-dzsdw" Jul 2 07:52:39.439956 kubelet[2196]: E0702 07:52:39.439106 2196 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 07:52:39.439956 
kubelet[2196]: E0702 07:52:39.439158 2196 projected.go:198] Error preparing data for projected volume kube-api-access-bflwt for pod kube-system/kube-proxy-cb9mh: configmap "kube-root-ca.crt" not found Jul 2 07:52:39.439956 kubelet[2196]: E0702 07:52:39.439240 2196 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adcf93c0-850a-4d3d-a10a-dfe587e5814e-kube-api-access-bflwt podName:adcf93c0-850a-4d3d-a10a-dfe587e5814e nodeName:}" failed. No retries permitted until 2024-07-02 07:52:39.939209257 +0000 UTC m=+13.366009258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bflwt" (UniqueName: "kubernetes.io/projected/adcf93c0-850a-4d3d-a10a-dfe587e5814e-kube-api-access-bflwt") pod "kube-proxy-cb9mh" (UID: "adcf93c0-850a-4d3d-a10a-dfe587e5814e") : configmap "kube-root-ca.crt" not found Jul 2 07:52:39.439956 kubelet[2196]: E0702 07:52:39.439578 2196 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 07:52:39.439956 kubelet[2196]: E0702 07:52:39.439594 2196 projected.go:198] Error preparing data for projected volume kube-api-access-lmz4d for pod kube-system/cilium-dzsdw: configmap "kube-root-ca.crt" not found Jul 2 07:52:39.439956 kubelet[2196]: E0702 07:52:39.439624 2196 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-kube-api-access-lmz4d podName:f4168adb-880d-4d4c-95e3-86d844793795 nodeName:}" failed. No retries permitted until 2024-07-02 07:52:39.939612829 +0000 UTC m=+13.366412830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lmz4d" (UniqueName: "kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-kube-api-access-lmz4d") pod "cilium-dzsdw" (UID: "f4168adb-880d-4d4c-95e3-86d844793795") : configmap "kube-root-ca.crt" not found Jul 2 07:52:39.802790 kubelet[2196]: I0702 07:52:39.802748 2196 topology_manager.go:215] "Topology Admit Handler" podUID="67a54086-fe4e-49ab-b58d-1d6147f5c40e" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-s57kh" Jul 2 07:52:39.832053 kubelet[2196]: I0702 07:52:39.831982 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vkk\" (UniqueName: \"kubernetes.io/projected/67a54086-fe4e-49ab-b58d-1d6147f5c40e-kube-api-access-j9vkk\") pod \"cilium-operator-6bc8ccdb58-s57kh\" (UID: \"67a54086-fe4e-49ab-b58d-1d6147f5c40e\") " pod="kube-system/cilium-operator-6bc8ccdb58-s57kh" Jul 2 07:52:39.832053 kubelet[2196]: I0702 07:52:39.832056 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67a54086-fe4e-49ab-b58d-1d6147f5c40e-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-s57kh\" (UID: \"67a54086-fe4e-49ab-b58d-1d6147f5c40e\") " pod="kube-system/cilium-operator-6bc8ccdb58-s57kh" Jul 2 07:52:40.108841 kubelet[2196]: E0702 07:52:40.108740 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:40.109599 env[1298]: time="2024-07-02T07:52:40.109559119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-s57kh,Uid:67a54086-fe4e-49ab-b58d-1d6147f5c40e,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:40.123391 kubelet[2196]: E0702 07:52:40.121979 2196 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:40.123497 env[1298]: time="2024-07-02T07:52:40.122488407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cb9mh,Uid:adcf93c0-850a-4d3d-a10a-dfe587e5814e,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:40.126484 kubelet[2196]: E0702 07:52:40.125246 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:40.126541 env[1298]: time="2024-07-02T07:52:40.125908761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzsdw,Uid:f4168adb-880d-4d4c-95e3-86d844793795,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:40.126804 env[1298]: time="2024-07-02T07:52:40.126739679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:40.126804 env[1298]: time="2024-07-02T07:52:40.126775046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:40.126989 env[1298]: time="2024-07-02T07:52:40.126920511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:40.127294 env[1298]: time="2024-07-02T07:52:40.127236217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c pid=2305 runtime=io.containerd.runc.v2 Jul 2 07:52:40.149147 env[1298]: time="2024-07-02T07:52:40.148952696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:40.149147 env[1298]: time="2024-07-02T07:52:40.148988815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:40.149147 env[1298]: time="2024-07-02T07:52:40.148998613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:40.149319 env[1298]: time="2024-07-02T07:52:40.149175697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9e7549f546cf2a62619878eb2ad61cd6e4e77195087c613af9cae87e8464cc5 pid=2340 runtime=io.containerd.runc.v2 Jul 2 07:52:40.161095 env[1298]: time="2024-07-02T07:52:40.160246629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:40.161095 env[1298]: time="2024-07-02T07:52:40.160298336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:40.161095 env[1298]: time="2024-07-02T07:52:40.160311851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:40.161095 env[1298]: time="2024-07-02T07:52:40.160465472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978 pid=2355 runtime=io.containerd.runc.v2 Jul 2 07:52:40.181584 env[1298]: time="2024-07-02T07:52:40.181534400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-s57kh,Uid:67a54086-fe4e-49ab-b58d-1d6147f5c40e,Namespace:kube-system,Attempt:0,} returns sandbox id \"271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c\"" Jul 2 07:52:40.183880 kubelet[2196]: E0702 07:52:40.182558 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:40.189375 env[1298]: time="2024-07-02T07:52:40.189348204Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:52:40.190238 env[1298]: time="2024-07-02T07:52:40.190218307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cb9mh,Uid:adcf93c0-850a-4d3d-a10a-dfe587e5814e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9e7549f546cf2a62619878eb2ad61cd6e4e77195087c613af9cae87e8464cc5\"" Jul 2 07:52:40.195336 kubelet[2196]: E0702 07:52:40.194304 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:40.198928 env[1298]: time="2024-07-02T07:52:40.198900510Z" level=info msg="CreateContainer within sandbox \"c9e7549f546cf2a62619878eb2ad61cd6e4e77195087c613af9cae87e8464cc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:52:40.212329 env[1298]: time="2024-07-02T07:52:40.212301508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dzsdw,Uid:f4168adb-880d-4d4c-95e3-86d844793795,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\"" Jul 2 07:52:40.212913 kubelet[2196]: E0702 07:52:40.212891 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:40.525557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771618803.mount: Deactivated successfully. 
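
A few entries above, the kube-api-access-* projected volume mounts for kube-proxy-cb9mh and cilium-dzsdw failed because the kube-root-ca.crt ConfigMap had not been published yet, and the kubelet scheduled a retry 500ms later ("durationBeforeRetry 500ms"). The Go sketch below shows the general shape of such an exponential retry backoff; only the initial 500ms delay appears in the log, so the doubling factor and the cap are assumptions for illustration rather than values confirmed by these entries.

package main

import (
	"fmt"
	"time"
)

// Assumed illustrative constants: the log only shows the initial 500ms delay;
// the factor and cap below are not taken from the log.
const (
	initialBackoff = 500 * time.Millisecond
	backoffFactor  = 2
	maxBackoff     = 2*time.Minute + 2*time.Second
)

// nextBackoff returns the delay before the next retry of a failed operation,
// doubling the previous delay until it reaches maxBackoff.
func nextBackoff(previous time.Duration) time.Duration {
	if previous == 0 {
		return initialBackoff
	}
	next := previous * backoffFactor
	if next > maxBackoff {
		next = maxBackoff
	}
	return next
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextBackoff(d)
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
}
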
Jul 2 07:52:40.528470 env[1298]: time="2024-07-02T07:52:40.528435339Z" level=info msg="CreateContainer within sandbox \"c9e7549f546cf2a62619878eb2ad61cd6e4e77195087c613af9cae87e8464cc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed9983d1c95a1242fb5c19dec0174275e7914abce6a9514ccb1b202e510f9603\"" Jul 2 07:52:40.529088 env[1298]: time="2024-07-02T07:52:40.529026224Z" level=info msg="StartContainer for \"ed9983d1c95a1242fb5c19dec0174275e7914abce6a9514ccb1b202e510f9603\"" Jul 2 07:52:40.564970 env[1298]: time="2024-07-02T07:52:40.564913575Z" level=info msg="StartContainer for \"ed9983d1c95a1242fb5c19dec0174275e7914abce6a9514ccb1b202e510f9603\" returns successfully" Jul 2 07:52:40.681413 kubelet[2196]: E0702 07:52:40.681385 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:41.835090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790617273.mount: Deactivated successfully. Jul 2 07:52:42.478130 env[1298]: time="2024-07-02T07:52:42.478073918Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:42.479869 env[1298]: time="2024-07-02T07:52:42.479813859Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:42.481320 env[1298]: time="2024-07-02T07:52:42.481286496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:42.481743 env[1298]: time="2024-07-02T07:52:42.481713210Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:52:42.482423 env[1298]: time="2024-07-02T07:52:42.482396479Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:52:42.483754 env[1298]: time="2024-07-02T07:52:42.483711739Z" level=info msg="CreateContainer within sandbox \"271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:52:42.493376 env[1298]: time="2024-07-02T07:52:42.493338181Z" level=info msg="CreateContainer within sandbox \"271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\"" Jul 2 07:52:42.493780 env[1298]: time="2024-07-02T07:52:42.493675316Z" level=info msg="StartContainer for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\"" Jul 2 07:52:42.529913 env[1298]: time="2024-07-02T07:52:42.529861461Z" level=info msg="StartContainer for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" returns successfully" Jul 2 07:52:42.704874 kubelet[2196]: E0702 07:52:42.704841 2196 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:42.712116 kubelet[2196]: I0702 07:52:42.712086 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cb9mh" podStartSLOduration=3.711418651 podCreationTimestamp="2024-07-02 07:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:52:40.688849058 +0000 UTC m=+14.115649059" watchObservedRunningTime="2024-07-02 07:52:42.711418651 +0000 UTC m=+16.138218652" Jul 2 07:52:42.712399 kubelet[2196]: I0702 07:52:42.712383 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-s57kh" podStartSLOduration=1.418064137 podCreationTimestamp="2024-07-02 07:52:39 +0000 UTC" firstStartedPulling="2024-07-02 07:52:40.187832805 +0000 UTC m=+13.614632807" lastFinishedPulling="2024-07-02 07:52:42.482132522 +0000 UTC m=+15.908932523" observedRunningTime="2024-07-02 07:52:42.711313943 +0000 UTC m=+16.138113934" watchObservedRunningTime="2024-07-02 07:52:42.712363853 +0000 UTC m=+16.139163854" Jul 2 07:52:43.710616 kubelet[2196]: E0702 07:52:43.710519 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:47.542732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442711975.mount: Deactivated successfully. Jul 2 07:52:50.957462 env[1298]: time="2024-07-02T07:52:50.957422406Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:50.959256 env[1298]: time="2024-07-02T07:52:50.959229246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:50.960626 env[1298]: time="2024-07-02T07:52:50.960597210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:52:50.961071 env[1298]: time="2024-07-02T07:52:50.961045933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:52:50.962359 env[1298]: time="2024-07-02T07:52:50.962325040Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:52:50.972188 env[1298]: time="2024-07-02T07:52:50.972152681Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\"" Jul 2 07:52:50.972625 env[1298]: time="2024-07-02T07:52:50.972589052Z" level=info msg="StartContainer for \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\"" Jul 2 
07:52:51.005635 env[1298]: time="2024-07-02T07:52:51.005563341Z" level=info msg="StartContainer for \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\" returns successfully" Jul 2 07:52:51.899640 env[1298]: time="2024-07-02T07:52:51.899595050Z" level=info msg="shim disconnected" id=db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846 Jul 2 07:52:51.899640 env[1298]: time="2024-07-02T07:52:51.899637520Z" level=warning msg="cleaning up after shim disconnected" id=db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846 namespace=k8s.io Jul 2 07:52:51.899640 env[1298]: time="2024-07-02T07:52:51.899647438Z" level=info msg="cleaning up dead shim" Jul 2 07:52:51.906556 env[1298]: time="2024-07-02T07:52:51.906492043Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:52:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2665 runtime=io.containerd.runc.v2\n" Jul 2 07:52:51.912659 kubelet[2196]: E0702 07:52:51.912637 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:51.914289 env[1298]: time="2024-07-02T07:52:51.914234608Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:52:51.929114 env[1298]: time="2024-07-02T07:52:51.928734313Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\"" Jul 2 07:52:51.929597 env[1298]: time="2024-07-02T07:52:51.929547393Z" level=info msg="StartContainer for \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\"" Jul 2 07:52:51.962413 env[1298]: time="2024-07-02T07:52:51.962361099Z" level=info msg="StartContainer for \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\" returns successfully" Jul 2 07:52:51.970838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846-rootfs.mount: Deactivated successfully. Jul 2 07:52:51.974911 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:52:51.975135 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:52:51.976651 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:52:51.977862 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:52:51.979997 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:52:51.991436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449-rootfs.mount: Deactivated successfully. Jul 2 07:52:51.992389 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:52:51.997588 env[1298]: time="2024-07-02T07:52:51.997531509Z" level=info msg="shim disconnected" id=bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449 Jul 2 07:52:51.997687 env[1298]: time="2024-07-02T07:52:51.997592433Z" level=warning msg="cleaning up after shim disconnected" id=bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449 namespace=k8s.io Jul 2 07:52:51.997687 env[1298]: time="2024-07-02T07:52:51.997601540Z" level=info msg="cleaning up dead shim" Jul 2 07:52:52.003055 env[1298]: time="2024-07-02T07:52:52.003007730Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:52:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2730 runtime=io.containerd.runc.v2\n" Jul 2 07:52:52.902053 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:54066.service. Jul 2 07:52:52.915665 kubelet[2196]: E0702 07:52:52.915633 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:52.917230 env[1298]: time="2024-07-02T07:52:52.917196076Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:52:52.938481 env[1298]: time="2024-07-02T07:52:52.938435602Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\"" Jul 2 07:52:52.938860 env[1298]: time="2024-07-02T07:52:52.938811219Z" level=info msg="StartContainer for \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\"" Jul 2 07:52:52.942011 sshd[2743]: Accepted publickey for core from 10.0.0.1 port 54066 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:52:52.943258 sshd[2743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:52.946817 systemd-logind[1283]: New session 8 of user core. Jul 2 07:52:52.947677 systemd[1]: Started session-8.scope. Jul 2 07:52:52.970770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143907882.mount: Deactivated successfully. Jul 2 07:52:52.987422 env[1298]: time="2024-07-02T07:52:52.987382839Z" level=info msg="StartContainer for \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\" returns successfully" Jul 2 07:52:53.001637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73-rootfs.mount: Deactivated successfully. 
Jul 2 07:52:53.009322 env[1298]: time="2024-07-02T07:52:53.009270252Z" level=info msg="shim disconnected" id=c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73 Jul 2 07:52:53.009322 env[1298]: time="2024-07-02T07:52:53.009319494Z" level=warning msg="cleaning up after shim disconnected" id=c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73 namespace=k8s.io Jul 2 07:52:53.009322 env[1298]: time="2024-07-02T07:52:53.009327509Z" level=info msg="cleaning up dead shim" Jul 2 07:52:53.015706 env[1298]: time="2024-07-02T07:52:53.015663153Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:52:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2799 runtime=io.containerd.runc.v2\n" Jul 2 07:52:53.072724 sshd[2743]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:53.075294 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:54066.service: Deactivated successfully. Jul 2 07:52:53.076550 systemd-logind[1283]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:52:53.076554 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:52:53.077258 systemd-logind[1283]: Removed session 8. Jul 2 07:52:53.919269 kubelet[2196]: E0702 07:52:53.919236 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:53.921322 env[1298]: time="2024-07-02T07:52:53.921261538Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:52:53.940249 env[1298]: time="2024-07-02T07:52:53.940190593Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\"" Jul 2 07:52:53.940847 env[1298]: time="2024-07-02T07:52:53.940806220Z" level=info msg="StartContainer for \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\"" Jul 2 07:52:53.993633 env[1298]: time="2024-07-02T07:52:53.993569728Z" level=info msg="StartContainer for \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\" returns successfully" Jul 2 07:52:54.009373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980-rootfs.mount: Deactivated successfully. 
Jul 2 07:52:54.014257 env[1298]: time="2024-07-02T07:52:54.014208764Z" level=info msg="shim disconnected" id=e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980 Jul 2 07:52:54.014362 env[1298]: time="2024-07-02T07:52:54.014263456Z" level=warning msg="cleaning up after shim disconnected" id=e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980 namespace=k8s.io Jul 2 07:52:54.014362 env[1298]: time="2024-07-02T07:52:54.014273305Z" level=info msg="cleaning up dead shim" Jul 2 07:52:54.020277 env[1298]: time="2024-07-02T07:52:54.020240313Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:52:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2856 runtime=io.containerd.runc.v2\n" Jul 2 07:52:54.923262 kubelet[2196]: E0702 07:52:54.923206 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:54.925745 env[1298]: time="2024-07-02T07:52:54.925701846Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:52:54.943903 env[1298]: time="2024-07-02T07:52:54.943761911Z" level=info msg="CreateContainer within sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\"" Jul 2 07:52:54.945060 env[1298]: time="2024-07-02T07:52:54.944391294Z" level=info msg="StartContainer for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\"" Jul 2 07:52:54.988271 env[1298]: time="2024-07-02T07:52:54.988205549Z" level=info msg="StartContainer for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" returns successfully" Jul 2 07:52:55.076795 kubelet[2196]: I0702 07:52:55.076754 2196 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 07:52:55.123644 kubelet[2196]: I0702 07:52:55.123606 2196 topology_manager.go:215] "Topology Admit Handler" podUID="f4cd797e-2950-45d5-92f0-5c84ea6a8f0b" podNamespace="kube-system" podName="coredns-5dd5756b68-2qgxn" Jul 2 07:52:55.123843 kubelet[2196]: I0702 07:52:55.123828 2196 topology_manager.go:215] "Topology Admit Handler" podUID="b68740e1-4f46-4539-989b-e08355062450" podNamespace="kube-system" podName="coredns-5dd5756b68-gw2hm" Jul 2 07:52:55.129528 kubelet[2196]: I0702 07:52:55.129496 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mhz\" (UniqueName: \"kubernetes.io/projected/f4cd797e-2950-45d5-92f0-5c84ea6a8f0b-kube-api-access-s5mhz\") pod \"coredns-5dd5756b68-2qgxn\" (UID: \"f4cd797e-2950-45d5-92f0-5c84ea6a8f0b\") " pod="kube-system/coredns-5dd5756b68-2qgxn" Jul 2 07:52:55.129528 kubelet[2196]: I0702 07:52:55.129533 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6dw\" (UniqueName: \"kubernetes.io/projected/b68740e1-4f46-4539-989b-e08355062450-kube-api-access-7j6dw\") pod \"coredns-5dd5756b68-gw2hm\" (UID: \"b68740e1-4f46-4539-989b-e08355062450\") " pod="kube-system/coredns-5dd5756b68-gw2hm" Jul 2 07:52:55.129667 kubelet[2196]: I0702 07:52:55.129569 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f4cd797e-2950-45d5-92f0-5c84ea6a8f0b-config-volume\") pod \"coredns-5dd5756b68-2qgxn\" (UID: \"f4cd797e-2950-45d5-92f0-5c84ea6a8f0b\") " pod="kube-system/coredns-5dd5756b68-2qgxn" Jul 2 07:52:55.129667 kubelet[2196]: I0702 07:52:55.129587 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b68740e1-4f46-4539-989b-e08355062450-config-volume\") pod \"coredns-5dd5756b68-gw2hm\" (UID: \"b68740e1-4f46-4539-989b-e08355062450\") " pod="kube-system/coredns-5dd5756b68-gw2hm" Jul 2 07:52:55.426837 kubelet[2196]: E0702 07:52:55.426801 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:55.427482 env[1298]: time="2024-07-02T07:52:55.427423674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2qgxn,Uid:f4cd797e-2950-45d5-92f0-5c84ea6a8f0b,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:55.432896 kubelet[2196]: E0702 07:52:55.432857 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:55.433272 env[1298]: time="2024-07-02T07:52:55.433224228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gw2hm,Uid:b68740e1-4f46-4539-989b-e08355062450,Namespace:kube-system,Attempt:0,}" Jul 2 07:52:55.927381 kubelet[2196]: E0702 07:52:55.927346 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:55.938392 kubelet[2196]: I0702 07:52:55.938356 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dzsdw" podStartSLOduration=6.190421257 podCreationTimestamp="2024-07-02 07:52:39 +0000 UTC" firstStartedPulling="2024-07-02 07:52:40.213343566 +0000 UTC m=+13.640143567" lastFinishedPulling="2024-07-02 07:52:50.961232195 +0000 UTC m=+24.388032196" observedRunningTime="2024-07-02 07:52:55.937013619 +0000 UTC m=+29.363813621" watchObservedRunningTime="2024-07-02 07:52:55.938309886 +0000 UTC m=+29.365109888" Jul 2 07:52:56.928677 kubelet[2196]: E0702 07:52:56.928648 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:56.952961 systemd-networkd[1072]: cilium_host: Link UP Jul 2 07:52:56.953158 systemd-networkd[1072]: cilium_net: Link UP Jul 2 07:52:56.953162 systemd-networkd[1072]: cilium_net: Gained carrier Jul 2 07:52:56.953350 systemd-networkd[1072]: cilium_host: Gained carrier Jul 2 07:52:56.957287 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:52:56.955919 systemd-networkd[1072]: cilium_host: Gained IPv6LL Jul 2 07:52:57.021635 systemd-networkd[1072]: cilium_vxlan: Link UP Jul 2 07:52:57.021640 systemd-networkd[1072]: cilium_vxlan: Gained carrier Jul 2 07:52:57.213057 kernel: NET: Registered PF_ALG protocol family Jul 2 07:52:57.748963 systemd-networkd[1072]: lxc_health: Link UP Jul 2 07:52:57.757946 systemd-networkd[1072]: lxc_health: Gained carrier Jul 2 07:52:57.758053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:52:57.930345 kubelet[2196]: E0702 07:52:57.930309 2196 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:57.954162 systemd-networkd[1072]: cilium_net: Gained IPv6LL Jul 2 07:52:57.984422 systemd-networkd[1072]: lxcddf70121f738: Link UP Jul 2 07:52:57.994123 kernel: eth0: renamed from tmp9641e Jul 2 07:52:58.005867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:52:58.005946 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcddf70121f738: link becomes ready Jul 2 07:52:58.006089 systemd-networkd[1072]: lxcddf70121f738: Gained carrier Jul 2 07:52:58.006368 systemd-networkd[1072]: lxc877b2e485010: Link UP Jul 2 07:52:58.018083 kernel: eth0: renamed from tmpadfeb Jul 2 07:52:58.023770 systemd-networkd[1072]: lxc877b2e485010: Gained carrier Jul 2 07:52:58.024243 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc877b2e485010: link becomes ready Jul 2 07:52:58.075293 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:54082.service. Jul 2 07:52:58.115466 sshd[3407]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:52:58.116595 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:58.119991 systemd-logind[1283]: New session 9 of user core. Jul 2 07:52:58.121259 systemd[1]: Started session-9.scope. Jul 2 07:52:58.332686 sshd[3407]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:58.335594 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:54082.service: Deactivated successfully. Jul 2 07:52:58.336517 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:52:58.337695 systemd-logind[1283]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:52:58.338726 systemd-logind[1283]: Removed session 9. Jul 2 07:52:58.658165 systemd-networkd[1072]: cilium_vxlan: Gained IPv6LL Jul 2 07:52:58.931922 kubelet[2196]: E0702 07:52:58.931807 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:52:59.052345 systemd-networkd[1072]: lxcddf70121f738: Gained IPv6LL Jul 2 07:52:59.106189 systemd-networkd[1072]: lxc877b2e485010: Gained IPv6LL Jul 2 07:52:59.298164 systemd-networkd[1072]: lxc_health: Gained IPv6LL Jul 2 07:52:59.936700 kubelet[2196]: E0702 07:52:59.936674 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:01.541653 env[1298]: time="2024-07-02T07:53:01.541574198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:53:01.541653 env[1298]: time="2024-07-02T07:53:01.541619002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:53:01.541653 env[1298]: time="2024-07-02T07:53:01.541629622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:53:01.542178 env[1298]: time="2024-07-02T07:53:01.541833365Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adfeb4af99709154cdfecec36983c869e14b5ebf39215b78addf498dbaae7faa pid=3454 runtime=io.containerd.runc.v2 Jul 2 07:53:01.566378 systemd-resolved[1212]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:53:01.584149 env[1298]: time="2024-07-02T07:53:01.583952200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:53:01.584149 env[1298]: time="2024-07-02T07:53:01.583991724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:53:01.584149 env[1298]: time="2024-07-02T07:53:01.584002614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:53:01.584401 env[1298]: time="2024-07-02T07:53:01.584233318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9641e13975a76c2d6533b9aea829e7fa3f0850244f0f5db4f5e84c8fb6410917 pid=3489 runtime=io.containerd.runc.v2 Jul 2 07:53:01.590953 env[1298]: time="2024-07-02T07:53:01.590885764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gw2hm,Uid:b68740e1-4f46-4539-989b-e08355062450,Namespace:kube-system,Attempt:0,} returns sandbox id \"adfeb4af99709154cdfecec36983c869e14b5ebf39215b78addf498dbaae7faa\"" Jul 2 07:53:01.591770 kubelet[2196]: E0702 07:53:01.591570 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:01.593415 env[1298]: time="2024-07-02T07:53:01.593311610Z" level=info msg="CreateContainer within sandbox \"adfeb4af99709154cdfecec36983c869e14b5ebf39215b78addf498dbaae7faa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:53:01.606620 systemd-resolved[1212]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:53:01.616010 env[1298]: time="2024-07-02T07:53:01.615973628Z" level=info msg="CreateContainer within sandbox \"adfeb4af99709154cdfecec36983c869e14b5ebf39215b78addf498dbaae7faa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"771127d865693184a4654a470a5ec69ea53d11aff702645ccbbc37f71557760e\"" Jul 2 07:53:01.618747 env[1298]: time="2024-07-02T07:53:01.618381731Z" level=info msg="StartContainer for \"771127d865693184a4654a470a5ec69ea53d11aff702645ccbbc37f71557760e\"" Jul 2 07:53:01.627513 env[1298]: time="2024-07-02T07:53:01.627471726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-2qgxn,Uid:f4cd797e-2950-45d5-92f0-5c84ea6a8f0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9641e13975a76c2d6533b9aea829e7fa3f0850244f0f5db4f5e84c8fb6410917\"" Jul 2 07:53:01.628425 kubelet[2196]: E0702 07:53:01.628365 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:01.631515 env[1298]: time="2024-07-02T07:53:01.631451261Z" level=info msg="CreateContainer within sandbox 
\"9641e13975a76c2d6533b9aea829e7fa3f0850244f0f5db4f5e84c8fb6410917\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:53:01.648762 env[1298]: time="2024-07-02T07:53:01.648708807Z" level=info msg="CreateContainer within sandbox \"9641e13975a76c2d6533b9aea829e7fa3f0850244f0f5db4f5e84c8fb6410917\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"473261f78d2bf371ebcc9e8cf782d0233cd87760d6d18d1b53985be1b39b627d\"" Jul 2 07:53:01.649457 env[1298]: time="2024-07-02T07:53:01.649416045Z" level=info msg="StartContainer for \"473261f78d2bf371ebcc9e8cf782d0233cd87760d6d18d1b53985be1b39b627d\"" Jul 2 07:53:01.667548 env[1298]: time="2024-07-02T07:53:01.667501307Z" level=info msg="StartContainer for \"771127d865693184a4654a470a5ec69ea53d11aff702645ccbbc37f71557760e\" returns successfully" Jul 2 07:53:01.712152 env[1298]: time="2024-07-02T07:53:01.709645148Z" level=info msg="StartContainer for \"473261f78d2bf371ebcc9e8cf782d0233cd87760d6d18d1b53985be1b39b627d\" returns successfully" Jul 2 07:53:01.941451 kubelet[2196]: E0702 07:53:01.941360 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:01.943634 kubelet[2196]: E0702 07:53:01.943609 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:01.948641 kubelet[2196]: I0702 07:53:01.948614 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2qgxn" podStartSLOduration=22.948578172 podCreationTimestamp="2024-07-02 07:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:53:01.948190074 +0000 UTC m=+35.374990095" watchObservedRunningTime="2024-07-02 07:53:01.948578172 +0000 UTC m=+35.375378174" Jul 2 07:53:01.965741 kubelet[2196]: I0702 07:53:01.965703 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gw2hm" podStartSLOduration=22.965651132 podCreationTimestamp="2024-07-02 07:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:53:01.965208641 +0000 UTC m=+35.392008632" watchObservedRunningTime="2024-07-02 07:53:01.965651132 +0000 UTC m=+35.392451133" Jul 2 07:53:02.945089 kubelet[2196]: E0702 07:53:02.945059 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:02.945549 kubelet[2196]: E0702 07:53:02.945146 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:03.335246 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:39802.service. Jul 2 07:53:03.377015 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 39802 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:03.378338 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:03.381831 systemd-logind[1283]: New session 10 of user core. Jul 2 07:53:03.382552 systemd[1]: Started session-10.scope. 
Jul 2 07:53:03.493110 sshd[3612]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:03.495438 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:39802.service: Deactivated successfully. Jul 2 07:53:03.496298 systemd-logind[1283]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:53:03.496320 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:53:03.496955 systemd-logind[1283]: Removed session 10. Jul 2 07:53:03.946759 kubelet[2196]: E0702 07:53:03.946734 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:03.947209 kubelet[2196]: E0702 07:53:03.946794 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:08.495366 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:39806.service. Jul 2 07:53:08.532962 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 39806 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:08.533824 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:08.536868 systemd-logind[1283]: New session 11 of user core. Jul 2 07:53:08.537577 systemd[1]: Started session-11.scope. Jul 2 07:53:08.638164 sshd[3627]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:08.640012 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:39806.service: Deactivated successfully. Jul 2 07:53:08.640853 systemd-logind[1283]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:53:08.640874 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:53:08.641701 systemd-logind[1283]: Removed session 11. Jul 2 07:53:13.641945 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:38436.service. Jul 2 07:53:13.681840 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 38436 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:13.683094 sshd[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:13.687327 systemd-logind[1283]: New session 12 of user core. Jul 2 07:53:13.688362 systemd[1]: Started session-12.scope. Jul 2 07:53:13.792871 sshd[3644]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:13.795336 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:38442.service. Jul 2 07:53:13.798051 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:38436.service: Deactivated successfully. Jul 2 07:53:13.799116 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:53:13.799497 systemd-logind[1283]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:53:13.800240 systemd-logind[1283]: Removed session 12. Jul 2 07:53:13.838298 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 38442 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:13.839457 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:13.842684 systemd-logind[1283]: New session 13 of user core. Jul 2 07:53:13.843378 systemd[1]: Started session-13.scope. Jul 2 07:53:14.518776 sshd[3658]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:14.521052 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:38458.service. Jul 2 07:53:14.521481 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:38442.service: Deactivated successfully. Jul 2 07:53:14.522532 systemd-logind[1283]: Session 13 logged out. 
Waiting for processes to exit. Jul 2 07:53:14.522540 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:53:14.523950 systemd-logind[1283]: Removed session 13. Jul 2 07:53:14.559830 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 38458 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:14.560827 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:14.563972 systemd-logind[1283]: New session 14 of user core. Jul 2 07:53:14.564706 systemd[1]: Started session-14.scope. Jul 2 07:53:14.667717 sshd[3670]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:14.670045 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:38458.service: Deactivated successfully. Jul 2 07:53:14.670972 systemd-logind[1283]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:53:14.671010 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:53:14.671686 systemd-logind[1283]: Removed session 14. Jul 2 07:53:19.670964 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:38460.service. Jul 2 07:53:19.708389 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 38460 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:19.709431 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:19.712635 systemd-logind[1283]: New session 15 of user core. Jul 2 07:53:19.713298 systemd[1]: Started session-15.scope. Jul 2 07:53:19.813088 sshd[3686]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:19.815515 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:38460.service: Deactivated successfully. Jul 2 07:53:19.816285 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:53:19.817003 systemd-logind[1283]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:53:19.817792 systemd-logind[1283]: Removed session 15. Jul 2 07:53:24.816325 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:39032.service. Jul 2 07:53:24.853050 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 39032 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:24.853838 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:24.856934 systemd-logind[1283]: New session 16 of user core. Jul 2 07:53:24.857888 systemd[1]: Started session-16.scope. Jul 2 07:53:24.956945 sshd[3700]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:24.959267 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:39046.service. Jul 2 07:53:24.959696 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:39032.service: Deactivated successfully. Jul 2 07:53:24.960487 systemd-logind[1283]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:53:24.960540 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:53:24.961376 systemd-logind[1283]: Removed session 16. Jul 2 07:53:24.997354 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 39046 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:24.998196 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:25.001123 systemd-logind[1283]: New session 17 of user core. Jul 2 07:53:25.001943 systemd[1]: Started session-17.scope. Jul 2 07:53:25.152824 sshd[3712]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:25.155023 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:39056.service. 
Jul 2 07:53:25.156616 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:39046.service: Deactivated successfully. Jul 2 07:53:25.157721 systemd-logind[1283]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:53:25.157771 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:53:25.158542 systemd-logind[1283]: Removed session 17. Jul 2 07:53:25.195265 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 39056 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:25.196228 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:25.199124 systemd-logind[1283]: New session 18 of user core. Jul 2 07:53:25.199824 systemd[1]: Started session-18.scope. Jul 2 07:53:25.934809 sshd[3724]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:25.937677 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:39060.service. Jul 2 07:53:25.939271 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:39056.service: Deactivated successfully. Jul 2 07:53:25.940734 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:53:25.941134 systemd-logind[1283]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:53:25.942504 systemd-logind[1283]: Removed session 18. Jul 2 07:53:25.979268 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 39060 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:25.980463 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:25.984549 systemd[1]: Started session-19.scope. Jul 2 07:53:25.984825 systemd-logind[1283]: New session 19 of user core. Jul 2 07:53:26.217011 sshd[3745]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:26.219275 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:39072.service. Jul 2 07:53:26.223133 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:39060.service: Deactivated successfully. Jul 2 07:53:26.224347 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:53:26.224790 systemd-logind[1283]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:53:26.226562 systemd-logind[1283]: Removed session 19. Jul 2 07:53:26.256805 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 39072 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:26.257712 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:26.260711 systemd-logind[1283]: New session 20 of user core. Jul 2 07:53:26.261528 systemd[1]: Started session-20.scope. Jul 2 07:53:26.360140 sshd[3759]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:26.362326 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:39072.service: Deactivated successfully. Jul 2 07:53:26.363059 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:53:26.363802 systemd-logind[1283]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:53:26.364492 systemd-logind[1283]: Removed session 20. Jul 2 07:53:31.364246 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:39084.service. Jul 2 07:53:31.404012 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 39084 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:31.405459 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:31.408995 systemd-logind[1283]: New session 21 of user core. Jul 2 07:53:31.409734 systemd[1]: Started session-21.scope. 
Jul 2 07:53:31.507786 sshd[3777]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:31.509968 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:39084.service: Deactivated successfully. Jul 2 07:53:31.510837 systemd-logind[1283]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:53:31.510856 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:53:31.511581 systemd-logind[1283]: Removed session 21. Jul 2 07:53:36.511261 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:50250.service. Jul 2 07:53:36.548398 sshd[3795]: Accepted publickey for core from 10.0.0.1 port 50250 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:36.549349 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:36.552546 systemd-logind[1283]: New session 22 of user core. Jul 2 07:53:36.553339 systemd[1]: Started session-22.scope. Jul 2 07:53:36.647784 sshd[3795]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:36.649568 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:50250.service: Deactivated successfully. Jul 2 07:53:36.650881 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:53:36.651313 systemd-logind[1283]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:53:36.651997 systemd-logind[1283]: Removed session 22. Jul 2 07:53:40.649448 kubelet[2196]: E0702 07:53:40.649407 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:41.648809 kubelet[2196]: E0702 07:53:41.648766 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:41.650549 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:50266.service. Jul 2 07:53:41.690059 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 50266 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:41.691260 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:41.694157 systemd-logind[1283]: New session 23 of user core. Jul 2 07:53:41.694809 systemd[1]: Started session-23.scope. Jul 2 07:53:41.792206 sshd[3811]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:41.794150 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:50266.service: Deactivated successfully. Jul 2 07:53:41.795134 systemd-logind[1283]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:53:41.795201 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:53:41.795927 systemd-logind[1283]: Removed session 23. Jul 2 07:53:46.795670 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:57920.service. Jul 2 07:53:46.836200 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 57920 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:46.837661 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.841786 systemd-logind[1283]: New session 24 of user core. Jul 2 07:53:46.842960 systemd[1]: Started session-24.scope. Jul 2 07:53:46.938532 sshd[3826]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:46.942010 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:57926.service. Jul 2 07:53:46.942705 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:57920.service: Deactivated successfully. 
Jul 2 07:53:46.943720 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:53:46.944400 systemd-logind[1283]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:53:46.945230 systemd-logind[1283]: Removed session 24. Jul 2 07:53:46.982025 sshd[3839]: Accepted publickey for core from 10.0.0.1 port 57926 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:46.983297 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.986582 systemd-logind[1283]: New session 25 of user core. Jul 2 07:53:46.987399 systemd[1]: Started session-25.scope. Jul 2 07:53:48.299730 env[1298]: time="2024-07-02T07:53:48.299479677Z" level=info msg="StopContainer for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" with timeout 30 (s)" Jul 2 07:53:48.300135 env[1298]: time="2024-07-02T07:53:48.299737247Z" level=info msg="Stop container \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" with signal terminated" Jul 2 07:53:48.308622 systemd[1]: run-containerd-runc-k8s.io-bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763-runc.mpiaL9.mount: Deactivated successfully. Jul 2 07:53:48.320587 env[1298]: time="2024-07-02T07:53:48.320521962Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:53:48.325188 env[1298]: time="2024-07-02T07:53:48.325158947Z" level=info msg="StopContainer for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" with timeout 2 (s)" Jul 2 07:53:48.325386 env[1298]: time="2024-07-02T07:53:48.325341395Z" level=info msg="Stop container \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" with signal terminated" Jul 2 07:53:48.327867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d-rootfs.mount: Deactivated successfully. 
Jul 2 07:53:48.330766 systemd-networkd[1072]: lxc_health: Link DOWN Jul 2 07:53:48.330773 systemd-networkd[1072]: lxc_health: Lost carrier Jul 2 07:53:48.340903 env[1298]: time="2024-07-02T07:53:48.340860596Z" level=info msg="shim disconnected" id=521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d Jul 2 07:53:48.341014 env[1298]: time="2024-07-02T07:53:48.340908827Z" level=warning msg="cleaning up after shim disconnected" id=521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d namespace=k8s.io Jul 2 07:53:48.341014 env[1298]: time="2024-07-02T07:53:48.340920269Z" level=info msg="cleaning up dead shim" Jul 2 07:53:48.347691 env[1298]: time="2024-07-02T07:53:48.347644362Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\n" Jul 2 07:53:48.350000 env[1298]: time="2024-07-02T07:53:48.349949263Z" level=info msg="StopContainer for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" returns successfully" Jul 2 07:53:48.350522 env[1298]: time="2024-07-02T07:53:48.350496997Z" level=info msg="StopPodSandbox for \"271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c\"" Jul 2 07:53:48.350598 env[1298]: time="2024-07-02T07:53:48.350548885Z" level=info msg="Container to stop \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:48.352476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c-shm.mount: Deactivated successfully. Jul 2 07:53:48.382819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763-rootfs.mount: Deactivated successfully. 
Jul 2 07:53:48.389634 env[1298]: time="2024-07-02T07:53:48.389554809Z" level=info msg="shim disconnected" id=bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763 Jul 2 07:53:48.389634 env[1298]: time="2024-07-02T07:53:48.389607309Z" level=warning msg="cleaning up after shim disconnected" id=bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763 namespace=k8s.io Jul 2 07:53:48.389634 env[1298]: time="2024-07-02T07:53:48.389617399Z" level=info msg="cleaning up dead shim" Jul 2 07:53:48.390140 env[1298]: time="2024-07-02T07:53:48.390090219Z" level=info msg="shim disconnected" id=271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c Jul 2 07:53:48.390249 env[1298]: time="2024-07-02T07:53:48.390207392Z" level=warning msg="cleaning up after shim disconnected" id=271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c namespace=k8s.io Jul 2 07:53:48.390249 env[1298]: time="2024-07-02T07:53:48.390229685Z" level=info msg="cleaning up dead shim" Jul 2 07:53:48.396613 env[1298]: time="2024-07-02T07:53:48.396575386Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3943 runtime=io.containerd.runc.v2\n" Jul 2 07:53:48.397279 env[1298]: time="2024-07-02T07:53:48.397245182Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3944 runtime=io.containerd.runc.v2\n" Jul 2 07:53:48.397515 env[1298]: time="2024-07-02T07:53:48.397492574Z" level=info msg="TearDown network for sandbox \"271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c\" successfully" Jul 2 07:53:48.397551 env[1298]: time="2024-07-02T07:53:48.397515017Z" level=info msg="StopPodSandbox for \"271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c\" returns successfully" Jul 2 07:53:48.398413 env[1298]: time="2024-07-02T07:53:48.398384864Z" level=info msg="StopContainer for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" returns successfully" Jul 2 07:53:48.398931 env[1298]: time="2024-07-02T07:53:48.398886500Z" level=info msg="StopPodSandbox for \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\"" Jul 2 07:53:48.398987 env[1298]: time="2024-07-02T07:53:48.398948979Z" level=info msg="Container to stop \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:48.398987 env[1298]: time="2024-07-02T07:53:48.398962645Z" level=info msg="Container to stop \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:48.398987 env[1298]: time="2024-07-02T07:53:48.398975911Z" level=info msg="Container to stop \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:48.398987 env[1298]: time="2024-07-02T07:53:48.398985739Z" level=info msg="Container to stop \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:48.399154 env[1298]: time="2024-07-02T07:53:48.398995137Z" level=info msg="Container to stop \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:48.421313 env[1298]: time="2024-07-02T07:53:48.421222590Z" level=info 
msg="shim disconnected" id=d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978 Jul 2 07:53:48.421564 env[1298]: time="2024-07-02T07:53:48.421522561Z" level=warning msg="cleaning up after shim disconnected" id=d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978 namespace=k8s.io Jul 2 07:53:48.421564 env[1298]: time="2024-07-02T07:53:48.421543502Z" level=info msg="cleaning up dead shim" Jul 2 07:53:48.427278 env[1298]: time="2024-07-02T07:53:48.427232521Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3988 runtime=io.containerd.runc.v2\n" Jul 2 07:53:48.427531 env[1298]: time="2024-07-02T07:53:48.427498999Z" level=info msg="TearDown network for sandbox \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" successfully" Jul 2 07:53:48.427531 env[1298]: time="2024-07-02T07:53:48.427523356Z" level=info msg="StopPodSandbox for \"d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978\" returns successfully" Jul 2 07:53:48.584879 kubelet[2196]: I0702 07:53:48.583872 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-hubble-tls\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.584879 kubelet[2196]: I0702 07:53:48.583907 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4168adb-880d-4d4c-95e3-86d844793795-cilium-config-path\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.584879 kubelet[2196]: I0702 07:53:48.583927 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cni-path\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.584879 kubelet[2196]: I0702 07:53:48.583943 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-run\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.584879 kubelet[2196]: I0702 07:53:48.583959 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-kernel\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.584879 kubelet[2196]: I0702 07:53:48.583975 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-cgroup\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585405 kubelet[2196]: I0702 07:53:48.583989 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-lib-modules\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585405 kubelet[2196]: I0702 07:53:48.584003 2196 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-net\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585405 kubelet[2196]: I0702 07:53:48.584023 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4168adb-880d-4d4c-95e3-86d844793795-clustermesh-secrets\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585405 kubelet[2196]: I0702 07:53:48.584055 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-hostproc\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585405 kubelet[2196]: I0702 07:53:48.584069 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-bpf-maps\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585405 kubelet[2196]: I0702 07:53:48.584087 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67a54086-fe4e-49ab-b58d-1d6147f5c40e-cilium-config-path\") pod \"67a54086-fe4e-49ab-b58d-1d6147f5c40e\" (UID: \"67a54086-fe4e-49ab-b58d-1d6147f5c40e\") " Jul 2 07:53:48.585587 kubelet[2196]: I0702 07:53:48.584105 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vkk\" (UniqueName: \"kubernetes.io/projected/67a54086-fe4e-49ab-b58d-1d6147f5c40e-kube-api-access-j9vkk\") pod \"67a54086-fe4e-49ab-b58d-1d6147f5c40e\" (UID: \"67a54086-fe4e-49ab-b58d-1d6147f5c40e\") " Jul 2 07:53:48.585587 kubelet[2196]: I0702 07:53:48.584120 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-etc-cni-netd\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585587 kubelet[2196]: I0702 07:53:48.584134 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-xtables-lock\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585587 kubelet[2196]: I0702 07:53:48.584150 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmz4d\" (UniqueName: \"kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-kube-api-access-lmz4d\") pod \"f4168adb-880d-4d4c-95e3-86d844793795\" (UID: \"f4168adb-880d-4d4c-95e3-86d844793795\") " Jul 2 07:53:48.585587 kubelet[2196]: I0702 07:53:48.584769 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.585721 kubelet[2196]: I0702 07:53:48.584804 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cni-path" (OuterVolumeSpecName: "cni-path") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.585721 kubelet[2196]: I0702 07:53:48.584818 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.585721 kubelet[2196]: I0702 07:53:48.585095 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-hostproc" (OuterVolumeSpecName: "hostproc") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.585721 kubelet[2196]: I0702 07:53:48.585115 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.586005 kubelet[2196]: I0702 07:53:48.585966 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4168adb-880d-4d4c-95e3-86d844793795-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:53:48.586236 kubelet[2196]: I0702 07:53:48.586219 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.586327 kubelet[2196]: I0702 07:53:48.586311 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.586422 kubelet[2196]: I0702 07:53:48.586406 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.586521 kubelet[2196]: I0702 07:53:48.586505 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.587073 kubelet[2196]: I0702 07:53:48.587046 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a54086-fe4e-49ab-b58d-1d6147f5c40e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "67a54086-fe4e-49ab-b58d-1d6147f5c40e" (UID: "67a54086-fe4e-49ab-b58d-1d6147f5c40e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:53:48.587126 kubelet[2196]: I0702 07:53:48.587076 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:48.587933 kubelet[2196]: I0702 07:53:48.587897 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4168adb-880d-4d4c-95e3-86d844793795-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:53:48.588309 kubelet[2196]: I0702 07:53:48.588274 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:48.589435 kubelet[2196]: I0702 07:53:48.589392 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a54086-fe4e-49ab-b58d-1d6147f5c40e-kube-api-access-j9vkk" (OuterVolumeSpecName: "kube-api-access-j9vkk") pod "67a54086-fe4e-49ab-b58d-1d6147f5c40e" (UID: "67a54086-fe4e-49ab-b58d-1d6147f5c40e"). InnerVolumeSpecName "kube-api-access-j9vkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:48.589435 kubelet[2196]: I0702 07:53:48.589423 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-kube-api-access-lmz4d" (OuterVolumeSpecName: "kube-api-access-lmz4d") pod "f4168adb-880d-4d4c-95e3-86d844793795" (UID: "f4168adb-880d-4d4c-95e3-86d844793795"). InnerVolumeSpecName "kube-api-access-lmz4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:48.684664 kubelet[2196]: I0702 07:53:48.684628 2196 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684664 kubelet[2196]: I0702 07:53:48.684656 2196 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684664 kubelet[2196]: I0702 07:53:48.684670 2196 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lmz4d\" (UniqueName: \"kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-kube-api-access-lmz4d\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684664 kubelet[2196]: I0702 07:53:48.684679 2196 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4168adb-880d-4d4c-95e3-86d844793795-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684697 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4168adb-880d-4d4c-95e3-86d844793795-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684709 2196 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684718 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684730 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684739 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684747 2196 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684756 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.684871 kubelet[2196]: I0702 07:53:48.684767 2196 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4168adb-880d-4d4c-95e3-86d844793795-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.685093 kubelet[2196]: I0702 07:53:48.684775 2196 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-hostproc\") on node \"localhost\" DevicePath 
\"\"" Jul 2 07:53:48.685093 kubelet[2196]: I0702 07:53:48.684785 2196 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4168adb-880d-4d4c-95e3-86d844793795-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.685093 kubelet[2196]: I0702 07:53:48.684793 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67a54086-fe4e-49ab-b58d-1d6147f5c40e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:48.685093 kubelet[2196]: I0702 07:53:48.684802 2196 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j9vkk\" (UniqueName: \"kubernetes.io/projected/67a54086-fe4e-49ab-b58d-1d6147f5c40e-kube-api-access-j9vkk\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:49.024453 kubelet[2196]: I0702 07:53:49.024423 2196 scope.go:117] "RemoveContainer" containerID="bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763" Jul 2 07:53:49.026435 env[1298]: time="2024-07-02T07:53:49.026386651Z" level=info msg="RemoveContainer for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\"" Jul 2 07:53:49.032385 env[1298]: time="2024-07-02T07:53:49.032352985Z" level=info msg="RemoveContainer for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" returns successfully" Jul 2 07:53:49.032576 kubelet[2196]: I0702 07:53:49.032557 2196 scope.go:117] "RemoveContainer" containerID="e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980" Jul 2 07:53:49.034168 env[1298]: time="2024-07-02T07:53:49.034127095Z" level=info msg="RemoveContainer for \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\"" Jul 2 07:53:49.037729 env[1298]: time="2024-07-02T07:53:49.037624145Z" level=info msg="RemoveContainer for \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\" returns successfully" Jul 2 07:53:49.037826 kubelet[2196]: I0702 07:53:49.037791 2196 scope.go:117] "RemoveContainer" containerID="c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73" Jul 2 07:53:49.039547 env[1298]: time="2024-07-02T07:53:49.039316679Z" level=info msg="RemoveContainer for \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\"" Jul 2 07:53:49.042366 env[1298]: time="2024-07-02T07:53:49.042341270Z" level=info msg="RemoveContainer for \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\" returns successfully" Jul 2 07:53:49.042830 kubelet[2196]: I0702 07:53:49.042801 2196 scope.go:117] "RemoveContainer" containerID="bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449" Jul 2 07:53:49.044108 env[1298]: time="2024-07-02T07:53:49.044079882Z" level=info msg="RemoveContainer for \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\"" Jul 2 07:53:49.050545 env[1298]: time="2024-07-02T07:53:49.050508236Z" level=info msg="RemoveContainer for \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\" returns successfully" Jul 2 07:53:49.050876 kubelet[2196]: I0702 07:53:49.050853 2196 scope.go:117] "RemoveContainer" containerID="db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846" Jul 2 07:53:49.051710 env[1298]: time="2024-07-02T07:53:49.051671842Z" level=info msg="RemoveContainer for \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\"" Jul 2 07:53:49.054344 env[1298]: time="2024-07-02T07:53:49.054315027Z" level=info msg="RemoveContainer for \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\" 
returns successfully" Jul 2 07:53:49.054505 kubelet[2196]: I0702 07:53:49.054475 2196 scope.go:117] "RemoveContainer" containerID="bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763" Jul 2 07:53:49.054751 env[1298]: time="2024-07-02T07:53:49.054661667Z" level=error msg="ContainerStatus for \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\": not found" Jul 2 07:53:49.054874 kubelet[2196]: E0702 07:53:49.054855 2196 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\": not found" containerID="bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763" Jul 2 07:53:49.054956 kubelet[2196]: I0702 07:53:49.054941 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763"} err="failed to get container status \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdc625637839b8f628abd843024e2c6f6065ecae7b4f644831cf6fa9ddfcf763\": not found" Jul 2 07:53:49.054956 kubelet[2196]: I0702 07:53:49.054956 2196 scope.go:117] "RemoveContainer" containerID="e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980" Jul 2 07:53:49.055139 env[1298]: time="2024-07-02T07:53:49.055096555Z" level=error msg="ContainerStatus for \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\": not found" Jul 2 07:53:49.055291 kubelet[2196]: E0702 07:53:49.055268 2196 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\": not found" containerID="e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980" Jul 2 07:53:49.055341 kubelet[2196]: I0702 07:53:49.055293 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980"} err="failed to get container status \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\": rpc error: code = NotFound desc = an error occurred when try to find container \"e38a497698ea47ccf242be29672b9b05fd79d405988020e510fc7b3d4a06d980\": not found" Jul 2 07:53:49.055341 kubelet[2196]: I0702 07:53:49.055304 2196 scope.go:117] "RemoveContainer" containerID="c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73" Jul 2 07:53:49.055497 env[1298]: time="2024-07-02T07:53:49.055443846Z" level=error msg="ContainerStatus for \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\": not found" Jul 2 07:53:49.055639 kubelet[2196]: E0702 07:53:49.055621 2196 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to 
find container \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\": not found" containerID="c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73" Jul 2 07:53:49.055712 kubelet[2196]: I0702 07:53:49.055655 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73"} err="failed to get container status \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\": rpc error: code = NotFound desc = an error occurred when try to find container \"c554297b8a8b12502b5f7ecd9e836f690470b8a07b4ac64d85886046573a4c73\": not found" Jul 2 07:53:49.055712 kubelet[2196]: I0702 07:53:49.055666 2196 scope.go:117] "RemoveContainer" containerID="bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449" Jul 2 07:53:49.055857 env[1298]: time="2024-07-02T07:53:49.055817879Z" level=error msg="ContainerStatus for \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\": not found" Jul 2 07:53:49.055948 kubelet[2196]: E0702 07:53:49.055933 2196 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\": not found" containerID="bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449" Jul 2 07:53:49.055998 kubelet[2196]: I0702 07:53:49.055952 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449"} err="failed to get container status \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcc2b2ac397087dc3c2660b692eb934c4cc6991e7406303ed55ffc7c2ea49449\": not found" Jul 2 07:53:49.055998 kubelet[2196]: I0702 07:53:49.055960 2196 scope.go:117] "RemoveContainer" containerID="db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846" Jul 2 07:53:49.056121 env[1298]: time="2024-07-02T07:53:49.056080900Z" level=error msg="ContainerStatus for \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\": not found" Jul 2 07:53:49.056244 kubelet[2196]: E0702 07:53:49.056229 2196 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\": not found" containerID="db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846" Jul 2 07:53:49.056287 kubelet[2196]: I0702 07:53:49.056250 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846"} err="failed to get container status \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\": rpc error: code = NotFound desc = an error occurred when try to find container \"db227f5581eb17d36ddf21341ec506c032b983fc6fdc28aec52f42e9f906d846\": not found" Jul 2 07:53:49.056287 kubelet[2196]: I0702 07:53:49.056260 2196 scope.go:117] 
"RemoveContainer" containerID="521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d" Jul 2 07:53:49.057187 env[1298]: time="2024-07-02T07:53:49.057162901Z" level=info msg="RemoveContainer for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\"" Jul 2 07:53:49.059561 env[1298]: time="2024-07-02T07:53:49.059532545Z" level=info msg="RemoveContainer for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" returns successfully" Jul 2 07:53:49.059720 kubelet[2196]: I0702 07:53:49.059695 2196 scope.go:117] "RemoveContainer" containerID="521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d" Jul 2 07:53:49.059907 env[1298]: time="2024-07-02T07:53:49.059855179Z" level=error msg="ContainerStatus for \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\": not found" Jul 2 07:53:49.060068 kubelet[2196]: E0702 07:53:49.059991 2196 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\": not found" containerID="521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d" Jul 2 07:53:49.060068 kubelet[2196]: I0702 07:53:49.060024 2196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d"} err="failed to get container status \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\": rpc error: code = NotFound desc = an error occurred when try to find container \"521c4838209799ebc57dbee35d35457b0467ba0a2feb7de2ee4f6ef5867e444d\": not found" Jul 2 07:53:49.306383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978-rootfs.mount: Deactivated successfully. Jul 2 07:53:49.306569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6dcade0c7a7a8e3779baddb9040d8687ef7569bfdef17ad8f3bcf88a8491978-shm.mount: Deactivated successfully. Jul 2 07:53:49.306709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-271bc19db88f6da4f7da4b469462107ca8da20024d13fda40f20d59ced18027c-rootfs.mount: Deactivated successfully. Jul 2 07:53:49.306833 systemd[1]: var-lib-kubelet-pods-f4168adb\x2d880d\x2d4d4c\x2d95e3\x2d86d844793795-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlmz4d.mount: Deactivated successfully. Jul 2 07:53:49.306958 systemd[1]: var-lib-kubelet-pods-67a54086\x2dfe4e\x2d49ab\x2db58d\x2d1d6147f5c40e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9vkk.mount: Deactivated successfully. Jul 2 07:53:49.307127 systemd[1]: var-lib-kubelet-pods-f4168adb\x2d880d\x2d4d4c\x2d95e3\x2d86d844793795-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:53:49.307255 systemd[1]: var-lib-kubelet-pods-f4168adb\x2d880d\x2d4d4c\x2d95e3\x2d86d844793795-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:53:50.274437 sshd[3839]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:50.277407 systemd[1]: Started sshd@25-10.0.0.125:22-10.0.0.1:57928.service. Jul 2 07:53:50.277985 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:57926.service: Deactivated successfully. 
Jul 2 07:53:50.279149 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 07:53:50.279754 systemd-logind[1283]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:53:50.280672 systemd-logind[1283]: Removed session 25. Jul 2 07:53:50.317425 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 57928 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:50.318403 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:50.321708 systemd-logind[1283]: New session 26 of user core. Jul 2 07:53:50.322474 systemd[1]: Started session-26.scope. Jul 2 07:53:50.651131 kubelet[2196]: I0702 07:53:50.650988 2196 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="67a54086-fe4e-49ab-b58d-1d6147f5c40e" path="/var/lib/kubelet/pods/67a54086-fe4e-49ab-b58d-1d6147f5c40e/volumes" Jul 2 07:53:50.651533 kubelet[2196]: I0702 07:53:50.651392 2196 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f4168adb-880d-4d4c-95e3-86d844793795" path="/var/lib/kubelet/pods/f4168adb-880d-4d4c-95e3-86d844793795/volumes" Jul 2 07:53:50.737720 sshd[4007]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:50.740604 systemd[1]: Started sshd@26-10.0.0.125:22-10.0.0.1:57932.service. Jul 2 07:53:50.742931 systemd-logind[1283]: Session 26 logged out. Waiting for processes to exit. Jul 2 07:53:50.744333 systemd[1]: sshd@25-10.0.0.125:22-10.0.0.1:57928.service: Deactivated successfully. Jul 2 07:53:50.745245 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 07:53:50.746712 systemd-logind[1283]: Removed session 26. Jul 2 07:53:50.748000 kubelet[2196]: I0702 07:53:50.747953 2196 topology_manager.go:215] "Topology Admit Handler" podUID="ead03807-134a-4e6c-8ce6-595774b12f88" podNamespace="kube-system" podName="cilium-7phdh" Jul 2 07:53:50.748722 kubelet[2196]: E0702 07:53:50.748674 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67a54086-fe4e-49ab-b58d-1d6147f5c40e" containerName="cilium-operator" Jul 2 07:53:50.748779 kubelet[2196]: E0702 07:53:50.748743 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4168adb-880d-4d4c-95e3-86d844793795" containerName="cilium-agent" Jul 2 07:53:50.748779 kubelet[2196]: E0702 07:53:50.748755 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4168adb-880d-4d4c-95e3-86d844793795" containerName="mount-cgroup" Jul 2 07:53:50.748779 kubelet[2196]: E0702 07:53:50.748768 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4168adb-880d-4d4c-95e3-86d844793795" containerName="apply-sysctl-overwrites" Jul 2 07:53:50.748779 kubelet[2196]: E0702 07:53:50.748776 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4168adb-880d-4d4c-95e3-86d844793795" containerName="mount-bpf-fs" Jul 2 07:53:50.748871 kubelet[2196]: E0702 07:53:50.748787 2196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4168adb-880d-4d4c-95e3-86d844793795" containerName="clean-cilium-state" Jul 2 07:53:50.748871 kubelet[2196]: I0702 07:53:50.748837 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="67a54086-fe4e-49ab-b58d-1d6147f5c40e" containerName="cilium-operator" Jul 2 07:53:50.748871 kubelet[2196]: I0702 07:53:50.748844 2196 memory_manager.go:346] "RemoveStaleState removing state" podUID="f4168adb-880d-4d4c-95e3-86d844793795" containerName="cilium-agent" Jul 2 07:53:50.793945 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 57932 ssh2: RSA 
SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:50.795173 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:50.798444 systemd-logind[1283]: New session 27 of user core. Jul 2 07:53:50.799292 systemd[1]: Started session-27.scope. Jul 2 07:53:50.894329 kubelet[2196]: I0702 07:53:50.894286 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-cgroup\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894329 kubelet[2196]: I0702 07:53:50.894324 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-lib-modules\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894329 kubelet[2196]: I0702 07:53:50.894341 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-xtables-lock\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894573 kubelet[2196]: I0702 07:53:50.894419 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m46v\" (UniqueName: \"kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-kube-api-access-8m46v\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894573 kubelet[2196]: I0702 07:53:50.894449 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-bpf-maps\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894573 kubelet[2196]: I0702 07:53:50.894468 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-clustermesh-secrets\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894573 kubelet[2196]: I0702 07:53:50.894487 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cni-path\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894573 kubelet[2196]: I0702 07:53:50.894508 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-etc-cni-netd\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894573 kubelet[2196]: I0702 07:53:50.894544 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-hubble-tls\") pod \"cilium-7phdh\" (UID: 
\"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894725 kubelet[2196]: I0702 07:53:50.894564 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-kernel\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894725 kubelet[2196]: I0702 07:53:50.894584 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-hostproc\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894725 kubelet[2196]: I0702 07:53:50.894633 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-config-path\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894725 kubelet[2196]: I0702 07:53:50.894662 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-run\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894815 kubelet[2196]: I0702 07:53:50.894719 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-ipsec-secrets\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.894815 kubelet[2196]: I0702 07:53:50.894754 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-net\") pod \"cilium-7phdh\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " pod="kube-system/cilium-7phdh" Jul 2 07:53:50.908988 systemd[1]: Started sshd@27-10.0.0.125:22-10.0.0.1:57948.service. Jul 2 07:53:50.914975 sshd[4019]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:50.917780 systemd[1]: sshd@26-10.0.0.125:22-10.0.0.1:57932.service: Deactivated successfully. Jul 2 07:53:50.918711 systemd-logind[1283]: Session 27 logged out. Waiting for processes to exit. Jul 2 07:53:50.918754 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 07:53:50.919786 systemd-logind[1283]: Removed session 27. 
Jul 2 07:53:50.923518 kubelet[2196]: E0702 07:53:50.923494 2196 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-8m46v lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-7phdh" podUID="ead03807-134a-4e6c-8ce6-595774b12f88" Jul 2 07:53:50.953257 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 57948 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:53:50.954371 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:50.957526 systemd-logind[1283]: New session 28 of user core. Jul 2 07:53:50.958240 systemd[1]: Started session-28.scope. Jul 2 07:53:51.096946 kubelet[2196]: I0702 07:53:51.096912 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-etc-cni-netd\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.096946 kubelet[2196]: I0702 07:53:51.096952 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-hubble-tls\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.096946 kubelet[2196]: I0702 07:53:51.096967 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-hostproc\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097185 kubelet[2196]: I0702 07:53:51.096985 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-ipsec-secrets\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097185 kubelet[2196]: I0702 07:53:51.097000 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-xtables-lock\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097185 kubelet[2196]: I0702 07:53:51.097017 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-clustermesh-secrets\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097185 kubelet[2196]: I0702 07:53:51.097048 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cni-path\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097185 kubelet[2196]: I0702 07:53:51.097069 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-config-path\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097185 kubelet[2196]: I0702 07:53:51.097095 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-bpf-maps\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097311 kubelet[2196]: I0702 07:53:51.097111 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-run\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097311 kubelet[2196]: I0702 07:53:51.097129 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m46v\" (UniqueName: \"kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-kube-api-access-8m46v\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097311 kubelet[2196]: I0702 07:53:51.097115 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097311 kubelet[2196]: I0702 07:53:51.097144 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-cgroup\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097311 kubelet[2196]: I0702 07:53:51.097177 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097422 kubelet[2196]: I0702 07:53:51.097210 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-lib-modules\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097422 kubelet[2196]: I0702 07:53:51.097236 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-net\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097422 kubelet[2196]: I0702 07:53:51.097253 2196 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-kernel\") pod \"ead03807-134a-4e6c-8ce6-595774b12f88\" (UID: \"ead03807-134a-4e6c-8ce6-595774b12f88\") " Jul 2 07:53:51.097422 kubelet[2196]: I0702 07:53:51.097295 2196 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.097422 kubelet[2196]: I0702 07:53:51.097307 2196 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.097422 kubelet[2196]: I0702 07:53:51.097324 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097553 kubelet[2196]: I0702 07:53:51.097166 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097553 kubelet[2196]: I0702 07:53:51.097349 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097553 kubelet[2196]: I0702 07:53:51.097363 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097553 kubelet[2196]: I0702 07:53:51.097378 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-hostproc" (OuterVolumeSpecName: "hostproc") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097553 kubelet[2196]: I0702 07:53:51.097391 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.097667 kubelet[2196]: I0702 07:53:51.097405 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cni-path" (OuterVolumeSpecName: "cni-path") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.098905 kubelet[2196]: I0702 07:53:51.098877 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:53:51.098952 kubelet[2196]: I0702 07:53:51.098906 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:51.099565 kubelet[2196]: I0702 07:53:51.099535 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:51.100433 kubelet[2196]: I0702 07:53:51.100416 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:53:51.100981 kubelet[2196]: I0702 07:53:51.100956 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-kube-api-access-8m46v" (OuterVolumeSpecName: "kube-api-access-8m46v") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "kube-api-access-8m46v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:51.100995 systemd[1]: var-lib-kubelet-pods-ead03807\x2d134a\x2d4e6c\x2d8ce6\x2d595774b12f88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:53:51.101158 systemd[1]: var-lib-kubelet-pods-ead03807\x2d134a\x2d4e6c\x2d8ce6\x2d595774b12f88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:53:51.102249 kubelet[2196]: I0702 07:53:51.102224 2196 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ead03807-134a-4e6c-8ce6-595774b12f88" (UID: "ead03807-134a-4e6c-8ce6-595774b12f88"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198079 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198104 2196 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198114 2196 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198141 2196 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198149 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198159 2196 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8m46v\" (UniqueName: \"kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-kube-api-access-8m46v\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198166 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198175 kubelet[2196]: I0702 07:53:51.198174 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198445 kubelet[2196]: I0702 07:53:51.198182 2196 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198445 kubelet[2196]: I0702 07:53:51.198190 2196 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-lib-modules\") on 
node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198445 kubelet[2196]: I0702 07:53:51.198197 2196 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ead03807-134a-4e6c-8ce6-595774b12f88-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198445 kubelet[2196]: I0702 07:53:51.198204 2196 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ead03807-134a-4e6c-8ce6-595774b12f88-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.198445 kubelet[2196]: I0702 07:53:51.198212 2196 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ead03807-134a-4e6c-8ce6-595774b12f88-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:53:51.729510 kubelet[2196]: E0702 07:53:51.729485 2196 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:53:52.000155 systemd[1]: var-lib-kubelet-pods-ead03807\x2d134a\x2d4e6c\x2d8ce6\x2d595774b12f88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8m46v.mount: Deactivated successfully. Jul 2 07:53:52.000288 systemd[1]: var-lib-kubelet-pods-ead03807\x2d134a\x2d4e6c\x2d8ce6\x2d595774b12f88-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:53:52.056956 kubelet[2196]: I0702 07:53:52.056924 2196 topology_manager.go:215] "Topology Admit Handler" podUID="58efb933-0657-4e10-a585-9fb1bc9dd5fc" podNamespace="kube-system" podName="cilium-l4qmq" Jul 2 07:53:52.103588 kubelet[2196]: I0702 07:53:52.103536 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-hostproc\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103588 kubelet[2196]: I0702 07:53:52.103597 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-cilium-cgroup\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103795 kubelet[2196]: I0702 07:53:52.103620 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-etc-cni-netd\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103795 kubelet[2196]: I0702 07:53:52.103645 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58efb933-0657-4e10-a585-9fb1bc9dd5fc-cilium-config-path\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103795 kubelet[2196]: I0702 07:53:52.103672 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-host-proc-sys-kernel\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " 
pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103795 kubelet[2196]: I0702 07:53:52.103704 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58efb933-0657-4e10-a585-9fb1bc9dd5fc-hubble-tls\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103795 kubelet[2196]: I0702 07:53:52.103735 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-bpf-maps\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103795 kubelet[2196]: I0702 07:53:52.103757 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-cni-path\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103932 kubelet[2196]: I0702 07:53:52.103779 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-lib-modules\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103932 kubelet[2196]: I0702 07:53:52.103805 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/58efb933-0657-4e10-a585-9fb1bc9dd5fc-cilium-ipsec-secrets\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103932 kubelet[2196]: I0702 07:53:52.103829 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-xtables-lock\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103932 kubelet[2196]: I0702 07:53:52.103854 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58efb933-0657-4e10-a585-9fb1bc9dd5fc-clustermesh-secrets\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.103932 kubelet[2196]: I0702 07:53:52.103879 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-host-proc-sys-net\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.104060 kubelet[2196]: I0702 07:53:52.103904 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztv4h\" (UniqueName: \"kubernetes.io/projected/58efb933-0657-4e10-a585-9fb1bc9dd5fc-kube-api-access-ztv4h\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.104060 kubelet[2196]: I0702 07:53:52.103926 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58efb933-0657-4e10-a585-9fb1bc9dd5fc-cilium-run\") pod \"cilium-l4qmq\" (UID: \"58efb933-0657-4e10-a585-9fb1bc9dd5fc\") " pod="kube-system/cilium-l4qmq" Jul 2 07:53:52.359367 kubelet[2196]: E0702 07:53:52.359257 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:52.359942 env[1298]: time="2024-07-02T07:53:52.359846911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4qmq,Uid:58efb933-0657-4e10-a585-9fb1bc9dd5fc,Namespace:kube-system,Attempt:0,}" Jul 2 07:53:52.375682 env[1298]: time="2024-07-02T07:53:52.375619510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:53:52.375682 env[1298]: time="2024-07-02T07:53:52.375656711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:53:52.375682 env[1298]: time="2024-07-02T07:53:52.375667142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:53:52.375883 env[1298]: time="2024-07-02T07:53:52.375804603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b pid=4066 runtime=io.containerd.runc.v2 Jul 2 07:53:52.406223 env[1298]: time="2024-07-02T07:53:52.406178504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4qmq,Uid:58efb933-0657-4e10-a585-9fb1bc9dd5fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\"" Jul 2 07:53:52.407015 kubelet[2196]: E0702 07:53:52.406983 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:52.410537 env[1298]: time="2024-07-02T07:53:52.410491881Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:53:52.422089 env[1298]: time="2024-07-02T07:53:52.422011409Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"55bd4c401b4a0a5068fb78bc47374e5df83f2332144092512e1c541a183f6ba3\"" Jul 2 07:53:52.422488 env[1298]: time="2024-07-02T07:53:52.422457858Z" level=info msg="StartContainer for \"55bd4c401b4a0a5068fb78bc47374e5df83f2332144092512e1c541a183f6ba3\"" Jul 2 07:53:52.459999 env[1298]: time="2024-07-02T07:53:52.459925757Z" level=info msg="StartContainer for \"55bd4c401b4a0a5068fb78bc47374e5df83f2332144092512e1c541a183f6ba3\" returns successfully" Jul 2 07:53:52.492843 env[1298]: time="2024-07-02T07:53:52.492779276Z" level=info msg="shim disconnected" id=55bd4c401b4a0a5068fb78bc47374e5df83f2332144092512e1c541a183f6ba3 Jul 2 07:53:52.492843 env[1298]: time="2024-07-02T07:53:52.492834201Z" level=warning msg="cleaning up after shim disconnected" id=55bd4c401b4a0a5068fb78bc47374e5df83f2332144092512e1c541a183f6ba3 namespace=k8s.io Jul 2 07:53:52.492843 env[1298]: time="2024-07-02T07:53:52.492845151Z" level=info msg="cleaning up dead shim" Jul 2 
07:53:52.498979 env[1298]: time="2024-07-02T07:53:52.498936070Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4151 runtime=io.containerd.runc.v2\n" Jul 2 07:53:52.650948 kubelet[2196]: I0702 07:53:52.650838 2196 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ead03807-134a-4e6c-8ce6-595774b12f88" path="/var/lib/kubelet/pods/ead03807-134a-4e6c-8ce6-595774b12f88/volumes" Jul 2 07:53:53.037968 kubelet[2196]: E0702 07:53:53.037922 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:53.040665 env[1298]: time="2024-07-02T07:53:53.040607401Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:53:53.050574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1549654164.mount: Deactivated successfully. Jul 2 07:53:53.054143 env[1298]: time="2024-07-02T07:53:53.054094755Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"905684dc068f064e28a0119c1b98d1539cc770e8dddf7f8a217dbe5ae1746723\"" Jul 2 07:53:53.054627 env[1298]: time="2024-07-02T07:53:53.054587011Z" level=info msg="StartContainer for \"905684dc068f064e28a0119c1b98d1539cc770e8dddf7f8a217dbe5ae1746723\"" Jul 2 07:53:53.178421 env[1298]: time="2024-07-02T07:53:53.178371271Z" level=info msg="StartContainer for \"905684dc068f064e28a0119c1b98d1539cc770e8dddf7f8a217dbe5ae1746723\" returns successfully" Jul 2 07:53:53.252979 env[1298]: time="2024-07-02T07:53:53.252923290Z" level=info msg="shim disconnected" id=905684dc068f064e28a0119c1b98d1539cc770e8dddf7f8a217dbe5ae1746723 Jul 2 07:53:53.252979 env[1298]: time="2024-07-02T07:53:53.252968788Z" level=warning msg="cleaning up after shim disconnected" id=905684dc068f064e28a0119c1b98d1539cc770e8dddf7f8a217dbe5ae1746723 namespace=k8s.io Jul 2 07:53:53.252979 env[1298]: time="2024-07-02T07:53:53.252978386Z" level=info msg="cleaning up dead shim" Jul 2 07:53:53.275399 env[1298]: time="2024-07-02T07:53:53.275336190Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4212 runtime=io.containerd.runc.v2\n" Jul 2 07:53:54.000345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-905684dc068f064e28a0119c1b98d1539cc770e8dddf7f8a217dbe5ae1746723-rootfs.mount: Deactivated successfully. 
Jul 2 07:53:54.040492 kubelet[2196]: E0702 07:53:54.040467 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:54.042569 env[1298]: time="2024-07-02T07:53:54.042519204Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:53:54.061125 env[1298]: time="2024-07-02T07:53:54.061070868Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6dbd73c75f293b7b2a491d5380309442d75ef2568fd481a8f223472e5c91836d\"" Jul 2 07:53:54.061562 env[1298]: time="2024-07-02T07:53:54.061537014Z" level=info msg="StartContainer for \"6dbd73c75f293b7b2a491d5380309442d75ef2568fd481a8f223472e5c91836d\"" Jul 2 07:53:54.102674 env[1298]: time="2024-07-02T07:53:54.102629768Z" level=info msg="StartContainer for \"6dbd73c75f293b7b2a491d5380309442d75ef2568fd481a8f223472e5c91836d\" returns successfully" Jul 2 07:53:54.125107 env[1298]: time="2024-07-02T07:53:54.125053644Z" level=info msg="shim disconnected" id=6dbd73c75f293b7b2a491d5380309442d75ef2568fd481a8f223472e5c91836d Jul 2 07:53:54.125107 env[1298]: time="2024-07-02T07:53:54.125104031Z" level=warning msg="cleaning up after shim disconnected" id=6dbd73c75f293b7b2a491d5380309442d75ef2568fd481a8f223472e5c91836d namespace=k8s.io Jul 2 07:53:54.125107 env[1298]: time="2024-07-02T07:53:54.125112747Z" level=info msg="cleaning up dead shim" Jul 2 07:53:54.132082 env[1298]: time="2024-07-02T07:53:54.132012637Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4268 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:53:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Jul 2 07:53:55.000376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dbd73c75f293b7b2a491d5380309442d75ef2568fd481a8f223472e5c91836d-rootfs.mount: Deactivated successfully. 
Jul 2 07:53:55.043705 kubelet[2196]: E0702 07:53:55.043670 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:55.045194 env[1298]: time="2024-07-02T07:53:55.045161854Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:53:55.058928 env[1298]: time="2024-07-02T07:53:55.058877515Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83f026a11a7e19a9fd3d71c8f714efb4337623032f8382ed05bcfaccfd9423b2\"" Jul 2 07:53:55.059445 env[1298]: time="2024-07-02T07:53:55.059407251Z" level=info msg="StartContainer for \"83f026a11a7e19a9fd3d71c8f714efb4337623032f8382ed05bcfaccfd9423b2\"" Jul 2 07:53:55.095088 env[1298]: time="2024-07-02T07:53:55.094422378Z" level=info msg="StartContainer for \"83f026a11a7e19a9fd3d71c8f714efb4337623032f8382ed05bcfaccfd9423b2\" returns successfully" Jul 2 07:53:55.113794 env[1298]: time="2024-07-02T07:53:55.113728796Z" level=info msg="shim disconnected" id=83f026a11a7e19a9fd3d71c8f714efb4337623032f8382ed05bcfaccfd9423b2 Jul 2 07:53:55.113794 env[1298]: time="2024-07-02T07:53:55.113772459Z" level=warning msg="cleaning up after shim disconnected" id=83f026a11a7e19a9fd3d71c8f714efb4337623032f8382ed05bcfaccfd9423b2 namespace=k8s.io Jul 2 07:53:55.113794 env[1298]: time="2024-07-02T07:53:55.113780425Z" level=info msg="cleaning up dead shim" Jul 2 07:53:55.120156 env[1298]: time="2024-07-02T07:53:55.120095288Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4324 runtime=io.containerd.runc.v2\n" Jul 2 07:53:55.648811 kubelet[2196]: E0702 07:53:55.648771 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:56.001058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83f026a11a7e19a9fd3d71c8f714efb4337623032f8382ed05bcfaccfd9423b2-rootfs.mount: Deactivated successfully. Jul 2 07:53:56.047474 kubelet[2196]: E0702 07:53:56.047451 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:56.050671 env[1298]: time="2024-07-02T07:53:56.050629841Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:53:56.062781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826069097.mount: Deactivated successfully. 
Jul 2 07:53:56.064808 env[1298]: time="2024-07-02T07:53:56.064752718Z" level=info msg="CreateContainer within sandbox \"dd1f8a071ba551e9bcc575643ad100bd114bdb9d7ce1c5eedcd4db188f11942b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290\"" Jul 2 07:53:56.065408 env[1298]: time="2024-07-02T07:53:56.065371764Z" level=info msg="StartContainer for \"a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290\"" Jul 2 07:53:56.110029 env[1298]: time="2024-07-02T07:53:56.109978616Z" level=info msg="StartContainer for \"a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290\" returns successfully" Jul 2 07:53:56.349073 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:53:57.001367 systemd[1]: run-containerd-runc-k8s.io-a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290-runc.KuaSDH.mount: Deactivated successfully. Jul 2 07:53:57.053174 kubelet[2196]: E0702 07:53:57.053138 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:58.360964 kubelet[2196]: E0702 07:53:58.360920 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:53:58.741346 systemd-networkd[1072]: lxc_health: Link UP Jul 2 07:53:58.758092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:53:58.757835 systemd-networkd[1072]: lxc_health: Gained carrier Jul 2 07:53:59.276808 systemd[1]: run-containerd-runc-k8s.io-a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290-runc.zxmhg0.mount: Deactivated successfully. Jul 2 07:54:00.361468 kubelet[2196]: E0702 07:54:00.361435 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:00.368343 systemd-networkd[1072]: lxc_health: Gained IPv6LL Jul 2 07:54:00.376651 kubelet[2196]: I0702 07:54:00.376612 2196 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-l4qmq" podStartSLOduration=8.376572001 podCreationTimestamp="2024-07-02 07:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:53:57.06766785 +0000 UTC m=+90.494467871" watchObservedRunningTime="2024-07-02 07:54:00.376572001 +0000 UTC m=+93.803372003" Jul 2 07:54:01.058622 kubelet[2196]: E0702 07:54:01.058581 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:01.395677 systemd[1]: run-containerd-runc-k8s.io-a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290-runc.a8U0Ny.mount: Deactivated successfully. 
Jul 2 07:54:01.648872 kubelet[2196]: E0702 07:54:01.648750 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:02.060358 kubelet[2196]: E0702 07:54:02.060325 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:03.486890 systemd[1]: run-containerd-runc-k8s.io-a9a5ca09e533e7a199613c4e36288ae7a0e69eb4c5ae39800118d6bffa7ea290-runc.fZzxFZ.mount: Deactivated successfully. Jul 2 07:54:05.608373 sshd[4034]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:05.610353 systemd[1]: sshd@27-10.0.0.125:22-10.0.0.1:57948.service: Deactivated successfully. Jul 2 07:54:05.611395 systemd-logind[1283]: Session 28 logged out. Waiting for processes to exit. Jul 2 07:54:05.611454 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 07:54:05.612324 systemd-logind[1283]: Removed session 28. Jul 2 07:54:06.649537 kubelet[2196]: E0702 07:54:06.649492 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"