Jul 2 07:44:04.796416 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 07:44:04.796438 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:44:04.796446 kernel: BIOS-provided physical RAM map:
Jul 2 07:44:04.796451 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 07:44:04.796456 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 07:44:04.796462 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 07:44:04.796468 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jul 2 07:44:04.796474 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jul 2 07:44:04.796480 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 07:44:04.796486 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 07:44:04.796491 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 2 07:44:04.796496 kernel: NX (Execute Disable) protection: active
Jul 2 07:44:04.796502 kernel: SMBIOS 2.8 present.
Jul 2 07:44:04.796507 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 2 07:44:04.796515 kernel: Hypervisor detected: KVM
Jul 2 07:44:04.796521 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 07:44:04.796527 kernel: kvm-clock: cpu 0, msr e192001, primary cpu clock
Jul 2 07:44:04.796533 kernel: kvm-clock: using sched offset of 2401062970 cycles
Jul 2 07:44:04.796539 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 07:44:04.796545 kernel: tsc: Detected 2794.748 MHz processor
Jul 2 07:44:04.796551 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 07:44:04.796558 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 07:44:04.796564 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jul 2 07:44:04.796571 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 07:44:04.796577 kernel: Using GB pages for direct mapping
Jul 2 07:44:04.796583 kernel: ACPI: Early table checksum verification disabled
Jul 2 07:44:04.796589 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jul 2 07:44:04.796595 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:44:04.796601 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:44:04.796607 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:44:04.796613 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 2 07:44:04.796619 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:44:04.796626 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:44:04.796632 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:44:04.796638 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jul 2 07:44:04.796644 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jul 2 07:44:04.796650 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 2 07:44:04.796655 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jul 2 07:44:04.796661 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jul 2 07:44:04.796667 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jul 2 07:44:04.796677 kernel: No NUMA configuration found
Jul 2 07:44:04.796683 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jul 2 07:44:04.796690 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jul 2 07:44:04.796696 kernel: Zone ranges:
Jul 2 07:44:04.796702 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 07:44:04.796709 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jul 2 07:44:04.796716 kernel: Normal empty
Jul 2 07:44:04.796732 kernel: Movable zone start for each node
Jul 2 07:44:04.796738 kernel: Early memory node ranges
Jul 2 07:44:04.796745 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 07:44:04.796752 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jul 2 07:44:04.796758 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jul 2 07:44:04.796764 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:44:04.796770 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 07:44:04.796777 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jul 2 07:44:04.796785 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 07:44:04.796791 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 07:44:04.796797 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 07:44:04.796804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 07:44:04.796810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 07:44:04.796816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 07:44:04.796823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 07:44:04.796829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 07:44:04.796835 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 07:44:04.796843 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 07:44:04.796849 kernel: TSC deadline timer available
Jul 2 07:44:04.796855 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 07:44:04.796861 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 07:44:04.796868 kernel: kvm-guest: setup PV sched yield
Jul 2 07:44:04.796874 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jul 2 07:44:04.796880 kernel: Booting paravirtualized kernel on KVM
Jul 2 07:44:04.796887 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 07:44:04.796893 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 2 07:44:04.796901 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 2 07:44:04.796907 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 2 07:44:04.796913 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 07:44:04.796919 kernel: kvm-guest: setup async PF for cpu 0
Jul 2 07:44:04.796925 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 2 07:44:04.796932 kernel: kvm-guest: PV spinlocks enabled
Jul 2 07:44:04.796938 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 07:44:04.796944 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jul 2 07:44:04.796951 kernel: Policy zone: DMA32
Jul 2 07:44:04.796958 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:44:04.796966 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 07:44:04.796973 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 07:44:04.796979 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 07:44:04.796986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 07:44:04.796992 kernel: Memory: 2436704K/2571756K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 134792K reserved, 0K cma-reserved)
Jul 2 07:44:04.796999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 07:44:04.797015 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 07:44:04.797021 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 07:44:04.797029 kernel: rcu: Hierarchical RCU implementation.
Jul 2 07:44:04.797036 kernel: rcu: RCU event tracing is enabled.
Jul 2 07:44:04.797042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 07:44:04.797049 kernel: Rude variant of Tasks RCU enabled.
Jul 2 07:44:04.797055 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 07:44:04.797062 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 07:44:04.797068 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 07:44:04.797074 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 07:44:04.797081 kernel: random: crng init done
Jul 2 07:44:04.797088 kernel: Console: colour VGA+ 80x25
Jul 2 07:44:04.797094 kernel: printk: console [ttyS0] enabled
Jul 2 07:44:04.797101 kernel: ACPI: Core revision 20210730
Jul 2 07:44:04.797107 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 07:44:04.797113 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 07:44:04.797120 kernel: x2apic enabled
Jul 2 07:44:04.797126 kernel: Switched APIC routing to physical x2apic.
Jul 2 07:44:04.797132 kernel: kvm-guest: setup PV IPIs
Jul 2 07:44:04.797138 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 07:44:04.797146 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 07:44:04.797153 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 2 07:44:04.797159 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 07:44:04.797165 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 07:44:04.797172 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 07:44:04.797178 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 07:44:04.797184 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 07:44:04.797191 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 07:44:04.797198 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 07:44:04.797210 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 07:44:04.797216 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 07:44:04.797223 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 07:44:04.797231 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 07:44:04.797238 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 07:44:04.797245 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 07:44:04.797251 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 07:44:04.797258 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 07:44:04.797265 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 07:44:04.797273 kernel: Freeing SMP alternatives memory: 32K
Jul 2 07:44:04.797280 kernel: pid_max: default: 32768 minimum: 301
Jul 2 07:44:04.797287 kernel: LSM: Security Framework initializing
Jul 2 07:44:04.797293 kernel: SELinux: Initializing.
Jul 2 07:44:04.797300 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 07:44:04.797307 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 07:44:04.797314 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 07:44:04.797322 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 07:44:04.797328 kernel: ... version: 0
Jul 2 07:44:04.797335 kernel: ... bit width: 48
Jul 2 07:44:04.797342 kernel: ... generic registers: 6
Jul 2 07:44:04.797348 kernel: ... value mask: 0000ffffffffffff
Jul 2 07:44:04.797355 kernel: ... max period: 00007fffffffffff
Jul 2 07:44:04.797362 kernel: ... fixed-purpose events: 0
Jul 2 07:44:04.797370 kernel: ... event mask: 000000000000003f
Jul 2 07:44:04.797377 kernel: signal: max sigframe size: 1776
Jul 2 07:44:04.797386 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 07:44:04.797394 kernel: smp: Bringing up secondary CPUs ...
Jul 2 07:44:04.797401 kernel: x86: Booting SMP configuration:
Jul 2 07:44:04.797407 kernel: .... node #0, CPUs: #1
Jul 2 07:44:04.797414 kernel: kvm-clock: cpu 1, msr e192041, secondary cpu clock
Jul 2 07:44:04.797421 kernel: kvm-guest: setup async PF for cpu 1
Jul 2 07:44:04.797427 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 2 07:44:04.797434 kernel: #2
Jul 2 07:44:04.797441 kernel: kvm-clock: cpu 2, msr e192081, secondary cpu clock
Jul 2 07:44:04.797447 kernel: kvm-guest: setup async PF for cpu 2
Jul 2 07:44:04.797455 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 2 07:44:04.797462 kernel: #3
Jul 2 07:44:04.797468 kernel: kvm-clock: cpu 3, msr e1920c1, secondary cpu clock
Jul 2 07:44:04.797475 kernel: kvm-guest: setup async PF for cpu 3
Jul 2 07:44:04.797482 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 2 07:44:04.797488 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 07:44:04.797495 kernel: smpboot: Max logical packages: 1
Jul 2 07:44:04.797501 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 2 07:44:04.797508 kernel: devtmpfs: initialized
Jul 2 07:44:04.797516 kernel: x86/mm: Memory block size: 128MB
Jul 2 07:44:04.797523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 07:44:04.797530 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 07:44:04.797537 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 07:44:04.797543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 07:44:04.797550 kernel: audit: initializing netlink subsys (disabled)
Jul 2 07:44:04.797557 kernel: audit: type=2000 audit(1719906245.252:1): state=initialized audit_enabled=0 res=1
Jul 2 07:44:04.797564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 07:44:04.797570 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 07:44:04.797578 kernel: cpuidle: using governor menu
Jul 2 07:44:04.797585 kernel: ACPI: bus type PCI registered
Jul 2 07:44:04.797592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 07:44:04.797598 kernel: dca service started, version 1.12.1
Jul 2 07:44:04.797605 kernel: PCI: Using configuration type 1 for base access
Jul 2 07:44:04.797612 kernel: PCI: Using configuration type 1 for extended access
Jul 2 07:44:04.797619 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 07:44:04.797626 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 07:44:04.797632 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 07:44:04.797640 kernel: ACPI: Added _OSI(Module Device)
Jul 2 07:44:04.797647 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 07:44:04.797654 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 07:44:04.797660 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 07:44:04.797667 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 07:44:04.797674 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 07:44:04.797681 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 07:44:04.797687 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 07:44:04.797694 kernel: ACPI: Interpreter enabled
Jul 2 07:44:04.797702 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 07:44:04.797709 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 07:44:04.797716 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 07:44:04.797730 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 07:44:04.797737 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 07:44:04.797845 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 07:44:04.797857 kernel: acpiphp: Slot [3] registered
Jul 2 07:44:04.797864 kernel: acpiphp: Slot [4] registered
Jul 2 07:44:04.797873 kernel: acpiphp: Slot [5] registered
Jul 2 07:44:04.797880 kernel: acpiphp: Slot [6] registered
Jul 2 07:44:04.797886 kernel: acpiphp: Slot [7] registered
Jul 2 07:44:04.797893 kernel: acpiphp: Slot [8] registered
Jul 2 07:44:04.797900 kernel: acpiphp: Slot [9] registered
Jul 2 07:44:04.797907 kernel: acpiphp: Slot [10] registered
Jul 2 07:44:04.797930 kernel: acpiphp: Slot [11] registered
Jul 2 07:44:04.797947 kernel: acpiphp: Slot [12] registered
Jul 2 07:44:04.797954 kernel: acpiphp: Slot [13] registered
Jul 2 07:44:04.797961 kernel: acpiphp: Slot [14] registered
Jul 2 07:44:04.797970 kernel: acpiphp: Slot [15] registered
Jul 2 07:44:04.797976 kernel: acpiphp: Slot [16] registered
Jul 2 07:44:04.797983 kernel: acpiphp: Slot [17] registered
Jul 2 07:44:04.797990 kernel: acpiphp: Slot [18] registered
Jul 2 07:44:04.797996 kernel: acpiphp: Slot [19] registered
Jul 2 07:44:04.798013 kernel: acpiphp: Slot [20] registered
Jul 2 07:44:04.798021 kernel: acpiphp: Slot [21] registered
Jul 2 07:44:04.798035 kernel: acpiphp: Slot [22] registered
Jul 2 07:44:04.798042 kernel: acpiphp: Slot [23] registered
Jul 2 07:44:04.798054 kernel: acpiphp: Slot [24] registered
Jul 2 07:44:04.798061 kernel: acpiphp: Slot [25] registered
Jul 2 07:44:04.798067 kernel: acpiphp: Slot [26] registered
Jul 2 07:44:04.798074 kernel: acpiphp: Slot [27] registered
Jul 2 07:44:04.798081 kernel: acpiphp: Slot [28] registered
Jul 2 07:44:04.798088 kernel: acpiphp: Slot [29] registered
Jul 2 07:44:04.798095 kernel: acpiphp: Slot [30] registered
Jul 2 07:44:04.798101 kernel: acpiphp: Slot [31] registered
Jul 2 07:44:04.798108 kernel: PCI host bridge to bus 0000:00
Jul 2 07:44:04.798190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 07:44:04.798255 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 07:44:04.798316 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 07:44:04.798429 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 07:44:04.798491 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 07:44:04.798551 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 07:44:04.798632 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 07:44:04.798717 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 07:44:04.798814 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 07:44:04.798883 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 07:44:04.798950 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 07:44:04.799043 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 07:44:04.799114 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 07:44:04.799181 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 07:44:04.799259 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 07:44:04.799327 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 07:44:04.799395 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 07:44:04.799472 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 07:44:04.799541 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 2 07:44:04.799609 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 2 07:44:04.799680 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 2 07:44:04.799756 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 07:44:04.799832 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 07:44:04.799902 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 07:44:04.799974 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 2 07:44:04.800074 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 2 07:44:04.800148 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 07:44:04.800220 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 07:44:04.800286 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 2 07:44:04.800351 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 2 07:44:04.800428 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 07:44:04.800497 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 07:44:04.800565 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 2 07:44:04.800632 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 2 07:44:04.800703 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 2 07:44:04.800712 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 07:44:04.800720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 07:44:04.800735 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 07:44:04.800742 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 07:44:04.800749 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 07:44:04.800756 kernel: iommu: Default domain type: Translated
Jul 2 07:44:04.800763 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 07:44:04.800832 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 07:44:04.800902 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 07:44:04.800970 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 07:44:04.800979 kernel: vgaarb: loaded
Jul 2 07:44:04.800986 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 07:44:04.800994 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 07:44:04.801000 kernel: PTP clock support registered
Jul 2 07:44:04.801018 kernel: PCI: Using ACPI for IRQ routing
Jul 2 07:44:04.801024 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 07:44:04.801034 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 07:44:04.801040 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jul 2 07:44:04.801047 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 07:44:04.801054 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 07:44:04.801061 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 07:44:04.801068 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 07:44:04.801075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 07:44:04.801082 kernel: pnp: PnP ACPI init
Jul 2 07:44:04.801168 kernel: pnp 00:02: [dma 2]
Jul 2 07:44:04.801181 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 07:44:04.801188 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 07:44:04.801195 kernel: NET: Registered PF_INET protocol family
Jul 2 07:44:04.801202 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 07:44:04.801209 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 07:44:04.801215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 07:44:04.801222 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 07:44:04.801229 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 07:44:04.801238 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 07:44:04.801244 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 07:44:04.801251 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 07:44:04.801258 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 07:44:04.801264 kernel: NET: Registered PF_XDP protocol family
Jul 2 07:44:04.801326 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 07:44:04.801386 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 07:44:04.801446 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 07:44:04.801504 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 07:44:04.801568 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 07:44:04.801637 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 07:44:04.801705 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 07:44:04.801782 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 07:44:04.801792 kernel: PCI: CLS 0 bytes, default 64
Jul 2 07:44:04.801799 kernel: Initialise system trusted keyrings
Jul 2 07:44:04.801805 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 07:44:04.801813 kernel: Key type asymmetric registered
Jul 2 07:44:04.801822 kernel: Asymmetric key parser 'x509' registered
Jul 2 07:44:04.801829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 07:44:04.801836 kernel: io scheduler mq-deadline registered
Jul 2 07:44:04.801842 kernel: io scheduler kyber registered
Jul 2 07:44:04.801849 kernel: io scheduler bfq registered
Jul 2 07:44:04.801857 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 07:44:04.801864 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 07:44:04.801871 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 07:44:04.801877 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 07:44:04.801885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 07:44:04.801892 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 07:44:04.801899 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 07:44:04.801906 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 07:44:04.801913 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 07:44:04.801988 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 07:44:04.801998 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 07:44:04.802071 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 07:44:04.802137 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:44:04 UTC (1719906244)
Jul 2 07:44:04.802199 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 07:44:04.802208 kernel: NET: Registered PF_INET6 protocol family
Jul 2 07:44:04.802215 kernel: Segment Routing with IPv6
Jul 2 07:44:04.802222 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 07:44:04.802229 kernel: NET: Registered PF_PACKET protocol family
Jul 2 07:44:04.802235 kernel: Key type dns_resolver registered
Jul 2 07:44:04.802242 kernel: IPI shorthand broadcast: enabled
Jul 2 07:44:04.802249 kernel: sched_clock: Marking stable (406019800, 97587703)->(544986064, -41378561)
Jul 2 07:44:04.802258 kernel: registered taskstats version 1
Jul 2 07:44:04.802265 kernel: Loading compiled-in X.509 certificates
Jul 2 07:44:04.802272 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 07:44:04.802278 kernel: Key type .fscrypt registered
Jul 2 07:44:04.802285 kernel: Key type fscrypt-provisioning registered
Jul 2 07:44:04.802292 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 07:44:04.802299 kernel: ima: Allocated hash algorithm: sha1
Jul 2 07:44:04.802305 kernel: ima: No architecture policies found
Jul 2 07:44:04.802313 kernel: clk: Disabling unused clocks
Jul 2 07:44:04.802320 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 07:44:04.802327 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 07:44:04.802334 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 07:44:04.802341 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 07:44:04.802348 kernel: Run /init as init process
Jul 2 07:44:04.802355 kernel: with arguments:
Jul 2 07:44:04.802361 kernel: /init
Jul 2 07:44:04.802377 kernel: with environment:
Jul 2 07:44:04.802385 kernel: HOME=/
Jul 2 07:44:04.802393 kernel: TERM=linux
Jul 2 07:44:04.802400 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 07:44:04.802409 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:44:04.802418 systemd[1]: Detected virtualization kvm.
Jul 2 07:44:04.802426 systemd[1]: Detected architecture x86-64.
Jul 2 07:44:04.802433 systemd[1]: Running in initrd.
Jul 2 07:44:04.802441 systemd[1]: No hostname configured, using default hostname.
Jul 2 07:44:04.802449 systemd[1]: Hostname set to .
Jul 2 07:44:04.802457 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 07:44:04.802464 systemd[1]: Queued start job for default target initrd.target.
Jul 2 07:44:04.802471 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 07:44:04.802479 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:44:04.802486 systemd[1]: Reached target paths.target.
Jul 2 07:44:04.802493 systemd[1]: Reached target slices.target.
Jul 2 07:44:04.802500 systemd[1]: Reached target swap.target.
Jul 2 07:44:04.802509 systemd[1]: Reached target timers.target.
Jul 2 07:44:04.802517 systemd[1]: Listening on iscsid.socket.
Jul 2 07:44:04.802524 systemd[1]: Listening on iscsiuio.socket.
Jul 2 07:44:04.802532 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 07:44:04.802539 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 07:44:04.802547 systemd[1]: Listening on systemd-journald.socket.
Jul 2 07:44:04.802554 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 07:44:04.802562 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 07:44:04.802570 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 07:44:04.802578 systemd[1]: Reached target sockets.target.
Jul 2 07:44:04.802585 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 07:44:04.802593 systemd[1]: Finished network-cleanup.service.
Jul 2 07:44:04.802600 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 07:44:04.802608 systemd[1]: Starting systemd-journald.service...
Jul 2 07:44:04.802617 systemd[1]: Starting systemd-modules-load.service...
Jul 2 07:44:04.802624 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:44:04.802631 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 07:44:04.802639 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 07:44:04.802646 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 07:44:04.802654 kernel: audit: type=1130 audit(1719906244.794:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.802662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 07:44:04.802672 systemd-journald[199]: Journal started
Jul 2 07:44:04.802708 systemd-journald[199]: Runtime Journal (/run/log/journal/55fdcf1ff58143919fb8326ef343d387) is 6.0M, max 48.5M, 42.5M free.
Jul 2 07:44:04.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.793638 systemd-modules-load[200]: Inserted module 'overlay'
Jul 2 07:44:04.839404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 07:44:04.839419 kernel: Bridge firewalling registered
Jul 2 07:44:04.839432 systemd[1]: Started systemd-journald.service.
Jul 2 07:44:04.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.817566 systemd-resolved[201]: Positive Trust Anchors:
Jul 2 07:44:04.843839 kernel: audit: type=1130 audit(1719906244.839:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.843856 kernel: audit: type=1130 audit(1719906244.843:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.817574 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:44:04.849583 kernel: SCSI subsystem initialized
Jul 2 07:44:04.849597 kernel: audit: type=1130 audit(1719906244.849:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.817600 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:44:04.861568 kernel: audit: type=1130 audit(1719906244.852:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:04.819795 systemd-resolved[201]: Defaulting to hostname 'linux'.
Jul 2 07:44:04.825509 systemd-modules-load[200]: Inserted module 'br_netfilter'
Jul 2 07:44:04.839500 systemd[1]: Started systemd-resolved.service.
Jul 2 07:44:04.843982 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 07:44:04.849678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 07:44:04.853002 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:44:04.862426 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 07:44:04.871418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 07:44:04.871460 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:44:04.872732 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:44:04.875393 systemd-modules-load[200]: Inserted module 'dm_multipath' Jul 2 07:44:04.876477 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:44:04.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:04.878784 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:44:04.885342 kernel: audit: type=1130 audit(1719906244.878:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:04.886795 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:44:04.891174 kernel: audit: type=1130 audit(1719906244.887:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:04.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:04.887983 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:44:04.893029 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:44:04.897311 kernel: audit: type=1130 audit(1719906244.893:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:04.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:04.899105 dracut-cmdline[222]: dracut-dracut-053 Jul 2 07:44:04.901562 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:44:04.958040 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:44:04.974033 kernel: iscsi: registered transport (tcp) Jul 2 07:44:04.995422 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:44:04.995472 kernel: QLogic iSCSI HBA Driver Jul 2 07:44:05.023294 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:44:05.027658 kernel: audit: type=1130 audit(1719906245.023:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:05.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:05.027677 systemd[1]: Starting dracut-pre-udev.service... 
Jul 2 07:44:05.074031 kernel: raid6: avx2x4 gen() 30990 MB/s Jul 2 07:44:05.091024 kernel: raid6: avx2x4 xor() 8183 MB/s Jul 2 07:44:05.108030 kernel: raid6: avx2x2 gen() 32132 MB/s Jul 2 07:44:05.125025 kernel: raid6: avx2x2 xor() 19279 MB/s Jul 2 07:44:05.142025 kernel: raid6: avx2x1 gen() 26548 MB/s Jul 2 07:44:05.159026 kernel: raid6: avx2x1 xor() 15309 MB/s Jul 2 07:44:05.176040 kernel: raid6: sse2x4 gen() 14802 MB/s Jul 2 07:44:05.193038 kernel: raid6: sse2x4 xor() 7582 MB/s Jul 2 07:44:05.210032 kernel: raid6: sse2x2 gen() 16268 MB/s Jul 2 07:44:05.227031 kernel: raid6: sse2x2 xor() 9815 MB/s Jul 2 07:44:05.244033 kernel: raid6: sse2x1 gen() 12522 MB/s Jul 2 07:44:05.261436 kernel: raid6: sse2x1 xor() 7768 MB/s Jul 2 07:44:05.261460 kernel: raid6: using algorithm avx2x2 gen() 32132 MB/s Jul 2 07:44:05.261472 kernel: raid6: .... xor() 19279 MB/s, rmw enabled Jul 2 07:44:05.262163 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:44:05.274033 kernel: xor: automatically using best checksumming function avx Jul 2 07:44:05.363041 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:44:05.370981 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:44:05.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:05.372000 audit: BPF prog-id=7 op=LOAD Jul 2 07:44:05.372000 audit: BPF prog-id=8 op=LOAD Jul 2 07:44:05.373003 systemd[1]: Starting systemd-udevd.service... Jul 2 07:44:05.384329 systemd-udevd[401]: Using default interface naming scheme 'v252'. Jul 2 07:44:05.388089 systemd[1]: Started systemd-udevd.service. Jul 2 07:44:05.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:05.390423 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:44:05.400495 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jul 2 07:44:05.425513 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:44:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:05.427841 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:44:05.460514 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:44:05.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:05.489464 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 07:44:05.492038 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:44:05.492063 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:44:05.493516 kernel: GPT:9289727 != 19775487 Jul 2 07:44:05.493539 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:44:05.493551 kernel: GPT:9289727 != 19775487 Jul 2 07:44:05.494041 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:44:05.495518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:44:05.504037 kernel: libata version 3.00 loaded. Jul 2 07:44:05.504074 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 07:44:05.505272 kernel: AES CTR mode by8 optimization enabled Jul 2 07:44:05.507315 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:44:05.509033 kernel: scsi host0: ata_piix Jul 2 07:44:05.511024 kernel: scsi host1: ata_piix Jul 2 07:44:05.511136 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 07:44:05.511146 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 07:44:05.534255 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:44:05.558661 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Jul 2 07:44:05.560681 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:44:05.567852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:44:05.574871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:44:05.579747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:44:05.582080 systemd[1]: Starting disk-uuid.service... Jul 2 07:44:05.651605 disk-uuid[521]: Primary Header is updated. Jul 2 07:44:05.651605 disk-uuid[521]: Secondary Entries is updated. Jul 2 07:44:05.651605 disk-uuid[521]: Secondary Header is updated. Jul 2 07:44:05.656024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:44:05.659030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:44:05.668064 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 07:44:05.672024 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 07:44:05.718368 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 07:44:05.718536 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:44:05.736039 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 07:44:06.660041 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:44:06.660112 disk-uuid[522]: The operation has completed successfully. 
Jul 2 07:44:06.682300 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:44:06.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.682381 systemd[1]: Finished disk-uuid.service. Jul 2 07:44:06.688413 systemd[1]: Starting verity-setup.service... Jul 2 07:44:06.701034 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 07:44:06.720872 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:44:06.723177 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:44:06.726202 systemd[1]: Finished verity-setup.service. Jul 2 07:44:06.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.783793 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:44:06.785285 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:44:06.785335 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:44:06.787284 systemd[1]: Starting ignition-setup.service... Jul 2 07:44:06.789254 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:44:06.796029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:44:06.796050 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:44:06.797900 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:44:06.805539 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:44:06.846026 systemd[1]: Finished parse-ip-for-networkd.service. 
Jul 2 07:44:06.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.848000 audit: BPF prog-id=9 op=LOAD Jul 2 07:44:06.848863 systemd[1]: Starting systemd-networkd.service... Jul 2 07:44:06.867619 systemd-networkd[708]: lo: Link UP Jul 2 07:44:06.867627 systemd-networkd[708]: lo: Gained carrier Jul 2 07:44:06.868032 systemd-networkd[708]: Enumeration completed Jul 2 07:44:06.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.868115 systemd[1]: Started systemd-networkd.service. Jul 2 07:44:06.868217 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:44:06.869123 systemd-networkd[708]: eth0: Link UP Jul 2 07:44:06.869126 systemd-networkd[708]: eth0: Gained carrier Jul 2 07:44:06.877067 systemd[1]: Reached target network.target. Jul 2 07:44:06.881493 systemd[1]: Starting iscsiuio.service... Jul 2 07:44:06.885129 systemd[1]: Started iscsiuio.service. Jul 2 07:44:06.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.887053 systemd[1]: Starting iscsid.service... Jul 2 07:44:06.887136 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:44:06.898427 iscsid[713]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:44:06.898427 iscsid[713]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:44:06.898427 iscsid[713]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:44:06.898427 iscsid[713]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:44:06.898427 iscsid[713]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:44:06.898427 iscsid[713]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:44:06.898427 iscsid[713]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:44:06.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.898454 systemd[1]: Started iscsid.service. Jul 2 07:44:06.911423 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:44:06.920713 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:44:06.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.922365 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:44:06.923980 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:44:06.925650 systemd[1]: Reached target remote-fs.target. Jul 2 07:44:06.934961 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:44:06.941525 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:44:06.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.954860 systemd[1]: Finished ignition-setup.service.
Jul 2 07:44:06.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:06.956957 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:44:06.991307 ignition[728]: Ignition 2.14.0 Jul 2 07:44:06.991319 ignition[728]: Stage: fetch-offline Jul 2 07:44:06.991375 ignition[728]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:44:06.991386 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:44:06.991481 ignition[728]: parsed url from cmdline: "" Jul 2 07:44:06.991484 ignition[728]: no config URL provided Jul 2 07:44:06.991489 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:44:06.991495 ignition[728]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:44:06.991517 ignition[728]: op(1): [started] loading QEMU firmware config module Jul 2 07:44:06.991521 ignition[728]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 07:44:07.000330 ignition[728]: op(1): [finished] loading QEMU firmware config module Jul 2 07:44:07.043875 ignition[728]: parsing config with SHA512: 7d28464c3dd619f89469ee1534af09de5b3fd9f4a08516b8693eb0a42b79a312fe2e91fa4ea677a8ce8525d3238cc12de048d4ba699ae86980c7dfd00c7a77c2 Jul 2 07:44:07.049985 unknown[728]: fetched base config from "system" Jul 2 07:44:07.049995 unknown[728]: fetched user config from "qemu" Jul 2 07:44:07.050401 ignition[728]: fetch-offline: fetch-offline passed Jul 2 07:44:07.050446 ignition[728]: Ignition finished successfully Jul 2 07:44:07.053713 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:44:07.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:07.055533 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 07:44:07.057825 systemd[1]: Starting ignition-kargs.service... Jul 2 07:44:07.067359 ignition[736]: Ignition 2.14.0 Jul 2 07:44:07.067369 ignition[736]: Stage: kargs Jul 2 07:44:07.067460 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:44:07.067469 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:44:07.068527 ignition[736]: kargs: kargs passed Jul 2 07:44:07.068568 ignition[736]: Ignition finished successfully Jul 2 07:44:07.072658 systemd[1]: Finished ignition-kargs.service. Jul 2 07:44:07.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:07.075075 systemd[1]: Starting ignition-disks.service... Jul 2 07:44:07.081785 ignition[742]: Ignition 2.14.0 Jul 2 07:44:07.081794 ignition[742]: Stage: disks Jul 2 07:44:07.081872 ignition[742]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:44:07.081881 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:44:07.082738 ignition[742]: disks: disks passed Jul 2 07:44:07.082769 ignition[742]: Ignition finished successfully Jul 2 07:44:07.086695 systemd[1]: Finished ignition-disks.service. Jul 2 07:44:07.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:07.087353 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:44:07.088509 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:44:07.090277 systemd[1]: Reached target local-fs.target. Jul 2 07:44:07.091560 systemd[1]: Reached target sysinit.target. 
Jul 2 07:44:07.092963 systemd[1]: Reached target basic.target. Jul 2 07:44:07.095063 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:44:07.121496 systemd-fsck[750]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 07:44:07.246598 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:44:07.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:07.248755 systemd[1]: Mounting sysroot.mount... Jul 2 07:44:07.259028 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:44:07.258999 systemd[1]: Mounted sysroot.mount. Jul 2 07:44:07.259475 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:44:07.261874 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:44:07.262541 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:44:07.262575 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:44:07.262594 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:44:07.270038 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:44:07.271519 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:44:07.275841 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:44:07.280068 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:44:07.283900 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:44:07.287058 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:44:07.310798 systemd[1]: Finished initrd-setup-root.service. 
Jul 2 07:44:07.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:07.312376 systemd[1]: Starting ignition-mount.service... Jul 2 07:44:07.313256 systemd[1]: Starting sysroot-boot.service... Jul 2 07:44:07.319683 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 07:44:07.327107 ignition[803]: INFO : Ignition 2.14.0 Jul 2 07:44:07.327107 ignition[803]: INFO : Stage: mount Jul 2 07:44:07.328762 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:44:07.328762 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:44:07.328762 ignition[803]: INFO : mount: mount passed Jul 2 07:44:07.328762 ignition[803]: INFO : Ignition finished successfully Jul 2 07:44:07.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:07.329826 systemd[1]: Finished ignition-mount.service. Jul 2 07:44:07.334915 systemd[1]: Finished sysroot-boot.service. Jul 2 07:44:07.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:07.732981 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:44:07.740025 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jul 2 07:44:07.742087 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:44:07.742153 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:44:07.742162 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:44:07.745730 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 07:44:07.746912 systemd[1]: Starting ignition-files.service... Jul 2 07:44:07.758471 ignition[831]: INFO : Ignition 2.14.0 Jul 2 07:44:07.758471 ignition[831]: INFO : Stage: files Jul 2 07:44:07.760057 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:44:07.760057 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:44:07.762163 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:44:07.763633 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:44:07.763633 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:44:07.767123 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:44:07.768541 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:44:07.770236 unknown[831]: wrote ssh authorized keys file for user: core Jul 2 07:44:07.771236 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:44:07.772495 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:44:07.772495 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:44:07.807156 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 07:44:07.902958 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:44:07.905209 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:44:07.905209 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 07:44:08.261615 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 07:44:08.327001 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:44:08.329149 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 07:44:08.585263 systemd-networkd[708]: eth0: Gained IPv6LL Jul 2 07:44:08.735290 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 07:44:08.999359 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:44:08.999359 ignition[831]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:44:09.003995 ignition[831]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:44:09.029232 ignition[831]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:44:09.030915 ignition[831]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:44:09.032307 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:44:09.034051 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:44:09.034051 ignition[831]: INFO : files: files passed Jul 2 07:44:09.036477 ignition[831]: INFO : Ignition finished successfully Jul 2 07:44:09.038147 systemd[1]: Finished ignition-files.service. Jul 2 07:44:09.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.039351 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:44:09.040284 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Jul 2 07:44:09.041501 systemd[1]: Starting ignition-quench.service... Jul 2 07:44:09.044842 initrd-setup-root-after-ignition[855]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 07:44:09.046134 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:44:09.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.046927 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:44:09.046782 systemd[1]: Reached target ignition-complete.target. Jul 2 07:44:09.050423 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:44:09.054331 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:44:09.055332 systemd[1]: Finished ignition-quench.service. Jul 2 07:44:09.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.061515 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:44:09.062492 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:44:09.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:09.064132 systemd[1]: Reached target initrd-fs.target. Jul 2 07:44:09.065659 systemd[1]: Reached target initrd.target. Jul 2 07:44:09.067141 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:44:09.068887 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:44:09.077725 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:44:09.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.079847 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:44:09.087176 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:44:09.088819 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:44:09.090589 systemd[1]: Stopped target timers.target. Jul 2 07:44:09.092155 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:44:09.093155 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:44:09.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.094846 systemd[1]: Stopped target initrd.target. Jul 2 07:44:09.096395 systemd[1]: Stopped target basic.target. Jul 2 07:44:09.097879 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:44:09.099630 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:44:09.101354 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:44:09.103152 systemd[1]: Stopped target remote-fs.target. Jul 2 07:44:09.104723 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:44:09.106448 systemd[1]: Stopped target sysinit.target. Jul 2 07:44:09.108058 systemd[1]: Stopped target local-fs.target. Jul 2 07:44:09.109614 systemd[1]: Stopped target local-fs-pre.target. 
Jul 2 07:44:09.111266 systemd[1]: Stopped target swap.target. Jul 2 07:44:09.112702 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:44:09.113679 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:44:09.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.115331 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:44:09.116888 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:44:09.117876 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:44:09.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.119537 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:44:09.120599 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:44:09.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.122398 systemd[1]: Stopped target paths.target. Jul 2 07:44:09.123849 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:44:09.128048 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:44:09.129845 systemd[1]: Stopped target slices.target. Jul 2 07:44:09.131354 systemd[1]: Stopped target sockets.target. Jul 2 07:44:09.132873 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:44:09.134024 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Jul 2 07:44:09.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.135981 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:44:09.136931 systemd[1]: Stopped ignition-files.service. Jul 2 07:44:09.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.139117 systemd[1]: Stopping ignition-mount.service... Jul 2 07:44:09.140734 iscsid[713]: iscsid shutting down. Jul 2 07:44:09.141582 systemd[1]: Stopping iscsid.service... Jul 2 07:44:09.143362 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:44:09.144065 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:44:09.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.147351 ignition[871]: INFO : Ignition 2.14.0 Jul 2 07:44:09.147351 ignition[871]: INFO : Stage: umount Jul 2 07:44:09.147351 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:44:09.147351 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:44:09.147351 ignition[871]: INFO : umount: umount passed Jul 2 07:44:09.147351 ignition[871]: INFO : Ignition finished successfully Jul 2 07:44:09.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 07:44:09.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.144178 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:44:09.145664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:44:09.145756 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:44:09.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.148790 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:44:09.148864 systemd[1]: Stopped iscsid.service. Jul 2 07:44:09.149900 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jul 2 07:44:09.149961 systemd[1]: Stopped ignition-mount.service. Jul 2 07:44:09.152205 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:44:09.152269 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:44:09.153841 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:44:09.153865 systemd[1]: Closed iscsid.socket. Jul 2 07:44:09.154740 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:44:09.154770 systemd[1]: Stopped ignition-disks.service. Jul 2 07:44:09.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.156435 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:44:09.156462 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:44:09.157286 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:44:09.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.157314 systemd[1]: Stopped ignition-setup.service. Jul 2 07:44:09.158196 systemd[1]: Stopping iscsiuio.service... Jul 2 07:44:09.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.160876 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:44:09.161224 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:44:09.161290 systemd[1]: Stopped iscsiuio.service. 
Jul 2 07:44:09.162350 systemd[1]: Stopped target network.target. Jul 2 07:44:09.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.164037 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:44:09.164064 systemd[1]: Closed iscsiuio.socket. Jul 2 07:44:09.165463 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:44:09.167127 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:44:09.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.171041 systemd-networkd[708]: eth0: DHCPv6 lease lost Jul 2 07:44:09.192000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:44:09.192000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:44:09.172992 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:44:09.173088 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:44:09.175915 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:44:09.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.175941 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:44:09.177536 systemd[1]: Stopping network-cleanup.service... Jul 2 07:44:09.178421 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:44:09.206578 kernel: kauditd_printk_skb: 56 callbacks suppressed Jul 2 07:44:09.206596 kernel: audit: type=1131 audit(1719906249.201:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:09.206618 kernel: audit: type=1131 audit(1719906249.206:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.178458 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:44:09.214990 kernel: audit: type=1131 audit(1719906249.210:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.180272 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:44:09.220989 kernel: audit: type=1131 audit(1719906249.215:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:09.180306 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:44:09.229276 kernel: audit: type=1131 audit(1719906249.221:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.229290 kernel: audit: type=1131 audit(1719906249.224:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.229303 kernel: audit: type=1130 audit(1719906249.229:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.182566 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:44:09.236146 kernel: audit: type=1131 audit(1719906249.229:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.182597 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:44:09.183493 systemd[1]: Stopping systemd-udevd.service... 
Jul 2 07:44:09.185762 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:44:09.186089 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:44:09.186162 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:44:09.190149 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:44:09.190238 systemd[1]: Stopped network-cleanup.service. Jul 2 07:44:09.194435 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:44:09.194589 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:44:09.196025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:44:09.196057 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:44:09.197907 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:44:09.197947 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:44:09.199496 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:44:09.199543 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:44:09.201142 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:44:09.201173 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:44:09.206617 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:44:09.206649 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:44:09.213715 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:44:09.215000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:44:09.215049 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:44:09.219507 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:44:09.219547 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:44:09.221064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:44:09.221104 systemd[1]: Stopped systemd-vconsole-setup.service. 
Jul 2 07:44:09.225535 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:44:09.225896 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:44:09.266506 kernel: audit: type=1131 audit(1719906249.261:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.225967 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:44:09.271738 kernel: audit: type=1131 audit(1719906249.267:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.260862 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:44:09.260960 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:44:09.262032 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:44:09.266502 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:44:09.266540 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:44:09.268019 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:44:09.284673 systemd[1]: Switching root. Jul 2 07:44:09.303531 systemd-journald[199]: Journal stopped Jul 2 07:44:11.924566 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Jul 2 07:44:11.924621 kernel: SELinux: Class mctp_socket not defined in policy. 
Jul 2 07:44:11.924633 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:44:11.924645 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:44:11.924654 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:44:11.924663 kernel: SELinux: policy capability open_perms=1 Jul 2 07:44:11.924672 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:44:11.924681 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:44:11.924690 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:44:11.924710 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:44:11.924719 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:44:11.924728 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:44:11.924739 systemd[1]: Successfully loaded SELinux policy in 37.255ms. Jul 2 07:44:11.924758 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.455ms. Jul 2 07:44:11.924769 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:44:11.924781 systemd[1]: Detected virtualization kvm. Jul 2 07:44:11.924791 systemd[1]: Detected architecture x86-64. Jul 2 07:44:11.924802 systemd[1]: Detected first boot. Jul 2 07:44:11.924812 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:44:11.924822 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:44:11.924832 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:44:11.924842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 07:44:11.924855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:44:11.924867 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:44:11.924879 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:44:11.924888 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:44:11.924899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:44:11.924909 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:44:11.924919 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:44:11.924929 systemd[1]: Created slice system-getty.slice. Jul 2 07:44:11.924939 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:44:11.924949 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:44:11.924960 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:44:11.924971 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:44:11.924981 systemd[1]: Created slice user.slice. Jul 2 07:44:11.924991 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:44:11.925001 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:44:11.925023 systemd[1]: Set up automount boot.automount. Jul 2 07:44:11.925033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:44:11.925043 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:44:11.925053 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:44:11.925065 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:44:11.925074 systemd[1]: Reached target integritysetup.target. Jul 2 07:44:11.925084 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:44:11.925094 systemd[1]: Reached target remote-fs.target. 
Jul 2 07:44:11.925104 systemd[1]: Reached target slices.target. Jul 2 07:44:11.925114 systemd[1]: Reached target swap.target. Jul 2 07:44:11.925124 systemd[1]: Reached target torcx.target. Jul 2 07:44:11.925134 systemd[1]: Reached target veritysetup.target. Jul 2 07:44:11.925145 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:44:11.925156 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:44:11.925165 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:44:11.925175 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:44:11.925185 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:44:11.925196 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:44:11.925205 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:44:11.925216 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:44:11.925225 systemd[1]: Mounting media.mount... Jul 2 07:44:11.925235 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:44:11.925246 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:44:11.925256 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:44:11.925266 systemd[1]: Mounting tmp.mount... Jul 2 07:44:11.925275 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:44:11.925285 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:44:11.925295 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:44:11.925305 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:44:11.925315 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:44:11.925324 systemd[1]: Starting modprobe@drm.service... Jul 2 07:44:11.925336 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:44:11.925345 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:44:11.925355 systemd[1]: Starting modprobe@loop.service... 
Jul 2 07:44:11.925365 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:44:11.925376 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:44:11.925385 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:44:11.925395 kernel: loop: module loaded Jul 2 07:44:11.925405 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:44:11.925415 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:44:11.925426 kernel: fuse: init (API version 7.34) Jul 2 07:44:11.925436 systemd[1]: Stopped systemd-journald.service. Jul 2 07:44:11.925449 systemd[1]: Starting systemd-journald.service... Jul 2 07:44:11.925459 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:44:11.925469 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:44:11.925479 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:44:11.925489 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:44:11.925499 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:44:11.925509 systemd[1]: Stopped verity-setup.service. Jul 2 07:44:11.925520 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:44:11.925539 systemd-journald[984]: Journal started Jul 2 07:44:11.925575 systemd-journald[984]: Runtime Journal (/run/log/journal/55fdcf1ff58143919fb8326ef343d387) is 6.0M, max 48.5M, 42.5M free. 
Jul 2 07:44:09.359000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:44:09.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:44:09.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:44:09.744000 audit: BPF prog-id=10 op=LOAD Jul 2 07:44:09.744000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:44:09.744000 audit: BPF prog-id=11 op=LOAD Jul 2 07:44:09.744000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:44:09.775000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:44:09.775000 audit[903]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:44:09.775000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:44:09.776000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:44:09.776000 audit[903]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:44:09.776000 audit: CWD cwd="/" Jul 2 07:44:09.776000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:09.776000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:09.776000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:44:11.802000 audit: BPF prog-id=12 op=LOAD Jul 2 07:44:11.802000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:44:11.802000 audit: BPF prog-id=13 op=LOAD Jul 2 07:44:11.802000 audit: BPF prog-id=14 op=LOAD Jul 2 07:44:11.802000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:44:11.802000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:44:11.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:11.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.812000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:44:11.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.906000 audit: BPF prog-id=15 op=LOAD Jul 2 07:44:11.907000 audit: BPF prog-id=16 op=LOAD Jul 2 07:44:11.907000 audit: BPF prog-id=17 op=LOAD Jul 2 07:44:11.907000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:44:11.907000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:44:11.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:11.922000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:44:11.922000 audit[984]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe592e7350 a2=4000 a3=7ffe592e73ec items=0 ppid=1 pid=984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:44:11.922000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:44:11.800687 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:44:09.773355 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:44:11.800697 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:44:09.773604 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:44:11.803019 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 2 07:44:09.773628 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:44:09.773665 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:44:09.773677 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:44:09.773712 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:44:09.773727 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:44:09.773970 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:44:09.774035 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:44:09.774052 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:44:09.774702 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:44:09.774747 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:44:11.927520 systemd[1]: Started systemd-journald.service. Jul 2 07:44:09.774770 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:44:09.774789 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:44:11.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:09.774807 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:44:09.774824 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:44:11.550847 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:44:11.551103 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:44:11.551195 
/usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:44:11.551340 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:44:11.551385 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:44:11.928037 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:44:11.551437 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:44:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:44:11.928911 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:44:11.929746 systemd[1]: Mounted media.mount. Jul 2 07:44:11.930519 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:44:11.931423 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:44:11.932354 systemd[1]: Mounted tmp.mount. Jul 2 07:44:11.933245 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:44:11.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:11.934326 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:44:11.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.935379 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:44:11.935499 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:44:11.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.936557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:44:11.936668 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:44:11.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.937704 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:44:11.937811 systemd[1]: Finished modprobe@drm.service. Jul 2 07:44:11.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:11.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.938804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:44:11.938919 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:44:11.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.939981 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:44:11.940114 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:44:11.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.941089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:44:11.941200 systemd[1]: Finished modprobe@loop.service. Jul 2 07:44:11.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:11.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.942242 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:44:11.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.943453 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:44:11.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.944587 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:44:11.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.945754 systemd[1]: Reached target network-pre.target. Jul 2 07:44:11.947454 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:44:11.949160 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:44:11.949932 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:44:11.951326 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:44:11.952909 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:44:11.953764 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:44:11.954693 systemd[1]: Starting systemd-random-seed.service... 
Jul 2 07:44:11.955502 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:44:11.956476 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:44:11.958243 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:44:11.961289 systemd-journald[984]: Time spent on flushing to /var/log/journal/55fdcf1ff58143919fb8326ef343d387 is 14.817ms for 1097 entries. Jul 2 07:44:11.961289 systemd-journald[984]: System Journal (/var/log/journal/55fdcf1ff58143919fb8326ef343d387) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:44:11.993723 systemd-journald[984]: Received client request to flush runtime journal. Jul 2 07:44:11.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.961711 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:44:11.994599 udevadm[1006]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:44:11.963701 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:44:11.964668 systemd[1]: Mounted sys-kernel-config.mount. 
Jul 2 07:44:11.965659 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:44:11.966789 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:44:11.968429 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:44:11.975467 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:44:11.978278 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:44:11.980056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:44:11.994467 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:44:11.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:11.999292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:44:12.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:12.399268 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:44:12.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:12.400000 audit: BPF prog-id=18 op=LOAD Jul 2 07:44:12.400000 audit: BPF prog-id=19 op=LOAD Jul 2 07:44:12.400000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:44:12.400000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:44:12.401470 systemd[1]: Starting systemd-udevd.service... Jul 2 07:44:12.416317 systemd-udevd[1012]: Using default interface naming scheme 'v252'. Jul 2 07:44:12.428319 systemd[1]: Started systemd-udevd.service. 
Jul 2 07:44:12.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:12.430000 audit: BPF prog-id=20 op=LOAD Jul 2 07:44:12.431301 systemd[1]: Starting systemd-networkd.service... Jul 2 07:44:12.435000 audit: BPF prog-id=21 op=LOAD Jul 2 07:44:12.435000 audit: BPF prog-id=22 op=LOAD Jul 2 07:44:12.435000 audit: BPF prog-id=23 op=LOAD Jul 2 07:44:12.436228 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:44:12.458953 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:44:12.463415 systemd[1]: Started systemd-userdbd.service. Jul 2 07:44:12.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:44:12.488072 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:44:12.495030 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:44:12.512069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:44:12.512504 systemd-networkd[1022]: lo: Link UP Jul 2 07:44:12.512508 systemd-networkd[1022]: lo: Gained carrier Jul 2 07:44:12.512863 systemd-networkd[1022]: Enumeration completed Jul 2 07:44:12.513193 systemd[1]: Started systemd-networkd.service. Jul 2 07:44:12.513526 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:44:12.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:44:12.514692 systemd-networkd[1022]: eth0: Link UP Jul 2 07:44:12.514700 systemd-networkd[1022]: eth0: Gained carrier Jul 2 07:44:12.526129 systemd-networkd[1022]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:44:12.504000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:44:12.504000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fefc7074e0 a1=3207c a2=7fe694d3abc5 a3=5 items=108 ppid=1012 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:44:12.504000 audit: CWD cwd="/" Jul 2 07:44:12.504000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=1 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=2 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=3 name=(null) inode=15532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=4 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: 
PATH item=5 name=(null) inode=15533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=6 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=7 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=8 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=9 name=(null) inode=15535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=10 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=11 name=(null) inode=15536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=12 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=13 name=(null) inode=15537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=14 name=(null) inode=15534 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=15 name=(null) inode=15538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=16 name=(null) inode=15534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=17 name=(null) inode=15539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=18 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=19 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=20 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=21 name=(null) inode=15541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=22 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=23 name=(null) inode=15542 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=24 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=25 name=(null) inode=15543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=26 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=27 name=(null) inode=15544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=28 name=(null) inode=15540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=29 name=(null) inode=15545 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=30 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=31 name=(null) inode=15546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=32 name=(null) inode=15546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=33 name=(null) inode=15547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=34 name=(null) inode=15546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=35 name=(null) inode=15548 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=36 name=(null) inode=15546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=37 name=(null) inode=15549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=38 name=(null) inode=15546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.545052 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:44:12.504000 audit: PATH item=39 name=(null) inode=15550 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=40 name=(null) inode=15546 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:44:12.504000 audit: PATH item=41 name=(null) inode=15551 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=42 name=(null) inode=15531 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=43 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=44 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=45 name=(null) inode=15553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=46 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=47 name=(null) inode=15554 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=48 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=49 name=(null) inode=15555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=50 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=51 name=(null) inode=15556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=52 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=53 name=(null) inode=15557 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=55 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=56 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=57 name=(null) inode=15559 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=58 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=59 name=(null) inode=15560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=60 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=61 name=(null) inode=15561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=62 name=(null) inode=15561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=63 name=(null) inode=15562 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=64 name=(null) inode=15561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=65 name=(null) inode=15563 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=66 name=(null) inode=15561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=67 name=(null) inode=15564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=68 name=(null) inode=15561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=69 name=(null) inode=15565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=70 name=(null) inode=15561 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=71 name=(null) inode=15566 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=72 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=73 name=(null) inode=15567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=74 name=(null) inode=15567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=75 name=(null) inode=15568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=76 name=(null) inode=15567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=77 name=(null) inode=15569 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=78 name=(null) inode=15567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=79 name=(null) inode=15570 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=80 name=(null) inode=15567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=81 name=(null) inode=15571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=82 name=(null) inode=15567 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=83 name=(null) inode=15572 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=84 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=85 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=86 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=87 name=(null) inode=15574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=88 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=89 name=(null) inode=15575 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=90 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=91 name=(null) inode=15576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=92 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=93 name=(null) inode=15577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=94 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=95 name=(null) inode=15578 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=96 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=97 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=98 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=99 name=(null) inode=15580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=100 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=101 name=(null) inode=15581 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.548017 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 07:44:12.504000 audit: PATH item=102 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=103 name=(null) inode=15582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=104 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=105 name=(null) inode=15583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=106 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PATH item=107 name=(null) inode=15584 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:44:12.504000 audit: PROCTITLE proctitle="(udev-worker)"
Jul 2 07:44:12.552028 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 2 07:44:12.599626 kernel: kvm: Nested Virtualization enabled
Jul 2 07:44:12.599708 kernel: SVM: kvm: Nested Paging enabled
Jul 2 07:44:12.599722 kernel: SVM: Virtual VMLOAD VMSAVE supported
Jul 2 07:44:12.599735 kernel: SVM: Virtual GIF supported
Jul 2 07:44:12.617037 kernel: EDAC MC: Ver: 3.0.0
Jul 2 07:44:12.641302 systemd[1]: Finished systemd-udev-settle.service.
Jul 2 07:44:12.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:12.643208 systemd[1]: Starting lvm2-activation-early.service...
Jul 2 07:44:12.650223 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 07:44:12.676584 systemd[1]: Finished lvm2-activation-early.service.
Jul 2 07:44:12.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:12.677518 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:44:12.679070 systemd[1]: Starting lvm2-activation.service...
Jul 2 07:44:12.682116 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 07:44:12.703097 systemd[1]: Finished lvm2-activation.service.
Jul 2 07:44:12.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:12.703967 systemd[1]: Reached target local-fs-pre.target.
Jul 2 07:44:12.704794 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 07:44:12.704815 systemd[1]: Reached target local-fs.target.
Jul 2 07:44:12.705582 systemd[1]: Reached target machines.target.
Jul 2 07:44:12.707161 systemd[1]: Starting ldconfig.service...
Jul 2 07:44:12.708072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:44:12.708128 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:12.709401 systemd[1]: Starting systemd-boot-update.service...
Jul 2 07:44:12.711464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 2 07:44:12.713487 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 2 07:44:12.715283 systemd[1]: Starting systemd-sysext.service...
Jul 2 07:44:12.717651 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1050 (bootctl)
Jul 2 07:44:12.718442 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 2 07:44:12.720078 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 2 07:44:12.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:12.726450 systemd[1]: Unmounting usr-share-oem.mount...
Jul 2 07:44:12.730842 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 2 07:44:12.730957 systemd[1]: Unmounted usr-share-oem.mount.
Jul 2 07:44:12.745063 kernel: loop0: detected capacity change from 0 to 211296
Jul 2 07:44:12.749497 systemd-fsck[1059]: fsck.fat 4.2 (2021-01-31)
Jul 2 07:44:12.749497 systemd-fsck[1059]: /dev/vda1: 789 files, 119238/258078 clusters
Jul 2 07:44:12.750809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 2 07:44:12.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:12.753367 systemd[1]: Mounting boot.mount...
Jul 2 07:44:12.764230 systemd[1]: Mounted boot.mount.
Jul 2 07:44:13.048987 systemd[1]: Finished systemd-boot-update.service.
Jul 2 07:44:13.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.050830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 07:44:13.051362 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 2 07:44:13.052032 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 07:44:13.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.069036 kernel: loop1: detected capacity change from 0 to 211296
Jul 2 07:44:13.072157 (sd-sysext)[1067]: Using extensions 'kubernetes'.
Jul 2 07:44:13.072456 (sd-sysext)[1067]: Merged extensions into '/usr'.
Jul 2 07:44:13.086431 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.087633 systemd[1]: Mounting usr-share-oem.mount...
Jul 2 07:44:13.088665 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.090056 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:44:13.091836 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:44:13.093619 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:44:13.094359 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.094456 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:13.094558 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.096047 ldconfig[1049]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 07:44:13.096756 systemd[1]: Mounted usr-share-oem.mount.
Jul 2 07:44:13.097760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:44:13.097873 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:44:13.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.098988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:44:13.099114 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:44:13.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.100248 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:44:13.100338 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:44:13.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.101481 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:44:13.101579 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.102378 systemd[1]: Finished systemd-sysext.service.
Jul 2 07:44:13.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.117834 systemd[1]: Starting ensure-sysext.service...
Jul 2 07:44:13.119324 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 2 07:44:13.123345 systemd[1]: Reloading.
Jul 2 07:44:13.131985 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 07:44:13.132622 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 07:44:13.133988 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 07:44:13.171314 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2024-07-02T07:44:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:44:13.171340 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2024-07-02T07:44:13Z" level=info msg="torcx already run"
Jul 2 07:44:13.232783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:44:13.232799 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:44:13.249595 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:44:13.299000 audit: BPF prog-id=24 op=LOAD
Jul 2 07:44:13.299000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 07:44:13.299000 audit: BPF prog-id=25 op=LOAD
Jul 2 07:44:13.299000 audit: BPF prog-id=26 op=LOAD
Jul 2 07:44:13.299000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 07:44:13.299000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 07:44:13.300000 audit: BPF prog-id=27 op=LOAD
Jul 2 07:44:13.300000 audit: BPF prog-id=28 op=LOAD
Jul 2 07:44:13.300000 audit: BPF prog-id=18 op=UNLOAD
Jul 2 07:44:13.300000 audit: BPF prog-id=19 op=UNLOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=29 op=LOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=20 op=UNLOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=30 op=LOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=21 op=UNLOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=31 op=LOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=32 op=LOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=22 op=UNLOAD
Jul 2 07:44:13.301000 audit: BPF prog-id=23 op=UNLOAD
Jul 2 07:44:13.304146 systemd[1]: Finished ldconfig.service.
Jul 2 07:44:13.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.305738 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 2 07:44:13.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.308803 systemd[1]: Starting audit-rules.service...
Jul 2 07:44:13.310363 systemd[1]: Starting clean-ca-certificates.service...
Jul 2 07:44:13.312289 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 2 07:44:13.314000 audit: BPF prog-id=33 op=LOAD
Jul 2 07:44:13.314598 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:44:13.315000 audit: BPF prog-id=34 op=LOAD
Jul 2 07:44:13.316537 systemd[1]: Starting systemd-timesyncd.service...
Jul 2 07:44:13.318191 systemd[1]: Starting systemd-update-utmp.service...
Jul 2 07:44:13.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.319435 systemd[1]: Finished clean-ca-certificates.service.
Jul 2 07:44:13.322000 audit[1147]: SYSTEM_BOOT pid=1147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.325132 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.325341 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.326485 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:44:13.328346 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:44:13.330079 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:44:13.330874 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.331021 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:13.331152 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:44:13.331256 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.332596 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 07:44:13.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.334076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:44:13.334170 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:44:13.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.335411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:44:13.335504 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:44:13.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.336789 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:44:13.336873 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:44:13.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.339102 systemd[1]: Finished systemd-update-utmp.service.
Jul 2 07:44:13.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.340888 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.341074 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.341903 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:44:13.343515 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:44:13.345138 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:44:13.345984 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.346090 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:13.347051 systemd[1]: Starting systemd-update-done.service...
Jul 2 07:44:13.347975 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:44:13.348072 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.348860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:44:13.348956 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:44:13.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.350308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:44:13.350397 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:44:13.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.351667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:44:13.351761 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:44:13.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:44:13.352993 systemd[1]: Finished systemd-update-done.service.
Jul 2 07:44:13.353629 augenrules[1163]: No rules
Jul 2 07:44:13.353000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 07:44:13.353000 audit[1163]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc11a9ff70 a2=420 a3=0 items=0 ppid=1136 pid=1163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:44:13.353000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 07:44:13.354353 systemd[1]: Finished audit-rules.service.
Jul 2 07:44:13.357962 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.358208 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.359716 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:44:13.361297 systemd[1]: Starting modprobe@drm.service...
Jul 2 07:44:13.362899 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:44:13.364652 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:44:13.365519 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:44:13.365620 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:13.366664 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 07:44:13.367762 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:44:13.367853 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:44:13.368781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:44:13.368872 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:44:14.563556 systemd-timesyncd[1145]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 07:44:14.563609 systemd-timesyncd[1145]: Initial clock synchronization to Tue 2024-07-02 07:44:14.563438 UTC.
Jul 2 07:44:14.563650 systemd[1]: Started systemd-timesyncd.service.
Jul 2 07:44:14.565079 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 07:44:14.565170 systemd[1]: Finished modprobe@drm.service.
Jul 2 07:44:14.565717 systemd-resolved[1142]: Positive Trust Anchors:
Jul 2 07:44:14.565731 systemd-resolved[1142]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:44:14.565758 systemd-resolved[1142]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:44:14.566336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:44:14.566430 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:44:14.567739 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:44:14.567824 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:44:14.569263 systemd[1]: Reached target time-set.target.
Jul 2 07:44:14.570328 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:44:14.570358 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:44:14.570605 systemd[1]: Finished ensure-sysext.service.
Jul 2 07:44:14.573043 systemd-resolved[1142]: Defaulting to hostname 'linux'.
Jul 2 07:44:14.574298 systemd[1]: Started systemd-resolved.service.
Jul 2 07:44:14.575261 systemd[1]: Reached target network.target.
Jul 2 07:44:14.576062 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:44:14.576921 systemd[1]: Reached target sysinit.target.
Jul 2 07:44:14.577782 systemd[1]: Started motdgen.path.
Jul 2 07:44:14.578519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 2 07:44:14.579866 systemd[1]: Started logrotate.timer.
Jul 2 07:44:14.580702 systemd[1]: Started mdadm.timer.
Jul 2 07:44:14.581409 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 2 07:44:14.582326 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 07:44:14.582347 systemd[1]: Reached target paths.target.
Jul 2 07:44:14.583127 systemd[1]: Reached target timers.target.
Jul 2 07:44:14.584137 systemd[1]: Listening on dbus.socket.
Jul 2 07:44:14.585821 systemd[1]: Starting docker.socket...
Jul 2 07:44:14.588497 systemd[1]: Listening on sshd.socket.
Jul 2 07:44:14.589352 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:14.589663 systemd[1]: Listening on docker.socket.
Jul 2 07:44:14.590482 systemd[1]: Reached target sockets.target.
Jul 2 07:44:14.591290 systemd[1]: Reached target basic.target.
Jul 2 07:44:14.592121 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 07:44:14.592143 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 07:44:14.592890 systemd[1]: Starting containerd.service...
Jul 2 07:44:14.594446 systemd[1]: Starting dbus.service...
Jul 2 07:44:14.596107 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 07:44:14.597845 systemd[1]: Starting extend-filesystems.service...
Jul 2 07:44:14.598752 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 07:44:14.599615 systemd[1]: Starting motdgen.service...
Jul 2 07:44:14.600174 jq[1178]: false
Jul 2 07:44:14.601153 systemd[1]: Starting prepare-helm.service...
Jul 2 07:44:14.604903 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 07:44:14.606675 systemd[1]: Starting sshd-keygen.service...
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found loop1
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found sr0
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda1
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda2
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda3
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found usr
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda4
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda6
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda7
Jul 2 07:44:14.608525 extend-filesystems[1179]: Found vda9
Jul 2 07:44:14.608525 extend-filesystems[1179]: Checking size of /dev/vda9
Jul 2 07:44:14.617943 dbus-daemon[1177]: [system] SELinux support is enabled
Jul 2 07:44:14.610324 systemd[1]: Starting systemd-logind.service...
Jul 2 07:44:14.614900 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:44:14.614939 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 07:44:14.615260 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 07:44:14.616518 systemd[1]: Starting update-engine.service...
Jul 2 07:44:14.619888 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 07:44:14.629145 extend-filesystems[1179]: Resized partition /dev/vda9
Jul 2 07:44:14.623066 systemd[1]: Started dbus.service.
Jul 2 07:44:14.630721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 07:44:14.630862 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 07:44:14.631094 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 07:44:14.631216 systemd[1]: Finished motdgen.service.
Jul 2 07:44:14.632542 jq[1201]: true
Jul 2 07:44:14.634232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 07:44:14.634366 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 07:44:14.640431 jq[1206]: true
Jul 2 07:44:14.641253 extend-filesystems[1205]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 07:44:14.643694 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 07:44:14.643716 systemd[1]: Reached target system-config.target.
Jul 2 07:44:14.644923 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 07:44:14.644941 systemd[1]: Reached target user-config.target.
Jul 2 07:44:14.646537 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 07:44:14.653968 tar[1204]: linux-amd64/helm
Jul 2 07:44:14.657560 update_engine[1198]: I0702 07:44:14.657423 1198 main.cc:92] Flatcar Update Engine starting
Jul 2 07:44:14.659072 systemd[1]: Started update-engine.service.
Jul 2 07:44:14.659357 update_engine[1198]: I0702 07:44:14.659325 1198 update_check_scheduler.cc:74] Next update check in 10m50s
Jul 2 07:44:14.661146 systemd[1]: Started locksmithd.service.
Jul 2 07:44:14.663907 env[1207]: time="2024-07-02T07:44:14.663867722Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 07:44:14.682541 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 07:44:14.691235 env[1207]: time="2024-07-02T07:44:14.691193859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 07:44:14.704199 env[1207]: time="2024-07-02T07:44:14.704174652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:44:14.704680 systemd-logind[1191]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 07:44:14.704707 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 07:44:14.705428 env[1207]: time="2024-07-02T07:44:14.705379502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:44:14.705428 env[1207]: time="2024-07-02T07:44:14.705426209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:44:14.705659 env[1207]: time="2024-07-02T07:44:14.705633458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:44:14.705659 env[1207]: time="2024-07-02T07:44:14.705658214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 07:44:14.705764 env[1207]: time="2024-07-02T07:44:14.705675657Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 07:44:14.705764 env[1207]: time="2024-07-02T07:44:14.705687039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 07:44:14.705808 env[1207]: time="2024-07-02T07:44:14.705772369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:44:14.705877 systemd-logind[1191]: New seat seat0.
Jul 2 07:44:14.705984 env[1207]: time="2024-07-02T07:44:14.705958738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:44:14.706377 env[1207]: time="2024-07-02T07:44:14.706275001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:44:14.706377 env[1207]: time="2024-07-02T07:44:14.706333521Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 07:44:14.706446 extend-filesystems[1205]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 07:44:14.706446 extend-filesystems[1205]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 07:44:14.706446 extend-filesystems[1205]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 07:44:14.712136 extend-filesystems[1179]: Resized filesystem in /dev/vda9
Jul 2 07:44:14.707079 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 07:44:14.713183 env[1207]: time="2024-07-02T07:44:14.706622112Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 07:44:14.713183 env[1207]: time="2024-07-02T07:44:14.706640667Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 07:44:14.707245 systemd[1]: Finished extend-filesystems.service.
Jul 2 07:44:14.713758 systemd[1]: Started systemd-logind.service.
Jul 2 07:44:14.715577 bash[1233]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 07:44:14.716174 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717640956Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717775258Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717790317Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717817898Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717830873Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717843707Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717855318Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717869345Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717883020Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717896175Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717909480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.717921523Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.718019667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 07:44:14.719541 env[1207]: time="2024-07-02T07:44:14.718084117Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718301014Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718322625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718334407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718375474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718387166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718398086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718409788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718421420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718433673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718446587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718458910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718471684Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718589756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718604223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.720967 env[1207]: time="2024-07-02T07:44:14.718615394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.719767 systemd[1]: Started containerd.service.
Jul 2 07:44:14.721310 env[1207]: time="2024-07-02T07:44:14.718627156Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 07:44:14.721310 env[1207]: time="2024-07-02T07:44:14.718642054Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 07:44:14.721310 env[1207]: time="2024-07-02T07:44:14.718651732Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 07:44:14.721310 env[1207]: time="2024-07-02T07:44:14.718670617Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 07:44:14.721310 env[1207]: time="2024-07-02T07:44:14.718708228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.718900919Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.718949761Z" level=info msg="Connect containerd service"
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.718982212Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.719441483Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.719651056Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.719682114Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 07:44:14.721406 env[1207]: time="2024-07-02T07:44:14.719722139Z" level=info msg="containerd successfully booted in 0.056392s"
Jul 2 07:44:14.724840 env[1207]: time="2024-07-02T07:44:14.724714301Z" level=info msg="Start subscribing containerd event"
Jul 2 07:44:14.724840 env[1207]: time="2024-07-02T07:44:14.724783090Z" level=info msg="Start recovering state"
Jul 2 07:44:14.724840 env[1207]: time="2024-07-02T07:44:14.724836691Z" level=info msg="Start event monitor"
Jul 2 07:44:14.724925 env[1207]: time="2024-07-02T07:44:14.724846650Z" level=info msg="Start snapshots syncer"
Jul 2 07:44:14.724925 env[1207]: time="2024-07-02T07:44:14.724857630Z" level=info msg="Start cni network conf syncer for default"
Jul 2 07:44:14.724925 env[1207]: time="2024-07-02T07:44:14.724864293Z" level=info msg="Start streaming server"
Jul 2 07:44:14.731187 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 07:44:15.020683 tar[1204]: linux-amd64/LICENSE
Jul 2 07:44:15.020860 tar[1204]: linux-amd64/README.md
Jul 2 07:44:15.024751 systemd[1]: Finished prepare-helm.service.
Jul 2 07:44:15.602736 systemd-networkd[1022]: eth0: Gained IPv6LL
Jul 2 07:44:15.604327 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 07:44:15.605651 systemd[1]: Reached target network-online.target.
Jul 2 07:44:15.607865 systemd[1]: Starting kubelet.service...
Jul 2 07:44:15.708191 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 07:44:15.725747 systemd[1]: Finished sshd-keygen.service.
Jul 2 07:44:15.728273 systemd[1]: Starting issuegen.service...
Jul 2 07:44:15.733536 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 07:44:15.733655 systemd[1]: Finished issuegen.service.
Jul 2 07:44:15.735628 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 07:44:15.740841 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 07:44:15.742804 systemd[1]: Started getty@tty1.service.
Jul 2 07:44:15.744726 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 07:44:15.745766 systemd[1]: Reached target getty.target.
Jul 2 07:44:16.143656 systemd[1]: Started kubelet.service.
Jul 2 07:44:16.145198 systemd[1]: Reached target multi-user.target.
Jul 2 07:44:16.147317 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 07:44:16.154095 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 07:44:16.154218 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 07:44:16.155372 systemd[1]: Startup finished in 560ms (kernel) + 4.658s (initrd) + 5.641s (userspace) = 10.860s.
Jul 2 07:44:16.583056 kubelet[1258]: E0702 07:44:16.582904 1258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:44:16.584769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:44:16.584879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:44:23.755065 systemd[1]: Created slice system-sshd.slice.
Jul 2 07:44:23.755990 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:45102.service.
Jul 2 07:44:23.800648 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 45102 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:23.802244 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:23.809302 systemd[1]: Created slice user-500.slice.
Jul 2 07:44:23.810236 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 07:44:23.811751 systemd-logind[1191]: New session 1 of user core.
Jul 2 07:44:23.817103 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 07:44:23.818137 systemd[1]: Starting user@500.service...
Jul 2 07:44:23.820720 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:23.889518 systemd[1271]: Queued start job for default target default.target.
Jul 2 07:44:23.889975 systemd[1271]: Reached target paths.target.
Jul 2 07:44:23.889994 systemd[1271]: Reached target sockets.target.
Jul 2 07:44:23.890006 systemd[1271]: Reached target timers.target.
Jul 2 07:44:23.890016 systemd[1271]: Reached target basic.target.
Jul 2 07:44:23.890048 systemd[1271]: Reached target default.target.
Jul 2 07:44:23.890070 systemd[1271]: Startup finished in 63ms.
Jul 2 07:44:23.890163 systemd[1]: Started user@500.service.
Jul 2 07:44:23.891112 systemd[1]: Started session-1.scope.
Jul 2 07:44:23.942374 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:45108.service.
Jul 2 07:44:23.986121 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 45108 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:23.987195 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:23.990560 systemd-logind[1191]: New session 2 of user core.
Jul 2 07:44:23.991513 systemd[1]: Started session-2.scope.
Jul 2 07:44:24.043177 sshd[1280]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:24.046108 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:45108.service: Deactivated successfully.
Jul 2 07:44:24.046686 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 07:44:24.047234 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit.
Jul 2 07:44:24.048153 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:45112.service.
Jul 2 07:44:24.048869 systemd-logind[1191]: Removed session 2.
Jul 2 07:44:24.088303 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 45112 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:24.089329 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:24.092588 systemd-logind[1191]: New session 3 of user core.
Jul 2 07:44:24.093473 systemd[1]: Started session-3.scope.
Jul 2 07:44:24.142158 sshd[1286]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:24.145475 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:45126.service.
Jul 2 07:44:24.145917 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:45112.service: Deactivated successfully.
Jul 2 07:44:24.146399 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 07:44:24.146834 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit.
Jul 2 07:44:24.147591 systemd-logind[1191]: Removed session 3.
Jul 2 07:44:24.185610 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 45126 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:24.186451 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:24.189317 systemd-logind[1191]: New session 4 of user core.
Jul 2 07:44:24.190008 systemd[1]: Started session-4.scope.
Jul 2 07:44:24.241951 sshd[1291]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:24.244475 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:45126.service: Deactivated successfully.
Jul 2 07:44:24.244975 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 07:44:24.245396 systemd-logind[1191]: Session 4 logged out. Waiting for processes to exit.
Jul 2 07:44:24.246375 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:45128.service.
Jul 2 07:44:24.246989 systemd-logind[1191]: Removed session 4.
Jul 2 07:44:24.286498 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 45128 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:24.287392 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:24.290518 systemd-logind[1191]: New session 5 of user core.
Jul 2 07:44:24.291263 systemd[1]: Started session-5.scope.
Jul 2 07:44:24.344355 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 07:44:24.344554 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 07:44:24.363932 systemd[1]: Starting docker.service...
Jul 2 07:44:24.398950 env[1313]: time="2024-07-02T07:44:24.398894703Z" level=info msg="Starting up"
Jul 2 07:44:24.400047 env[1313]: time="2024-07-02T07:44:24.400019603Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 07:44:24.400047 env[1313]: time="2024-07-02T07:44:24.400035252Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 07:44:24.400137 env[1313]: time="2024-07-02T07:44:24.400051522Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 07:44:24.400137 env[1313]: time="2024-07-02T07:44:24.400061180Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 07:44:24.401738 env[1313]: time="2024-07-02T07:44:24.401702038Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 07:44:24.401738 env[1313]: time="2024-07-02T07:44:24.401717697Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 07:44:24.401738 env[1313]: time="2024-07-02T07:44:24.401735881Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 07:44:24.401847 env[1313]: time="2024-07-02T07:44:24.401743445Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 07:44:24.704293 env[1313]: time="2024-07-02T07:44:24.704169823Z" level=info msg="Loading containers: start."
Jul 2 07:44:24.814543 kernel: Initializing XFRM netlink socket
Jul 2 07:44:24.840335 env[1313]: time="2024-07-02T07:44:24.840283464Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 07:44:24.886052 systemd-networkd[1022]: docker0: Link UP
Jul 2 07:44:24.895352 env[1313]: time="2024-07-02T07:44:24.895315649Z" level=info msg="Loading containers: done."
Jul 2 07:44:24.904743 env[1313]: time="2024-07-02T07:44:24.904698456Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 07:44:24.904879 env[1313]: time="2024-07-02T07:44:24.904817328Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 07:44:24.904908 env[1313]: time="2024-07-02T07:44:24.904888582Z" level=info msg="Daemon has completed initialization"
Jul 2 07:44:24.920242 systemd[1]: Started docker.service.
Jul 2 07:44:24.927643 env[1313]: time="2024-07-02T07:44:24.927596407Z" level=info msg="API listen on /run/docker.sock"
Jul 2 07:44:25.558713 env[1207]: time="2024-07-02T07:44:25.558662151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 07:44:26.521291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442268694.mount: Deactivated successfully.
Jul 2 07:44:26.835647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 07:44:26.835841 systemd[1]: Stopped kubelet.service.
Jul 2 07:44:26.837316 systemd[1]: Starting kubelet.service...
Jul 2 07:44:26.911884 systemd[1]: Started kubelet.service.
Jul 2 07:44:26.961182 kubelet[1454]: E0702 07:44:26.961107 1454 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:44:26.964455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:44:26.964577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:44:28.870987 env[1207]: time="2024-07-02T07:44:28.870928453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:28.872798 env[1207]: time="2024-07-02T07:44:28.872756120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:28.874200 env[1207]: time="2024-07-02T07:44:28.874176474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:28.875839 env[1207]: time="2024-07-02T07:44:28.875790190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:28.876528 env[1207]: time="2024-07-02T07:44:28.876471057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\""
Jul 2 07:44:28.885050 env[1207]: time="2024-07-02T07:44:28.885014950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 07:44:31.628485 env[1207]: time="2024-07-02T07:44:31.628426045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:31.630457 env[1207]: time="2024-07-02T07:44:31.630428039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:31.632168 env[1207]: time="2024-07-02T07:44:31.632141693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:31.634612 env[1207]: time="2024-07-02T07:44:31.634568013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:31.635283 env[1207]: time="2024-07-02T07:44:31.635260742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 07:44:31.644177 env[1207]: time="2024-07-02T07:44:31.644134434Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 07:44:32.919730 env[1207]: time="2024-07-02T07:44:32.919656443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:32.921580 env[1207]: time="2024-07-02T07:44:32.921522753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:32.923401 env[1207]: time="2024-07-02T07:44:32.923354558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:32.925055 env[1207]: time="2024-07-02T07:44:32.925026604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:32.925765 env[1207]: time="2024-07-02T07:44:32.925731546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\""
Jul 2 07:44:32.936232 env[1207]: time="2024-07-02T07:44:32.936192965Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 07:44:34.034485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580154431.mount: Deactivated successfully.
Jul 2 07:44:34.991634 env[1207]: time="2024-07-02T07:44:34.991571643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:34.993886 env[1207]: time="2024-07-02T07:44:34.993855987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:34.995619 env[1207]: time="2024-07-02T07:44:34.995583647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:34.997321 env[1207]: time="2024-07-02T07:44:34.997264159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:34.997713 env[1207]: time="2024-07-02T07:44:34.997665802Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\""
Jul 2 07:44:35.007651 env[1207]: time="2024-07-02T07:44:35.007622194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 07:44:35.546593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180639241.mount: Deactivated successfully.
Jul 2 07:44:36.816106 env[1207]: time="2024-07-02T07:44:36.816031387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:36.817956 env[1207]: time="2024-07-02T07:44:36.817924587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:36.820135 env[1207]: time="2024-07-02T07:44:36.820107591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:36.822035 env[1207]: time="2024-07-02T07:44:36.821989370Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:36.822807 env[1207]: time="2024-07-02T07:44:36.822771216Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 07:44:36.832349 env[1207]: time="2024-07-02T07:44:36.832310355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 07:44:37.215301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 07:44:37.215554 systemd[1]: Stopped kubelet.service.
Jul 2 07:44:37.217009 systemd[1]: Starting kubelet.service...
Jul 2 07:44:37.302223 systemd[1]: Started kubelet.service.
Jul 2 07:44:37.567232 kubelet[1498]: E0702 07:44:37.567092 1498 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:44:37.568902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:44:37.569017 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:44:37.659066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679447947.mount: Deactivated successfully.
Jul 2 07:44:37.664647 env[1207]: time="2024-07-02T07:44:37.664591820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:37.666327 env[1207]: time="2024-07-02T07:44:37.666297008Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:37.667978 env[1207]: time="2024-07-02T07:44:37.667922316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:37.669397 env[1207]: time="2024-07-02T07:44:37.669358049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:37.669839 env[1207]: time="2024-07-02T07:44:37.669804726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 07:44:37.678317 env[1207]: time="2024-07-02T07:44:37.678258711Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 07:44:38.183583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333749467.mount: Deactivated successfully.
Jul 2 07:44:40.790949 env[1207]: time="2024-07-02T07:44:40.790887851Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:40.792758 env[1207]: time="2024-07-02T07:44:40.792717743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:40.794432 env[1207]: time="2024-07-02T07:44:40.794387434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:40.796130 env[1207]: time="2024-07-02T07:44:40.796094315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:40.796826 env[1207]: time="2024-07-02T07:44:40.796789589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 07:44:43.115494 systemd[1]: Stopped kubelet.service.
Jul 2 07:44:43.117272 systemd[1]: Starting kubelet.service...
Jul 2 07:44:43.132609 systemd[1]: Reloading.
Jul 2 07:44:43.192602 /usr/lib/systemd/system-generators/torcx-generator[1617]: time="2024-07-02T07:44:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:44:43.192627 /usr/lib/systemd/system-generators/torcx-generator[1617]: time="2024-07-02T07:44:43Z" level=info msg="torcx already run"
Jul 2 07:44:43.425759 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:44:43.425774 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:44:43.442851 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:44:43.514955 systemd[1]: Started kubelet.service.
Jul 2 07:44:43.518433 systemd[1]: Stopping kubelet.service...
Jul 2 07:44:43.518738 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 07:44:43.518909 systemd[1]: Stopped kubelet.service.
Jul 2 07:44:43.520262 systemd[1]: Starting kubelet.service...
Jul 2 07:44:43.596081 systemd[1]: Started kubelet.service.
Jul 2 07:44:43.638783 kubelet[1669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:44:43.638783 kubelet[1669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 07:44:43.638783 kubelet[1669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:44:43.639291 kubelet[1669]: I0702 07:44:43.638873 1669 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 07:44:44.037635 kubelet[1669]: I0702 07:44:44.037604 1669 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 07:44:44.037635 kubelet[1669]: I0702 07:44:44.037633 1669 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 07:44:44.037870 kubelet[1669]: I0702 07:44:44.037856 1669 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 07:44:44.057541 kubelet[1669]: I0702 07:44:44.057486 1669 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 07:44:44.057926 kubelet[1669]: E0702 07:44:44.057909 1669 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.069249 kubelet[1669]: I0702 07:44:44.069212 1669 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 07:44:44.069446 kubelet[1669]: I0702 07:44:44.069423 1669 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 07:44:44.069680 kubelet[1669]: I0702 07:44:44.069652 1669 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 07:44:44.070177 kubelet[1669]: I0702 07:44:44.070152 1669 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 07:44:44.070217 kubelet[1669]: I0702 07:44:44.070179 1669 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 07:44:44.070341 kubelet[1669]: I0702 07:44:44.070318 1669 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:44:44.070445 kubelet[1669]: I0702 07:44:44.070424 1669 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 07:44:44.070490 kubelet[1669]: I0702 07:44:44.070476 1669 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 07:44:44.070539 kubelet[1669]: I0702 07:44:44.070527 1669 kubelet.go:312] "Adding apiserver pod source"
Jul 2 07:44:44.070567 kubelet[1669]: I0702 07:44:44.070558 1669 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 07:44:44.073321 kubelet[1669]: I0702 07:44:44.073300 1669 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 07:44:44.081907 kubelet[1669]: W0702 07:44:44.081846 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.081907 kubelet[1669]: E0702 07:44:44.081897 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.088570 kubelet[1669]: I0702 07:44:44.088540 1669 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 07:44:44.090349 kubelet[1669]: W0702 07:44:44.090314 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.090400 kubelet[1669]: E0702 07:44:44.090358 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.090752 kubelet[1669]: W0702 07:44:44.090735 1669 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 07:44:44.091216 kubelet[1669]: I0702 07:44:44.091197 1669 server.go:1256] "Started kubelet"
Jul 2 07:44:44.091354 kubelet[1669]: I0702 07:44:44.091337 1669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 07:44:44.092165 kubelet[1669]: I0702 07:44:44.091629 1669 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 07:44:44.092364 kubelet[1669]: I0702 07:44:44.092352 1669 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 07:44:44.093467 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 07:44:44.093564 kubelet[1669]: I0702 07:44:44.093552 1669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 07:44:44.093859 kubelet[1669]: I0702 07:44:44.093841 1669 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 07:44:44.097937 kubelet[1669]: I0702 07:44:44.097918 1669 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 07:44:44.098007 kubelet[1669]: I0702 07:44:44.097990 1669 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 07:44:44.098047 kubelet[1669]: I0702 07:44:44.098038 1669 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 07:44:44.098324 kubelet[1669]: W0702 07:44:44.098278 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.098324 kubelet[1669]: E0702 07:44:44.098313 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.101491 kubelet[1669]: E0702 07:44:44.101366 1669 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 07:44:44.101769 kubelet[1669]: I0702 07:44:44.101756 1669 factory.go:221] Registration of the containerd container factory successfully
Jul 2 07:44:44.101769 kubelet[1669]: I0702 07:44:44.101768 1669 factory.go:221] Registration of the systemd container factory successfully
Jul 2 07:44:44.101840 kubelet[1669]: I0702 07:44:44.101814 1669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 07:44:44.102069 kubelet[1669]: E0702 07:44:44.102050 1669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms"
Jul 2 07:44:44.104537 kubelet[1669]: E0702 07:44:44.103960 1669 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de55a726374278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 07:44:44.091163256 +0000 UTC m=+0.491635163,LastTimestamp:2024-07-02 07:44:44.091163256 +0000 UTC m=+0.491635163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 07:44:44.110848 kubelet[1669]: I0702 07:44:44.110823 1669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 07:44:44.111647 kubelet[1669]: I0702 07:44:44.111634 1669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 07:44:44.111700 kubelet[1669]: I0702 07:44:44.111654 1669 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 07:44:44.111700 kubelet[1669]: I0702 07:44:44.111671 1669 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 07:44:44.111756 kubelet[1669]: E0702 07:44:44.111710 1669 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 07:44:44.112914 kubelet[1669]: W0702 07:44:44.112577 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.112966 kubelet[1669]: E0702 07:44:44.112917 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.113747 kubelet[1669]: I0702 07:44:44.113728 1669 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 07:44:44.113747 kubelet[1669]: I0702 07:44:44.113743 1669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 07:44:44.113818 kubelet[1669]: I0702 07:44:44.113755 1669 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:44:44.199464 kubelet[1669]: I0702 07:44:44.199452 1669 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 07:44:44.199773 kubelet[1669]: E0702 07:44:44.199747 1669 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Jul 2 07:44:44.211979 kubelet[1669]: E0702 07:44:44.211953 1669 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 07:44:44.302669 kubelet[1669]: E0702 07:44:44.302609 1669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms"
Jul 2 07:44:44.399119 kubelet[1669]: I0702 07:44:44.399090 1669 policy_none.go:49] "None policy: Start"
Jul 2 07:44:44.399756 kubelet[1669]: I0702 07:44:44.399728 1669 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 07:44:44.399822 kubelet[1669]: I0702 07:44:44.399764 1669 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 07:44:44.400593 kubelet[1669]: I0702 07:44:44.400579 1669 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 07:44:44.400879 kubelet[1669]: E0702 07:44:44.400854 1669 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Jul 2 07:44:44.405906 systemd[1]: Created slice kubepods.slice.
Jul 2 07:44:44.408989 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 07:44:44.411017 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 07:44:44.412554 kubelet[1669]: E0702 07:44:44.412534 1669 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 07:44:44.420211 kubelet[1669]: I0702 07:44:44.420188 1669 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 07:44:44.420375 kubelet[1669]: I0702 07:44:44.420364 1669 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 07:44:44.421349 kubelet[1669]: E0702 07:44:44.421322 1669 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 07:44:44.686726 kubelet[1669]: E0702 07:44:44.686613 1669 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de55a726374278 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 07:44:44.091163256 +0000 UTC m=+0.491635163,LastTimestamp:2024-07-02 07:44:44.091163256 +0000 UTC m=+0.491635163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 07:44:44.703169 kubelet[1669]: E0702 07:44:44.703141 1669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms"
Jul 2 07:44:44.802657 kubelet[1669]: I0702 07:44:44.802626 1669 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 07:44:44.802952 kubelet[1669]: E0702 07:44:44.802937 1669 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Jul 2 07:44:44.813257 kubelet[1669]: I0702 07:44:44.813202 1669 topology_manager.go:215] "Topology Admit Handler" podUID="bf7fb4f699e7efdb613894596e1ad032" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 07:44:44.815955 kubelet[1669]: I0702 07:44:44.815933 1669 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 07:44:44.816635 kubelet[1669]: I0702 07:44:44.816606 1669 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 07:44:44.821611 systemd[1]: Created slice kubepods-burstable-podbf7fb4f699e7efdb613894596e1ad032.slice.
Jul 2 07:44:44.841788 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice.
Jul 2 07:44:44.849580 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice.
Jul 2 07:44:44.889605 kubelet[1669]: W0702 07:44:44.889480 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.889605 kubelet[1669]: E0702 07:44:44.889580 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.901824 kubelet[1669]: I0702 07:44:44.901795 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:44.901917 kubelet[1669]: I0702 07:44:44.901841 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:44.901917 kubelet[1669]: I0702 07:44:44.901873 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 07:44:44.901917 kubelet[1669]: I0702 07:44:44.901895 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf7fb4f699e7efdb613894596e1ad032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf7fb4f699e7efdb613894596e1ad032\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:44.901917 kubelet[1669]: I0702 07:44:44.901916 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf7fb4f699e7efdb613894596e1ad032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf7fb4f699e7efdb613894596e1ad032\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:44.902043 kubelet[1669]: I0702 07:44:44.901940 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:44.902043 kubelet[1669]: I0702 07:44:44.901962 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:44.902043 kubelet[1669]: I0702 07:44:44.902007 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:44.902043 kubelet[1669]: I0702 07:44:44.902032 1669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf7fb4f699e7efdb613894596e1ad032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf7fb4f699e7efdb613894596e1ad032\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:44.955558 kubelet[1669]: W0702 07:44:44.955386 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:44.955558 kubelet[1669]: E0702 07:44:44.955451 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:45.021419 kubelet[1669]: W0702 07:44:45.021362 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:45.021419 kubelet[1669]: E0702 07:44:45.021404 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:45.140859 kubelet[1669]: E0702 07:44:45.140814 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:45.141531 env[1207]: time="2024-07-02T07:44:45.141456741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf7fb4f699e7efdb613894596e1ad032,Namespace:kube-system,Attempt:0,}"
Jul 2 07:44:45.148631 kubelet[1669]: E0702 07:44:45.148611 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:45.149052 env[1207]: time="2024-07-02T07:44:45.149002572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}"
Jul 2 07:44:45.151337 kubelet[1669]: E0702 07:44:45.151307 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:45.151876 env[1207]: time="2024-07-02T07:44:45.151829463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}"
Jul 2 07:44:45.281866 kubelet[1669]: W0702 07:44:45.281715 1669 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:45.281866 kubelet[1669]: E0702 07:44:45.281791 1669 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:45.504529 kubelet[1669]: E0702 07:44:45.504473 1669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s"
Jul 2 07:44:45.604707 kubelet[1669]: I0702 07:44:45.604646 1669 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 07:44:45.604933 kubelet[1669]: E0702 07:44:45.604911 1669 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Jul 2 07:44:46.099130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284859170.mount: Deactivated successfully.
Jul 2 07:44:46.106167 env[1207]: time="2024-07-02T07:44:46.106119438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.107612 env[1207]: time="2024-07-02T07:44:46.107485640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.110688 env[1207]: time="2024-07-02T07:44:46.110652900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.112289 env[1207]: time="2024-07-02T07:44:46.112243092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.113930 env[1207]: time="2024-07-02T07:44:46.113881805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.115379 env[1207]: time="2024-07-02T07:44:46.115350970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.116234 env[1207]: time="2024-07-02T07:44:46.116209310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.119277 env[1207]: time="2024-07-02T07:44:46.119231518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.120572 env[1207]: time="2024-07-02T07:44:46.120537918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.123856 env[1207]: time="2024-07-02T07:44:46.123826906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.126913 env[1207]: time="2024-07-02T07:44:46.126873158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.132435 env[1207]: time="2024-07-02T07:44:46.132406986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:44:46.138537 env[1207]: time="2024-07-02T07:44:46.138463845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:44:46.138640 env[1207]: time="2024-07-02T07:44:46.138568612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:44:46.138640 env[1207]: time="2024-07-02T07:44:46.138587156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:44:46.138860 env[1207]: time="2024-07-02T07:44:46.138803502Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5b9372cbf2e9f3d6d40a192b26c747662d94163357d3ac0bada064b91eb40fe pid=1711 runtime=io.containerd.runc.v2
Jul 2 07:44:46.156198 systemd[1]: Started cri-containerd-a5b9372cbf2e9f3d6d40a192b26c747662d94163357d3ac0bada064b91eb40fe.scope.
Jul 2 07:44:46.158732 env[1207]: time="2024-07-02T07:44:46.158426362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:44:46.158732 env[1207]: time="2024-07-02T07:44:46.158457090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:44:46.158732 env[1207]: time="2024-07-02T07:44:46.158467770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:44:46.161385 env[1207]: time="2024-07-02T07:44:46.158726895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c3d8b9f0d6296957b5042f62e86c8e35cf8dd0561621d9f925e11e4cc87a66d pid=1739 runtime=io.containerd.runc.v2
Jul 2 07:44:46.163911 env[1207]: time="2024-07-02T07:44:46.163456435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:44:46.163911 env[1207]: time="2024-07-02T07:44:46.163528480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:44:46.163911 env[1207]: time="2024-07-02T07:44:46.163557655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:44:46.163911 env[1207]: time="2024-07-02T07:44:46.163697607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b7825cec6bcb10c770d0e350b3a02c134b1c6df0496015c13eee2f61bee85d3 pid=1758 runtime=io.containerd.runc.v2
Jul 2 07:44:46.174661 systemd[1]: Started cri-containerd-7c3d8b9f0d6296957b5042f62e86c8e35cf8dd0561621d9f925e11e4cc87a66d.scope.
Jul 2 07:44:46.178883 systemd[1]: Started cri-containerd-5b7825cec6bcb10c770d0e350b3a02c134b1c6df0496015c13eee2f61bee85d3.scope.
Jul 2 07:44:46.194150 kubelet[1669]: E0702 07:44:46.194105 1669 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 2 07:44:46.197743 env[1207]: time="2024-07-02T07:44:46.197663165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b9372cbf2e9f3d6d40a192b26c747662d94163357d3ac0bada064b91eb40fe\""
Jul 2 07:44:46.198772 kubelet[1669]: E0702 07:44:46.198747 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:46.200688 env[1207]: time="2024-07-02T07:44:46.200649265Z" level=info msg="CreateContainer within sandbox \"a5b9372cbf2e9f3d6d40a192b26c747662d94163357d3ac0bada064b91eb40fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 07:44:46.214162 env[1207]: time="2024-07-02T07:44:46.214097515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c3d8b9f0d6296957b5042f62e86c8e35cf8dd0561621d9f925e11e4cc87a66d\""
Jul 2 07:44:46.216542 kubelet[1669]: E0702 07:44:46.215016 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:46.217218 env[1207]: time="2024-07-02T07:44:46.217177852Z" level=info msg="CreateContainer within sandbox \"7c3d8b9f0d6296957b5042f62e86c8e35cf8dd0561621d9f925e11e4cc87a66d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 07:44:46.217653 env[1207]: time="2024-07-02T07:44:46.217604652Z" level=info msg="CreateContainer within sandbox \"a5b9372cbf2e9f3d6d40a192b26c747662d94163357d3ac0bada064b91eb40fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"385d7b9419c81f83209c0e68e110ca9938ecee1968b37cc1f2b52bccfdb7787c\""
Jul 2 07:44:46.217993 env[1207]: time="2024-07-02T07:44:46.217956693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf7fb4f699e7efdb613894596e1ad032,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b7825cec6bcb10c770d0e350b3a02c134b1c6df0496015c13eee2f61bee85d3\""
Jul 2 07:44:46.218154 env[1207]: time="2024-07-02T07:44:46.218122343Z" level=info msg="StartContainer for \"385d7b9419c81f83209c0e68e110ca9938ecee1968b37cc1f2b52bccfdb7787c\""
Jul 2 07:44:46.218413 kubelet[1669]: E0702 07:44:46.218395 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:46.220590 env[1207]: time="2024-07-02T07:44:46.220482630Z" level=info msg="CreateContainer within sandbox \"5b7825cec6bcb10c770d0e350b3a02c134b1c6df0496015c13eee2f61bee85d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 07:44:46.234790 systemd[1]: Started cri-containerd-385d7b9419c81f83209c0e68e110ca9938ecee1968b37cc1f2b52bccfdb7787c.scope.
Jul 2 07:44:46.238232 env[1207]: time="2024-07-02T07:44:46.238188434Z" level=info msg="CreateContainer within sandbox \"7c3d8b9f0d6296957b5042f62e86c8e35cf8dd0561621d9f925e11e4cc87a66d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2dc9f4ec71aedbe8e48881eb7d2adf6000eca4c3dd515ea1b1dfa619de85a60\""
Jul 2 07:44:46.238610 env[1207]: time="2024-07-02T07:44:46.238580660Z" level=info msg="StartContainer for \"c2dc9f4ec71aedbe8e48881eb7d2adf6000eca4c3dd515ea1b1dfa619de85a60\""
Jul 2 07:44:46.245911 env[1207]: time="2024-07-02T07:44:46.245857326Z" level=info msg="CreateContainer within sandbox \"5b7825cec6bcb10c770d0e350b3a02c134b1c6df0496015c13eee2f61bee85d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c3b1771efca31d57efe9e8f1b6d3175cf7b79395d8091523782ac182aea7b9e\""
Jul 2 07:44:46.246474 env[1207]: time="2024-07-02T07:44:46.246444718Z" level=info msg="StartContainer for \"8c3b1771efca31d57efe9e8f1b6d3175cf7b79395d8091523782ac182aea7b9e\""
Jul 2 07:44:46.252568 systemd[1]: Started cri-containerd-c2dc9f4ec71aedbe8e48881eb7d2adf6000eca4c3dd515ea1b1dfa619de85a60.scope.
Jul 2 07:44:46.269685 systemd[1]: Started cri-containerd-8c3b1771efca31d57efe9e8f1b6d3175cf7b79395d8091523782ac182aea7b9e.scope.
Jul 2 07:44:46.282954 env[1207]: time="2024-07-02T07:44:46.282900716Z" level=info msg="StartContainer for \"385d7b9419c81f83209c0e68e110ca9938ecee1968b37cc1f2b52bccfdb7787c\" returns successfully"
Jul 2 07:44:46.302165 env[1207]: time="2024-07-02T07:44:46.302102446Z" level=info msg="StartContainer for \"c2dc9f4ec71aedbe8e48881eb7d2adf6000eca4c3dd515ea1b1dfa619de85a60\" returns successfully"
Jul 2 07:44:46.323610 env[1207]: time="2024-07-02T07:44:46.323543325Z" level=info msg="StartContainer for \"8c3b1771efca31d57efe9e8f1b6d3175cf7b79395d8091523782ac182aea7b9e\" returns successfully"
Jul 2 07:44:47.130418 kubelet[1669]: E0702 07:44:47.130382 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:47.131858 kubelet[1669]: E0702 07:44:47.131836 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:47.133075 kubelet[1669]: E0702 07:44:47.133053 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:47.206859 kubelet[1669]: I0702 07:44:47.206837 1669 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 07:44:47.251979 kubelet[1669]: E0702 07:44:47.251935 1669 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 2 07:44:47.348343 kubelet[1669]: I0702 07:44:47.348293 1669 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jul 2 07:44:47.354749 kubelet[1669]: E0702 07:44:47.354703 1669 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 07:44:47.455622 kubelet[1669]: E0702 07:44:47.455487 1669 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 07:44:47.556592 kubelet[1669]: E0702 07:44:47.556551 1669 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 07:44:47.657111 kubelet[1669]: E0702 07:44:47.657063 1669 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 07:44:48.075466 kubelet[1669]: I0702 07:44:48.075412 1669 apiserver.go:52] "Watching apiserver"
Jul 2 07:44:48.098867 kubelet[1669]: I0702 07:44:48.098811 1669 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 07:44:48.138637 kubelet[1669]: E0702 07:44:48.138617 1669 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:48.139023 kubelet[1669]: E0702 07:44:48.138993 1669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:49.771280 systemd[1]: Reloading.
Jul 2 07:44:49.839403 /usr/lib/systemd/system-generators/torcx-generator[1969]: time="2024-07-02T07:44:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:44:49.839427 /usr/lib/systemd/system-generators/torcx-generator[1969]: time="2024-07-02T07:44:49Z" level=info msg="torcx already run"
Jul 2 07:44:49.896806 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:44:49.896823 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:44:49.913534 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:44:50.001089 kubelet[1669]: I0702 07:44:50.001040 1669 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 07:44:50.001176 systemd[1]: Stopping kubelet.service...
Jul 2 07:44:50.016851 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 07:44:50.017002 systemd[1]: Stopped kubelet.service.
Jul 2 07:44:50.018469 systemd[1]: Starting kubelet.service...
Jul 2 07:44:50.091750 systemd[1]: Started kubelet.service.
Jul 2 07:44:50.138617 kubelet[2014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:44:50.138617 kubelet[2014]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 07:44:50.138617 kubelet[2014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:44:50.138976 kubelet[2014]: I0702 07:44:50.138655 2014 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 07:44:50.143532 kubelet[2014]: I0702 07:44:50.143499 2014 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 07:44:50.143532 kubelet[2014]: I0702 07:44:50.143526 2014 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 07:44:50.143731 kubelet[2014]: I0702 07:44:50.143713 2014 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 07:44:50.145091 kubelet[2014]: I0702 07:44:50.145071 2014 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 07:44:50.146630 kubelet[2014]: I0702 07:44:50.146602 2014 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 07:44:50.154790 sudo[2031]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 07:44:50.154974 sudo[2031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 07:44:50.156171 kubelet[2014]: I0702 07:44:50.156138 2014 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 07:44:50.156399 kubelet[2014]: I0702 07:44:50.156373 2014 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 07:44:50.156648 kubelet[2014]: I0702 07:44:50.156593 2014 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 07:44:50.156648 kubelet[2014]: I0702 07:44:50.156627 2014 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 07:44:50.156648 kubelet[2014]: I0702 07:44:50.156641 2014 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 07:44:50.156795 kubelet[2014]: I0702 07:44:50.156670 2014 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:44:50.156795 kubelet[2014]: I0702 07:44:50.156789 2014 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 07:44:50.156840 kubelet[2014]: I0702 07:44:50.156805 2014 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 07:44:50.157547 kubelet[2014]: I0702 07:44:50.157526 2014 kubelet.go:312] "Adding apiserver pod source"
Jul 2 07:44:50.157582 kubelet[2014]: I0702 07:44:50.157554 2014 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 07:44:50.158640 kubelet[2014]: I0702 07:44:50.158611 2014 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 07:44:50.158810 kubelet[2014]: I0702 07:44:50.158790 2014 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 07:44:50.160605 kubelet[2014]: I0702 07:44:50.160588 2014 server.go:1256] "Started kubelet"
Jul 2 07:44:50.161705 kubelet[2014]: I0702 07:44:50.161688 2014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 07:44:50.162031 kubelet[2014]: I0702 07:44:50.162014 2014 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 07:44:50.162161 kubelet[2014]: I0702 07:44:50.162147 2014 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 07:44:50.162771 kubelet[2014]: I0702 07:44:50.162752 2014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 07:44:50.165565 kubelet[2014]: I0702 07:44:50.163863 2014 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 07:44:50.165946 kubelet[2014]: I0702 07:44:50.165927 2014 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 07:44:50.166009 kubelet[2014]: I0702 07:44:50.165996 2014 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 07:44:50.166099 kubelet[2014]: I0702 07:44:50.166086 2014 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 07:44:50.179846 kubelet[2014]: I0702 07:44:50.179529 2014 factory.go:221] Registration of the containerd container factory successfully
Jul 2 07:44:50.179846 kubelet[2014]: I0702 07:44:50.179552 2014 factory.go:221] Registration of the systemd container factory successfully
Jul 2 07:44:50.179846 kubelet[2014]: I0702 07:44:50.179606 2014 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 07:44:50.183139 kubelet[2014]: E0702 07:44:50.183121 2014 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 07:44:50.193069 kubelet[2014]: I0702 07:44:50.193027 2014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 07:44:50.195885 kubelet[2014]: I0702 07:44:50.195863 2014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 07:44:50.195955 kubelet[2014]: I0702 07:44:50.195891 2014 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 07:44:50.195955 kubelet[2014]: I0702 07:44:50.195909 2014 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 07:44:50.195955 kubelet[2014]: E0702 07:44:50.195947 2014 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 07:44:50.205113 kubelet[2014]: I0702 07:44:50.205093 2014 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 07:44:50.205258 kubelet[2014]: I0702 07:44:50.205244 2014 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 07:44:50.205334 kubelet[2014]: I0702 07:44:50.205321 2014 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:44:50.205544 kubelet[2014]: I0702 07:44:50.205532 2014 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 07:44:50.205647 kubelet[2014]: I0702 07:44:50.205633 2014 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 07:44:50.205715 kubelet[2014]: I0702 07:44:50.205702 2014 policy_none.go:49] "None policy: Start"
Jul 2 07:44:50.206228 kubelet[2014]: I0702 07:44:50.206216 2014 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 07:44:50.206307 kubelet[2014]: I0702 07:44:50.206293 2014 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 07:44:50.206658 kubelet[2014]: I0702 07:44:50.206646 2014 state_mem.go:75] "Updated machine memory state"
Jul 2 07:44:50.209716 kubelet[2014]: I0702 07:44:50.209705 2014 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 07:44:50.211794 kubelet[2014]: I0702 07:44:50.211782 2014 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 07:44:50.268377 kubelet[2014]: I0702 07:44:50.268328 2014 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 07:44:50.273262 kubelet[2014]: I0702 07:44:50.273229 2014 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jul 2 07:44:50.273424 kubelet[2014]: I0702 07:44:50.273295 2014 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jul 2 07:44:50.296702 kubelet[2014]: I0702 07:44:50.296656 2014 topology_manager.go:215] "Topology Admit Handler" podUID="bf7fb4f699e7efdb613894596e1ad032" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 07:44:50.296885 kubelet[2014]: I0702 07:44:50.296765 2014 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 07:44:50.296885 kubelet[2014]: I0702 07:44:50.296808 2014 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 07:44:50.366886 kubelet[2014]: I0702 07:44:50.366775 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf7fb4f699e7efdb613894596e1ad032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf7fb4f699e7efdb613894596e1ad032\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:50.366886 kubelet[2014]: I0702 07:44:50.366811 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf7fb4f699e7efdb613894596e1ad032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf7fb4f699e7efdb613894596e1ad032\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:50.366886 kubelet[2014]: I0702 07:44:50.366832 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf7fb4f699e7efdb613894596e1ad032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf7fb4f699e7efdb613894596e1ad032\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 07:44:50.366886 kubelet[2014]: I0702 07:44:50.366851 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:50.366886 kubelet[2014]: I0702 07:44:50.366867 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:50.367117 kubelet[2014]: I0702 07:44:50.366887 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:50.367117 kubelet[2014]: I0702 07:44:50.366907 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 07:44:50.367117 kubelet[2014]: I0702 07:44:50.366924 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:44:50.367117 kubelet[2014]: I0702 07:44:50.366941 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:44:50.601286 kubelet[2014]: E0702 07:44:50.601249 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:50.601707 kubelet[2014]: E0702 07:44:50.601663 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:50.601868 kubelet[2014]: E0702 07:44:50.601751 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:50.618346 sudo[2031]: pam_unix(sudo:session): session closed for user root Jul 2 07:44:51.158459 kubelet[2014]: I0702 07:44:51.158429 2014 apiserver.go:52] "Watching apiserver" Jul 2 07:44:51.166638 kubelet[2014]: I0702 07:44:51.166610 2014 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:44:51.203331 kubelet[2014]: E0702 07:44:51.203302 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:51.203621 kubelet[2014]: E0702 
07:44:51.203605 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:51.207091 kubelet[2014]: E0702 07:44:51.207058 2014 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:44:51.207411 kubelet[2014]: E0702 07:44:51.207397 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:51.216267 kubelet[2014]: I0702 07:44:51.216239 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.216190986 podStartE2EDuration="1.216190986s" podCreationTimestamp="2024-07-02 07:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:44:51.216152181 +0000 UTC m=+1.120667886" watchObservedRunningTime="2024-07-02 07:44:51.216190986 +0000 UTC m=+1.120706671" Jul 2 07:44:51.221117 kubelet[2014]: I0702 07:44:51.221060 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.221027758 podStartE2EDuration="1.221027758s" podCreationTimestamp="2024-07-02 07:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:44:51.220872981 +0000 UTC m=+1.125388666" watchObservedRunningTime="2024-07-02 07:44:51.221027758 +0000 UTC m=+1.125543453" Jul 2 07:44:51.225694 kubelet[2014]: I0702 07:44:51.225662 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.225642214 
podStartE2EDuration="1.225642214s" podCreationTimestamp="2024-07-02 07:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:44:51.225529047 +0000 UTC m=+1.130044742" watchObservedRunningTime="2024-07-02 07:44:51.225642214 +0000 UTC m=+1.130157909" Jul 2 07:44:51.978012 sudo[1301]: pam_unix(sudo:session): session closed for user root Jul 2 07:44:51.979149 sshd[1298]: pam_unix(sshd:session): session closed for user core Jul 2 07:44:51.981459 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:45128.service: Deactivated successfully. Jul 2 07:44:51.982102 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:44:51.982232 systemd[1]: session-5.scope: Consumed 3.985s CPU time. Jul 2 07:44:51.982723 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:44:51.983372 systemd-logind[1191]: Removed session 5. Jul 2 07:44:52.204264 kubelet[2014]: E0702 07:44:52.204231 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:53.206366 kubelet[2014]: E0702 07:44:53.206330 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:54.764105 kubelet[2014]: E0702 07:44:54.764065 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:55.503784 kubelet[2014]: E0702 07:44:55.503740 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:00.260854 kubelet[2014]: E0702 07:45:00.260825 2014 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:00.303501 update_engine[1198]: I0702 07:45:00.303444 1198 update_attempter.cc:509] Updating boot flags... Jul 2 07:45:01.218575 kubelet[2014]: E0702 07:45:01.218537 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:02.219295 kubelet[2014]: E0702 07:45:02.219253 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:03.931825 kubelet[2014]: I0702 07:45:03.931784 2014 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:45:03.932203 env[1207]: time="2024-07-02T07:45:03.932067464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:45:03.932441 kubelet[2014]: I0702 07:45:03.932206 2014 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:45:04.768070 kubelet[2014]: E0702 07:45:04.768035 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.048487 kubelet[2014]: I0702 07:45:05.048371 2014 topology_manager.go:215] "Topology Admit Handler" podUID="4f2a5324-0002-4a04-878b-14cfb85b4b32" podNamespace="kube-system" podName="kube-proxy-sc6vh" Jul 2 07:45:05.052552 systemd[1]: Created slice kubepods-besteffort-pod4f2a5324_0002_4a04_878b_14cfb85b4b32.slice. 
Jul 2 07:45:05.053441 kubelet[2014]: I0702 07:45:05.053411 2014 topology_manager.go:215] "Topology Admit Handler" podUID="955ebbac-3681-434b-9b33-486b48420698" podNamespace="kube-system" podName="cilium-t2q9n" Jul 2 07:45:05.054598 kubelet[2014]: W0702 07:45:05.054573 2014 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 07:45:05.054659 kubelet[2014]: E0702 07:45:05.054601 2014 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 07:45:05.065174 kubelet[2014]: I0702 07:45:05.065135 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f2a5324-0002-4a04-878b-14cfb85b4b32-lib-modules\") pod \"kube-proxy-sc6vh\" (UID: \"4f2a5324-0002-4a04-878b-14cfb85b4b32\") " pod="kube-system/kube-proxy-sc6vh" Jul 2 07:45:05.065174 kubelet[2014]: I0702 07:45:05.065182 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-hostproc\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065348 kubelet[2014]: I0702 07:45:05.065198 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cni-path\") pod \"cilium-t2q9n\" (UID: 
\"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065348 kubelet[2014]: I0702 07:45:05.065213 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-etc-cni-netd\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065348 kubelet[2014]: I0702 07:45:05.065239 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-hubble-tls\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065348 kubelet[2014]: I0702 07:45:05.065255 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-run\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065348 kubelet[2014]: I0702 07:45:05.065272 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-net\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065348 kubelet[2014]: I0702 07:45:05.065296 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-xtables-lock\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065491 kubelet[2014]: I0702 07:45:05.065316 2014 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955ebbac-3681-434b-9b33-486b48420698-clustermesh-secrets\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065491 kubelet[2014]: I0702 07:45:05.065337 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2cnp\" (UniqueName: \"kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-kube-api-access-x2cnp\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065491 kubelet[2014]: I0702 07:45:05.065353 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-bpf-maps\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065491 kubelet[2014]: I0702 07:45:05.065372 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-lib-modules\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065491 kubelet[2014]: I0702 07:45:05.065390 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955ebbac-3681-434b-9b33-486b48420698-cilium-config-path\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065491 kubelet[2014]: I0702 07:45:05.065406 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/4f2a5324-0002-4a04-878b-14cfb85b4b32-xtables-lock\") pod \"kube-proxy-sc6vh\" (UID: \"4f2a5324-0002-4a04-878b-14cfb85b4b32\") " pod="kube-system/kube-proxy-sc6vh" Jul 2 07:45:05.065680 kubelet[2014]: I0702 07:45:05.065427 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtcvt\" (UniqueName: \"kubernetes.io/projected/4f2a5324-0002-4a04-878b-14cfb85b4b32-kube-api-access-jtcvt\") pod \"kube-proxy-sc6vh\" (UID: \"4f2a5324-0002-4a04-878b-14cfb85b4b32\") " pod="kube-system/kube-proxy-sc6vh" Jul 2 07:45:05.065680 kubelet[2014]: I0702 07:45:05.065444 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-kernel\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065680 kubelet[2014]: I0702 07:45:05.065461 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-cgroup\") pod \"cilium-t2q9n\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") " pod="kube-system/cilium-t2q9n" Jul 2 07:45:05.065680 kubelet[2014]: I0702 07:45:05.065480 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f2a5324-0002-4a04-878b-14cfb85b4b32-kube-proxy\") pod \"kube-proxy-sc6vh\" (UID: \"4f2a5324-0002-4a04-878b-14cfb85b4b32\") " pod="kube-system/kube-proxy-sc6vh" Jul 2 07:45:05.070762 systemd[1]: Created slice kubepods-burstable-pod955ebbac_3681_434b_9b33_486b48420698.slice. 
Jul 2 07:45:05.079087 kubelet[2014]: I0702 07:45:05.079045 2014 topology_manager.go:215] "Topology Admit Handler" podUID="d997b1ee-7b07-4b6c-8ba8-f294efdff825" podNamespace="kube-system" podName="cilium-operator-5cc964979-ls9sz" Jul 2 07:45:05.083879 systemd[1]: Created slice kubepods-besteffort-podd997b1ee_7b07_4b6c_8ba8_f294efdff825.slice. Jul 2 07:45:05.166499 kubelet[2014]: I0702 07:45:05.166460 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d997b1ee-7b07-4b6c-8ba8-f294efdff825-cilium-config-path\") pod \"cilium-operator-5cc964979-ls9sz\" (UID: \"d997b1ee-7b07-4b6c-8ba8-f294efdff825\") " pod="kube-system/cilium-operator-5cc964979-ls9sz" Jul 2 07:45:05.166681 kubelet[2014]: I0702 07:45:05.166595 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvz98\" (UniqueName: \"kubernetes.io/projected/d997b1ee-7b07-4b6c-8ba8-f294efdff825-kube-api-access-cvz98\") pod \"cilium-operator-5cc964979-ls9sz\" (UID: \"d997b1ee-7b07-4b6c-8ba8-f294efdff825\") " pod="kube-system/cilium-operator-5cc964979-ls9sz" Jul 2 07:45:05.373527 kubelet[2014]: E0702 07:45:05.373424 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.374030 env[1207]: time="2024-07-02T07:45:05.373982852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2q9n,Uid:955ebbac-3681-434b-9b33-486b48420698,Namespace:kube-system,Attempt:0,}" Jul 2 07:45:05.386670 kubelet[2014]: E0702 07:45:05.386635 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.387022 env[1207]: time="2024-07-02T07:45:05.386967212Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5cc964979-ls9sz,Uid:d997b1ee-7b07-4b6c-8ba8-f294efdff825,Namespace:kube-system,Attempt:0,}" Jul 2 07:45:05.390704 env[1207]: time="2024-07-02T07:45:05.390644150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:45:05.390704 env[1207]: time="2024-07-02T07:45:05.390683173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:45:05.390704 env[1207]: time="2024-07-02T07:45:05.390693142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:45:05.390966 env[1207]: time="2024-07-02T07:45:05.390861771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e pid=2127 runtime=io.containerd.runc.v2 Jul 2 07:45:05.402852 systemd[1]: Started cri-containerd-d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e.scope. Jul 2 07:45:05.413732 env[1207]: time="2024-07-02T07:45:05.403020128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:45:05.413732 env[1207]: time="2024-07-02T07:45:05.403114657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:45:05.413732 env[1207]: time="2024-07-02T07:45:05.403137731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:45:05.413732 env[1207]: time="2024-07-02T07:45:05.403295419Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e pid=2156 runtime=io.containerd.runc.v2 Jul 2 07:45:05.415458 systemd[1]: Started cri-containerd-9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e.scope. Jul 2 07:45:05.427425 env[1207]: time="2024-07-02T07:45:05.427386336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2q9n,Uid:955ebbac-3681-434b-9b33-486b48420698,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\"" Jul 2 07:45:05.428047 kubelet[2014]: E0702 07:45:05.428014 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.438460 env[1207]: time="2024-07-02T07:45:05.438413652Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:45:05.450628 env[1207]: time="2024-07-02T07:45:05.450583230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ls9sz,Uid:d997b1ee-7b07-4b6c-8ba8-f294efdff825,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e\"" Jul 2 07:45:05.451344 kubelet[2014]: E0702 07:45:05.451150 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.507904 kubelet[2014]: E0702 07:45:05.507879 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.967379 kubelet[2014]: E0702 07:45:05.967327 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:05.967850 env[1207]: time="2024-07-02T07:45:05.967809649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sc6vh,Uid:4f2a5324-0002-4a04-878b-14cfb85b4b32,Namespace:kube-system,Attempt:0,}" Jul 2 07:45:05.980979 env[1207]: time="2024-07-02T07:45:05.980922462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:45:05.980979 env[1207]: time="2024-07-02T07:45:05.980962387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:45:05.980979 env[1207]: time="2024-07-02T07:45:05.980972667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:45:05.981164 env[1207]: time="2024-07-02T07:45:05.981121067Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35e61d4647170f93063d0d449c52fc25de1ad6f98c7c080dffcaacb716a38d3d pid=2209 runtime=io.containerd.runc.v2 Jul 2 07:45:05.990827 systemd[1]: Started cri-containerd-35e61d4647170f93063d0d449c52fc25de1ad6f98c7c080dffcaacb716a38d3d.scope. 
Jul 2 07:45:06.008591 env[1207]: time="2024-07-02T07:45:06.008553999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sc6vh,Uid:4f2a5324-0002-4a04-878b-14cfb85b4b32,Namespace:kube-system,Attempt:0,} returns sandbox id \"35e61d4647170f93063d0d449c52fc25de1ad6f98c7c080dffcaacb716a38d3d\"" Jul 2 07:45:06.009250 kubelet[2014]: E0702 07:45:06.009227 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:06.010891 env[1207]: time="2024-07-02T07:45:06.010858608Z" level=info msg="CreateContainer within sandbox \"35e61d4647170f93063d0d449c52fc25de1ad6f98c7c080dffcaacb716a38d3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:45:06.025585 env[1207]: time="2024-07-02T07:45:06.025530856Z" level=info msg="CreateContainer within sandbox \"35e61d4647170f93063d0d449c52fc25de1ad6f98c7c080dffcaacb716a38d3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02e45a981c6863770c9e911097189d8a4c803454d327ac99ab62f1fa7a7a0ba7\"" Jul 2 07:45:06.025999 env[1207]: time="2024-07-02T07:45:06.025974195Z" level=info msg="StartContainer for \"02e45a981c6863770c9e911097189d8a4c803454d327ac99ab62f1fa7a7a0ba7\"" Jul 2 07:45:06.039174 systemd[1]: Started cri-containerd-02e45a981c6863770c9e911097189d8a4c803454d327ac99ab62f1fa7a7a0ba7.scope. 
Jul 2 07:45:06.064393 env[1207]: time="2024-07-02T07:45:06.064351674Z" level=info msg="StartContainer for \"02e45a981c6863770c9e911097189d8a4c803454d327ac99ab62f1fa7a7a0ba7\" returns successfully" Jul 2 07:45:06.226606 kubelet[2014]: E0702 07:45:06.225954 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:06.232268 kubelet[2014]: I0702 07:45:06.232230 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sc6vh" podStartSLOduration=2.232189319 podStartE2EDuration="2.232189319s" podCreationTimestamp="2024-07-02 07:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:45:06.231823447 +0000 UTC m=+16.136339142" watchObservedRunningTime="2024-07-02 07:45:06.232189319 +0000 UTC m=+16.136705045" Jul 2 07:45:13.661647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094656762.mount: Deactivated successfully. Jul 2 07:45:16.801913 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:48796.service. Jul 2 07:45:16.847022 sshd[2400]: Accepted publickey for core from 10.0.0.1 port 48796 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:45:16.848607 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:45:16.851983 systemd-logind[1191]: New session 6 of user core. Jul 2 07:45:16.852712 systemd[1]: Started session-6.scope. Jul 2 07:45:16.964800 sshd[2400]: pam_unix(sshd:session): session closed for user core Jul 2 07:45:16.967033 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:48796.service: Deactivated successfully. Jul 2 07:45:16.967685 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:45:16.968487 systemd-logind[1191]: Session 6 logged out. Waiting for processes to exit. 
Jul 2 07:45:16.969168 systemd-logind[1191]: Removed session 6. Jul 2 07:45:18.664268 env[1207]: time="2024-07-02T07:45:18.664217399Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:45:18.665853 env[1207]: time="2024-07-02T07:45:18.665827510Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:45:18.667357 env[1207]: time="2024-07-02T07:45:18.667317706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:45:18.667775 env[1207]: time="2024-07-02T07:45:18.667737767Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:45:18.674370 env[1207]: time="2024-07-02T07:45:18.673587931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:45:18.674370 env[1207]: time="2024-07-02T07:45:18.673640329Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:45:18.684157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828434398.mount: Deactivated successfully. 
Jul 2 07:45:18.685037 env[1207]: time="2024-07-02T07:45:18.684976711Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\"" Jul 2 07:45:18.685374 env[1207]: time="2024-07-02T07:45:18.685353942Z" level=info msg="StartContainer for \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\"" Jul 2 07:45:18.702235 systemd[1]: Started cri-containerd-792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2.scope. Jul 2 07:45:18.731224 systemd[1]: cri-containerd-792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2.scope: Deactivated successfully. Jul 2 07:45:18.984484 env[1207]: time="2024-07-02T07:45:18.984412558Z" level=info msg="StartContainer for \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\" returns successfully" Jul 2 07:45:19.114222 env[1207]: time="2024-07-02T07:45:19.114179735Z" level=info msg="shim disconnected" id=792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2 Jul 2 07:45:19.114456 env[1207]: time="2024-07-02T07:45:19.114426318Z" level=warning msg="cleaning up after shim disconnected" id=792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2 namespace=k8s.io Jul 2 07:45:19.114456 env[1207]: time="2024-07-02T07:45:19.114449262Z" level=info msg="cleaning up dead shim" Jul 2 07:45:19.120312 env[1207]: time="2024-07-02T07:45:19.120291598Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2463 runtime=io.containerd.runc.v2\n" Jul 2 07:45:19.245834 kubelet[2014]: E0702 07:45:19.245555 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:19.247092 env[1207]: 
time="2024-07-02T07:45:19.247054802Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:45:19.262923 env[1207]: time="2024-07-02T07:45:19.262860071Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\"" Jul 2 07:45:19.263369 env[1207]: time="2024-07-02T07:45:19.263331508Z" level=info msg="StartContainer for \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\"" Jul 2 07:45:19.276532 systemd[1]: Started cri-containerd-7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1.scope. Jul 2 07:45:19.303419 env[1207]: time="2024-07-02T07:45:19.303372865Z" level=info msg="StartContainer for \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\" returns successfully" Jul 2 07:45:19.311881 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:45:19.312149 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:45:19.312318 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:45:19.313760 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:45:19.314039 systemd[1]: cri-containerd-7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1.scope: Deactivated successfully. Jul 2 07:45:19.320911 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:45:19.330071 env[1207]: time="2024-07-02T07:45:19.330033035Z" level=info msg="shim disconnected" id=7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1 Jul 2 07:45:19.330071 env[1207]: time="2024-07-02T07:45:19.330072088Z" level=warning msg="cleaning up after shim disconnected" id=7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1 namespace=k8s.io Jul 2 07:45:19.330257 env[1207]: time="2024-07-02T07:45:19.330079712Z" level=info msg="cleaning up dead shim" Jul 2 07:45:19.335798 env[1207]: time="2024-07-02T07:45:19.335751969Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2522 runtime=io.containerd.runc.v2\n" Jul 2 07:45:19.682376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2-rootfs.mount: Deactivated successfully. Jul 2 07:45:20.055048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088425074.mount: Deactivated successfully. Jul 2 07:45:20.247963 kubelet[2014]: E0702 07:45:20.247934 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:20.264758 env[1207]: time="2024-07-02T07:45:20.264714033Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:45:20.276243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371889884.mount: Deactivated successfully. 
Jul 2 07:45:20.282764 env[1207]: time="2024-07-02T07:45:20.282719045Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\"" Jul 2 07:45:20.284359 env[1207]: time="2024-07-02T07:45:20.283311430Z" level=info msg="StartContainer for \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\"" Jul 2 07:45:20.299575 systemd[1]: Started cri-containerd-7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209.scope. Jul 2 07:45:20.327418 env[1207]: time="2024-07-02T07:45:20.327178498Z" level=info msg="StartContainer for \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\" returns successfully" Jul 2 07:45:20.327879 systemd[1]: cri-containerd-7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209.scope: Deactivated successfully. Jul 2 07:45:20.429840 env[1207]: time="2024-07-02T07:45:20.429786453Z" level=info msg="shim disconnected" id=7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209 Jul 2 07:45:20.430123 env[1207]: time="2024-07-02T07:45:20.430081398Z" level=warning msg="cleaning up after shim disconnected" id=7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209 namespace=k8s.io Jul 2 07:45:20.430123 env[1207]: time="2024-07-02T07:45:20.430106846Z" level=info msg="cleaning up dead shim" Jul 2 07:45:20.436078 env[1207]: time="2024-07-02T07:45:20.436022197Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Jul 2 07:45:20.663773 env[1207]: time="2024-07-02T07:45:20.663680540Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 07:45:20.665485 env[1207]: time="2024-07-02T07:45:20.665444379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:45:20.666979 env[1207]: time="2024-07-02T07:45:20.666946376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:45:20.667540 env[1207]: time="2024-07-02T07:45:20.667481132Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:45:20.669358 env[1207]: time="2024-07-02T07:45:20.669333648Z" level=info msg="CreateContainer within sandbox \"9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:45:20.680835 env[1207]: time="2024-07-02T07:45:20.680767825Z" level=info msg="CreateContainer within sandbox \"9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\"" Jul 2 07:45:20.682059 env[1207]: time="2024-07-02T07:45:20.681476247Z" level=info msg="StartContainer for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\"" Jul 2 07:45:20.698940 systemd[1]: Started cri-containerd-4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32.scope. 
Jul 2 07:45:20.720897 env[1207]: time="2024-07-02T07:45:20.720850702Z" level=info msg="StartContainer for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" returns successfully" Jul 2 07:45:21.254792 kubelet[2014]: E0702 07:45:21.254755 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:21.256879 kubelet[2014]: E0702 07:45:21.256848 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:21.258482 env[1207]: time="2024-07-02T07:45:21.258438459Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:45:21.376641 env[1207]: time="2024-07-02T07:45:21.376592872Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\"" Jul 2 07:45:21.377026 env[1207]: time="2024-07-02T07:45:21.376984488Z" level=info msg="StartContainer for \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\"" Jul 2 07:45:21.389667 kubelet[2014]: I0702 07:45:21.389630 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-ls9sz" podStartSLOduration=1.173454128 podStartE2EDuration="16.389597679s" podCreationTimestamp="2024-07-02 07:45:05 +0000 UTC" firstStartedPulling="2024-07-02 07:45:05.45159758 +0000 UTC m=+15.356113275" lastFinishedPulling="2024-07-02 07:45:20.667741131 +0000 UTC m=+30.572256826" observedRunningTime="2024-07-02 07:45:21.388999824 +0000 UTC m=+31.293515509" 
watchObservedRunningTime="2024-07-02 07:45:21.389597679 +0000 UTC m=+31.294113364" Jul 2 07:45:21.400633 systemd[1]: Started cri-containerd-9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc.scope. Jul 2 07:45:21.423843 systemd[1]: cri-containerd-9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc.scope: Deactivated successfully. Jul 2 07:45:21.425397 env[1207]: time="2024-07-02T07:45:21.425366624Z" level=info msg="StartContainer for \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\" returns successfully" Jul 2 07:45:21.447302 env[1207]: time="2024-07-02T07:45:21.447242050Z" level=info msg="shim disconnected" id=9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc Jul 2 07:45:21.447302 env[1207]: time="2024-07-02T07:45:21.447294017Z" level=warning msg="cleaning up after shim disconnected" id=9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc namespace=k8s.io Jul 2 07:45:21.447302 env[1207]: time="2024-07-02T07:45:21.447303926Z" level=info msg="cleaning up dead shim" Jul 2 07:45:21.467660 env[1207]: time="2024-07-02T07:45:21.467597065Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2670 runtime=io.containerd.runc.v2\n" Jul 2 07:45:21.682317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2623144150.mount: Deactivated successfully. Jul 2 07:45:21.968774 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:48810.service. Jul 2 07:45:22.011687 sshd[2683]: Accepted publickey for core from 10.0.0.1 port 48810 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:45:22.012857 sshd[2683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:45:22.015987 systemd-logind[1191]: New session 7 of user core. Jul 2 07:45:22.016798 systemd[1]: Started session-7.scope. 
Jul 2 07:45:22.111034 sshd[2683]: pam_unix(sshd:session): session closed for user core Jul 2 07:45:22.113265 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:48810.service: Deactivated successfully. Jul 2 07:45:22.113926 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:45:22.114406 systemd-logind[1191]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:45:22.115064 systemd-logind[1191]: Removed session 7. Jul 2 07:45:22.260909 kubelet[2014]: E0702 07:45:22.260792 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:22.261251 kubelet[2014]: E0702 07:45:22.260928 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:22.263413 env[1207]: time="2024-07-02T07:45:22.263356159Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:45:22.285106 env[1207]: time="2024-07-02T07:45:22.285035819Z" level=info msg="CreateContainer within sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\"" Jul 2 07:45:22.285695 env[1207]: time="2024-07-02T07:45:22.285652578Z" level=info msg="StartContainer for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\"" Jul 2 07:45:22.305914 systemd[1]: Started cri-containerd-dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9.scope. 
Jul 2 07:45:22.333945 env[1207]: time="2024-07-02T07:45:22.333894492Z" level=info msg="StartContainer for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" returns successfully" Jul 2 07:45:22.412534 kubelet[2014]: I0702 07:45:22.412489 2014 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:45:22.431484 kubelet[2014]: I0702 07:45:22.431449 2014 topology_manager.go:215] "Topology Admit Handler" podUID="247a82cf-9b2d-41b0-b530-83fe8294d486" podNamespace="kube-system" podName="coredns-76f75df574-nkbzm" Jul 2 07:45:22.436218 systemd[1]: Created slice kubepods-burstable-pod247a82cf_9b2d_41b0_b530_83fe8294d486.slice. Jul 2 07:45:22.440670 kubelet[2014]: I0702 07:45:22.440642 2014 topology_manager.go:215] "Topology Admit Handler" podUID="55fac083-e7db-45b0-a25f-e9f81230b187" podNamespace="kube-system" podName="coredns-76f75df574-8kfkb" Jul 2 07:45:22.445120 systemd[1]: Created slice kubepods-burstable-pod55fac083_e7db_45b0_a25f_e9f81230b187.slice. 
Jul 2 07:45:22.491988 kubelet[2014]: I0702 07:45:22.491943 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/247a82cf-9b2d-41b0-b530-83fe8294d486-config-volume\") pod \"coredns-76f75df574-nkbzm\" (UID: \"247a82cf-9b2d-41b0-b530-83fe8294d486\") " pod="kube-system/coredns-76f75df574-nkbzm" Jul 2 07:45:22.491988 kubelet[2014]: I0702 07:45:22.491986 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994js\" (UniqueName: \"kubernetes.io/projected/55fac083-e7db-45b0-a25f-e9f81230b187-kube-api-access-994js\") pod \"coredns-76f75df574-8kfkb\" (UID: \"55fac083-e7db-45b0-a25f-e9f81230b187\") " pod="kube-system/coredns-76f75df574-8kfkb" Jul 2 07:45:22.492170 kubelet[2014]: I0702 07:45:22.492006 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4s7f\" (UniqueName: \"kubernetes.io/projected/247a82cf-9b2d-41b0-b530-83fe8294d486-kube-api-access-m4s7f\") pod \"coredns-76f75df574-nkbzm\" (UID: \"247a82cf-9b2d-41b0-b530-83fe8294d486\") " pod="kube-system/coredns-76f75df574-nkbzm" Jul 2 07:45:22.492170 kubelet[2014]: I0702 07:45:22.492023 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55fac083-e7db-45b0-a25f-e9f81230b187-config-volume\") pod \"coredns-76f75df574-8kfkb\" (UID: \"55fac083-e7db-45b0-a25f-e9f81230b187\") " pod="kube-system/coredns-76f75df574-8kfkb" Jul 2 07:45:22.739677 kubelet[2014]: E0702 07:45:22.739643 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:22.740223 env[1207]: time="2024-07-02T07:45:22.740191416Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-nkbzm,Uid:247a82cf-9b2d-41b0-b530-83fe8294d486,Namespace:kube-system,Attempt:0,}" Jul 2 07:45:22.749070 kubelet[2014]: E0702 07:45:22.749037 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:22.749440 env[1207]: time="2024-07-02T07:45:22.749413122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8kfkb,Uid:55fac083-e7db-45b0-a25f-e9f81230b187,Namespace:kube-system,Attempt:0,}" Jul 2 07:45:23.264078 kubelet[2014]: E0702 07:45:23.264055 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:23.276644 kubelet[2014]: I0702 07:45:23.274974 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t2q9n" podStartSLOduration=6.043691194 podStartE2EDuration="19.274935803s" podCreationTimestamp="2024-07-02 07:45:04 +0000 UTC" firstStartedPulling="2024-07-02 07:45:05.437550549 +0000 UTC m=+15.342066254" lastFinishedPulling="2024-07-02 07:45:18.668795168 +0000 UTC m=+28.573310863" observedRunningTime="2024-07-02 07:45:23.27449801 +0000 UTC m=+33.179013705" watchObservedRunningTime="2024-07-02 07:45:23.274935803 +0000 UTC m=+33.179451498" Jul 2 07:45:24.243398 systemd-networkd[1022]: cilium_host: Link UP Jul 2 07:45:24.246696 systemd-networkd[1022]: cilium_net: Link UP Jul 2 07:45:24.246970 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 07:45:24.247022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:45:24.247144 systemd-networkd[1022]: cilium_net: Gained carrier Jul 2 07:45:24.247336 systemd-networkd[1022]: cilium_host: Gained carrier Jul 2 07:45:24.266181 kubelet[2014]: E0702 07:45:24.266086 2014 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:24.281586 systemd-networkd[1022]: cilium_net: Gained IPv6LL Jul 2 07:45:24.317018 systemd-networkd[1022]: cilium_vxlan: Link UP Jul 2 07:45:24.317028 systemd-networkd[1022]: cilium_vxlan: Gained carrier Jul 2 07:45:24.496538 kernel: NET: Registered PF_ALG protocol family Jul 2 07:45:25.020615 systemd-networkd[1022]: lxc_health: Link UP Jul 2 07:45:25.028320 systemd-networkd[1022]: lxc_health: Gained carrier Jul 2 07:45:25.028528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:45:25.106639 systemd-networkd[1022]: cilium_host: Gained IPv6LL Jul 2 07:45:25.267532 kubelet[2014]: E0702 07:45:25.267491 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:25.530544 systemd-networkd[1022]: lxcf0c3cd67945e: Link UP Jul 2 07:45:25.536550 kernel: eth0: renamed from tmpe28ec Jul 2 07:45:25.547332 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:45:25.547418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf0c3cd67945e: link becomes ready Jul 2 07:45:25.548081 systemd-networkd[1022]: lxcf0c3cd67945e: Gained carrier Jul 2 07:45:25.549115 systemd-networkd[1022]: lxc954566a3a5b7: Link UP Jul 2 07:45:25.561529 kernel: eth0: renamed from tmpb1047 Jul 2 07:45:25.569993 systemd-networkd[1022]: lxc954566a3a5b7: Gained carrier Jul 2 07:45:25.570537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc954566a3a5b7: link becomes ready Jul 2 07:45:26.269485 kubelet[2014]: E0702 07:45:26.269454 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:26.322715 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL Jul 2 07:45:26.770672 
systemd-networkd[1022]: lxcf0c3cd67945e: Gained IPv6LL Jul 2 07:45:26.962633 systemd-networkd[1022]: lxc_health: Gained IPv6LL Jul 2 07:45:27.115116 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:44332.service. Jul 2 07:45:27.158436 sshd[3242]: Accepted publickey for core from 10.0.0.1 port 44332 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:45:27.160274 sshd[3242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:45:27.163430 systemd-logind[1191]: New session 8 of user core. Jul 2 07:45:27.164191 systemd[1]: Started session-8.scope. Jul 2 07:45:27.265913 sshd[3242]: pam_unix(sshd:session): session closed for user core Jul 2 07:45:27.268392 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:44332.service: Deactivated successfully. Jul 2 07:45:27.269045 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:45:27.269572 systemd-logind[1191]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:45:27.270464 systemd-logind[1191]: Removed session 8. Jul 2 07:45:27.271246 kubelet[2014]: E0702 07:45:27.271224 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:27.474667 systemd-networkd[1022]: lxc954566a3a5b7: Gained IPv6LL Jul 2 07:45:28.905768 env[1207]: time="2024-07-02T07:45:28.905669923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:45:28.905768 env[1207]: time="2024-07-02T07:45:28.905719818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:45:28.905768 env[1207]: time="2024-07-02T07:45:28.905730247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:45:28.906240 env[1207]: time="2024-07-02T07:45:28.905871702Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1047fc4e8e226d70de86d9a510f12d7c7a4c571e83cfc59a7faa35023954a7e pid=3273 runtime=io.containerd.runc.v2 Jul 2 07:45:28.915920 env[1207]: time="2024-07-02T07:45:28.915854603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:45:28.915920 env[1207]: time="2024-07-02T07:45:28.915882205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:45:28.915920 env[1207]: time="2024-07-02T07:45:28.915902533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:45:28.916146 env[1207]: time="2024-07-02T07:45:28.916081348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e28ec13b05b86d869e977aaccb47ba277cda8712fe9dea9672e338269c196041 pid=3295 runtime=io.containerd.runc.v2 Jul 2 07:45:28.923222 systemd[1]: Started cri-containerd-b1047fc4e8e226d70de86d9a510f12d7c7a4c571e83cfc59a7faa35023954a7e.scope. Jul 2 07:45:28.937867 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:45:28.939028 systemd[1]: Started cri-containerd-e28ec13b05b86d869e977aaccb47ba277cda8712fe9dea9672e338269c196041.scope. 
Jul 2 07:45:28.951993 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:45:28.962449 env[1207]: time="2024-07-02T07:45:28.962415859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nkbzm,Uid:247a82cf-9b2d-41b0-b530-83fe8294d486,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1047fc4e8e226d70de86d9a510f12d7c7a4c571e83cfc59a7faa35023954a7e\"" Jul 2 07:45:28.963526 kubelet[2014]: E0702 07:45:28.963101 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:28.965872 env[1207]: time="2024-07-02T07:45:28.965846306Z" level=info msg="CreateContainer within sandbox \"b1047fc4e8e226d70de86d9a510f12d7c7a4c571e83cfc59a7faa35023954a7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:45:28.981822 env[1207]: time="2024-07-02T07:45:28.981772966Z" level=info msg="CreateContainer within sandbox \"b1047fc4e8e226d70de86d9a510f12d7c7a4c571e83cfc59a7faa35023954a7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"463adf65af82947312c02fbc1ca6793b47e7743557cbcc1749dbd973a5ceab89\"" Jul 2 07:45:28.982562 env[1207]: time="2024-07-02T07:45:28.982544706Z" level=info msg="StartContainer for \"463adf65af82947312c02fbc1ca6793b47e7743557cbcc1749dbd973a5ceab89\"" Jul 2 07:45:28.985362 env[1207]: time="2024-07-02T07:45:28.985328196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8kfkb,Uid:55fac083-e7db-45b0-a25f-e9f81230b187,Namespace:kube-system,Attempt:0,} returns sandbox id \"e28ec13b05b86d869e977aaccb47ba277cda8712fe9dea9672e338269c196041\"" Jul 2 07:45:28.986284 kubelet[2014]: E0702 07:45:28.985931 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 2 07:45:28.987862 env[1207]: time="2024-07-02T07:45:28.987842341Z" level=info msg="CreateContainer within sandbox \"e28ec13b05b86d869e977aaccb47ba277cda8712fe9dea9672e338269c196041\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:45:29.000438 env[1207]: time="2024-07-02T07:45:29.000387626Z" level=info msg="CreateContainer within sandbox \"e28ec13b05b86d869e977aaccb47ba277cda8712fe9dea9672e338269c196041\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f367fe19a740a2cf5ebe03b95d2c97717d8c0262d046c33c56dbb76823b2115b\"" Jul 2 07:45:29.001326 systemd[1]: Started cri-containerd-463adf65af82947312c02fbc1ca6793b47e7743557cbcc1749dbd973a5ceab89.scope. Jul 2 07:45:29.001887 env[1207]: time="2024-07-02T07:45:29.001859892Z" level=info msg="StartContainer for \"f367fe19a740a2cf5ebe03b95d2c97717d8c0262d046c33c56dbb76823b2115b\"" Jul 2 07:45:29.025370 systemd[1]: Started cri-containerd-f367fe19a740a2cf5ebe03b95d2c97717d8c0262d046c33c56dbb76823b2115b.scope. 
Jul 2 07:45:29.030821 env[1207]: time="2024-07-02T07:45:29.030769381Z" level=info msg="StartContainer for \"463adf65af82947312c02fbc1ca6793b47e7743557cbcc1749dbd973a5ceab89\" returns successfully" Jul 2 07:45:29.055643 env[1207]: time="2024-07-02T07:45:29.055597953Z" level=info msg="StartContainer for \"f367fe19a740a2cf5ebe03b95d2c97717d8c0262d046c33c56dbb76823b2115b\" returns successfully" Jul 2 07:45:29.274875 kubelet[2014]: E0702 07:45:29.274846 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:29.276081 kubelet[2014]: E0702 07:45:29.275996 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:29.408597 kubelet[2014]: I0702 07:45:29.408560 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8kfkb" podStartSLOduration=24.408493176 podStartE2EDuration="24.408493176s" podCreationTimestamp="2024-07-02 07:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:45:29.31378827 +0000 UTC m=+39.218303965" watchObservedRunningTime="2024-07-02 07:45:29.408493176 +0000 UTC m=+39.313008871" Jul 2 07:45:29.479440 kubelet[2014]: I0702 07:45:29.479403 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nkbzm" podStartSLOduration=24.479356023 podStartE2EDuration="24.479356023s" podCreationTimestamp="2024-07-02 07:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:45:29.408752643 +0000 UTC m=+39.313268338" watchObservedRunningTime="2024-07-02 07:45:29.479356023 +0000 UTC m=+39.383871718" Jul 2 
07:45:29.909981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202209173.mount: Deactivated successfully. Jul 2 07:45:30.278106 kubelet[2014]: E0702 07:45:30.278068 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:30.278587 kubelet[2014]: E0702 07:45:30.278556 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:31.280030 kubelet[2014]: E0702 07:45:31.279988 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:31.280353 kubelet[2014]: E0702 07:45:31.280151 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:45:32.270427 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:44346.service. Jul 2 07:45:32.314222 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 44346 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:45:32.315436 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:45:32.318710 systemd-logind[1191]: New session 9 of user core. Jul 2 07:45:32.319449 systemd[1]: Started session-9.scope. Jul 2 07:45:32.419606 sshd[3436]: pam_unix(sshd:session): session closed for user core Jul 2 07:45:32.421626 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:44346.service: Deactivated successfully. Jul 2 07:45:32.422328 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:45:32.423010 systemd-logind[1191]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:45:32.423743 systemd-logind[1191]: Removed session 9. 
Jul 2 07:45:37.424277 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:34864.service. Jul 2 07:45:37.464642 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:45:37.465613 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:45:37.468552 systemd-logind[1191]: New session 10 of user core. Jul 2 07:45:37.469241 systemd[1]: Started session-10.scope. Jul 2 07:45:37.564904 sshd[3453]: pam_unix(sshd:session): session closed for user core Jul 2 07:45:37.567686 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:34864.service: Deactivated successfully. Jul 2 07:45:37.568315 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:45:37.568905 systemd-logind[1191]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:45:37.570033 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:34870.service. Jul 2 07:45:37.570836 systemd-logind[1191]: Removed session 10. Jul 2 07:45:37.609963 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 34870 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:45:37.610760 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:45:37.613524 systemd-logind[1191]: New session 11 of user core. Jul 2 07:45:37.614220 systemd[1]: Started session-11.scope. Jul 2 07:45:37.747132 sshd[3467]: pam_unix(sshd:session): session closed for user core Jul 2 07:45:37.750390 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:34872.service. Jul 2 07:45:37.752089 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:34870.service: Deactivated successfully. Jul 2 07:45:37.752684 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:45:37.753150 systemd-logind[1191]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:45:37.753860 systemd-logind[1191]: Removed session 11. 
Jul 2 07:45:37.797058 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:37.798209 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:37.801623 systemd-logind[1191]: New session 12 of user core.
Jul 2 07:45:37.802337 systemd[1]: Started session-12.scope.
Jul 2 07:45:37.901359 sshd[3477]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:37.903492 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:34872.service: Deactivated successfully.
Jul 2 07:45:37.904308 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 07:45:37.905234 systemd-logind[1191]: Session 12 logged out. Waiting for processes to exit.
Jul 2 07:45:37.905985 systemd-logind[1191]: Removed session 12.
Jul 2 07:45:42.905850 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:36910.service.
Jul 2 07:45:42.946287 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:42.947279 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:42.950622 systemd-logind[1191]: New session 13 of user core.
Jul 2 07:45:42.951342 systemd[1]: Started session-13.scope.
Jul 2 07:45:43.054109 sshd[3493]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:43.056529 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:36910.service: Deactivated successfully.
Jul 2 07:45:43.057189 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 07:45:43.057677 systemd-logind[1191]: Session 13 logged out. Waiting for processes to exit.
Jul 2 07:45:43.058256 systemd-logind[1191]: Removed session 13.
Jul 2 07:45:48.058597 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:36922.service.
Jul 2 07:45:48.106546 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 36922 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:48.108025 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:48.111757 systemd-logind[1191]: New session 14 of user core.
Jul 2 07:45:48.112568 systemd[1]: Started session-14.scope.
Jul 2 07:45:48.224076 sshd[3506]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:48.227041 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:36922.service: Deactivated successfully.
Jul 2 07:45:48.227584 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 07:45:48.228152 systemd-logind[1191]: Session 14 logged out. Waiting for processes to exit.
Jul 2 07:45:48.229606 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:36932.service.
Jul 2 07:45:48.230867 systemd-logind[1191]: Removed session 14.
Jul 2 07:45:48.273276 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 36932 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:48.274671 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:48.278490 systemd-logind[1191]: New session 15 of user core.
Jul 2 07:45:48.279264 systemd[1]: Started session-15.scope.
Jul 2 07:45:48.429875 sshd[3519]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:48.432354 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:36932.service: Deactivated successfully.
Jul 2 07:45:48.432860 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 07:45:48.433363 systemd-logind[1191]: Session 15 logged out. Waiting for processes to exit.
Jul 2 07:45:48.434167 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:36946.service.
Jul 2 07:45:48.434900 systemd-logind[1191]: Removed session 15.
Jul 2 07:45:48.477372 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:48.478434 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:48.482198 systemd-logind[1191]: New session 16 of user core.
Jul 2 07:45:48.482994 systemd[1]: Started session-16.scope.
Jul 2 07:45:49.682469 sshd[3530]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:49.684701 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:36962.service.
Jul 2 07:45:49.685891 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:36946.service: Deactivated successfully.
Jul 2 07:45:49.686673 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 07:45:49.687294 systemd-logind[1191]: Session 16 logged out. Waiting for processes to exit.
Jul 2 07:45:49.688108 systemd-logind[1191]: Removed session 16.
Jul 2 07:45:49.727623 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:49.728588 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:49.731646 systemd-logind[1191]: New session 17 of user core.
Jul 2 07:45:49.732332 systemd[1]: Started session-17.scope.
Jul 2 07:45:49.925291 sshd[3548]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:49.928319 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:36962.service: Deactivated successfully.
Jul 2 07:45:49.928954 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 07:45:49.929677 systemd-logind[1191]: Session 17 logged out. Waiting for processes to exit.
Jul 2 07:45:49.930740 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:36972.service.
Jul 2 07:45:49.931655 systemd-logind[1191]: Removed session 17.
Jul 2 07:45:49.972994 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 36972 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:49.974019 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:49.977270 systemd-logind[1191]: New session 18 of user core.
Jul 2 07:45:49.978179 systemd[1]: Started session-18.scope.
Jul 2 07:45:50.086696 sshd[3560]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:50.089220 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:36972.service: Deactivated successfully.
Jul 2 07:45:50.090087 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 07:45:50.090766 systemd-logind[1191]: Session 18 logged out. Waiting for processes to exit.
Jul 2 07:45:50.091575 systemd-logind[1191]: Removed session 18.
Jul 2 07:45:55.092186 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:54658.service.
Jul 2 07:45:55.132687 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 54658 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:45:55.133926 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:45:55.137254 systemd-logind[1191]: New session 19 of user core.
Jul 2 07:45:55.138221 systemd[1]: Started session-19.scope.
Jul 2 07:45:55.235519 sshd[3575]: pam_unix(sshd:session): session closed for user core
Jul 2 07:45:55.237464 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:54658.service: Deactivated successfully.
Jul 2 07:45:55.238153 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 07:45:55.238754 systemd-logind[1191]: Session 19 logged out. Waiting for processes to exit.
Jul 2 07:45:55.239631 systemd-logind[1191]: Removed session 19.
Jul 2 07:46:00.239461 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:54668.service.
Jul 2 07:46:00.279888 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:46:00.280889 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:46:00.284105 systemd-logind[1191]: New session 20 of user core.
Jul 2 07:46:00.285070 systemd[1]: Started session-20.scope.
Jul 2 07:46:00.380272 sshd[3591]: pam_unix(sshd:session): session closed for user core
Jul 2 07:46:00.382117 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:54668.service: Deactivated successfully.
Jul 2 07:46:00.382735 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 07:46:00.383389 systemd-logind[1191]: Session 20 logged out. Waiting for processes to exit.
Jul 2 07:46:00.383925 systemd-logind[1191]: Removed session 20.
Jul 2 07:46:05.384379 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:58166.service.
Jul 2 07:46:05.424197 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 58166 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:46:05.425184 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:46:05.428212 systemd-logind[1191]: New session 21 of user core.
Jul 2 07:46:05.429111 systemd[1]: Started session-21.scope.
Jul 2 07:46:05.522929 sshd[3604]: pam_unix(sshd:session): session closed for user core
Jul 2 07:46:05.524964 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:58166.service: Deactivated successfully.
Jul 2 07:46:05.525607 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 07:46:05.526097 systemd-logind[1191]: Session 21 logged out. Waiting for processes to exit.
Jul 2 07:46:05.526684 systemd-logind[1191]: Removed session 21.
Jul 2 07:46:10.197399 kubelet[2014]: E0702 07:46:10.197360 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:46:10.527888 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:58174.service.
Jul 2 07:46:10.568425 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 58174 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:46:10.569420 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:46:10.572579 systemd-logind[1191]: New session 22 of user core.
Jul 2 07:46:10.573434 systemd[1]: Started session-22.scope.
Jul 2 07:46:10.675578 sshd[3619]: pam_unix(sshd:session): session closed for user core
Jul 2 07:46:10.678718 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:58174.service: Deactivated successfully.
Jul 2 07:46:10.679248 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 07:46:10.681573 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:58188.service.
Jul 2 07:46:10.682247 systemd-logind[1191]: Session 22 logged out. Waiting for processes to exit.
Jul 2 07:46:10.683320 systemd-logind[1191]: Removed session 22.
Jul 2 07:46:10.724131 sshd[3632]: Accepted publickey for core from 10.0.0.1 port 58188 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:46:10.725429 sshd[3632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:46:10.728621 systemd-logind[1191]: New session 23 of user core.
Jul 2 07:46:10.729451 systemd[1]: Started session-23.scope.
Jul 2 07:46:12.191005 env[1207]: time="2024-07-02T07:46:12.190956695Z" level=info msg="StopContainer for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" with timeout 30 (s)"
Jul 2 07:46:12.198377 env[1207]: time="2024-07-02T07:46:12.191777644Z" level=info msg="Stop container \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" with signal terminated"
Jul 2 07:46:12.209095 systemd[1]: run-containerd-runc-k8s.io-dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9-runc.Hp0eHM.mount: Deactivated successfully.
Jul 2 07:46:12.213873 systemd[1]: cri-containerd-4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32.scope: Deactivated successfully.
Jul 2 07:46:12.221019 env[1207]: time="2024-07-02T07:46:12.220950869Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 07:46:12.226535 env[1207]: time="2024-07-02T07:46:12.226483421Z" level=info msg="StopContainer for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" with timeout 2 (s)"
Jul 2 07:46:12.226737 env[1207]: time="2024-07-02T07:46:12.226698491Z" level=info msg="Stop container \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" with signal terminated"
Jul 2 07:46:12.232339 systemd-networkd[1022]: lxc_health: Link DOWN
Jul 2 07:46:12.232346 systemd-networkd[1022]: lxc_health: Lost carrier
Jul 2 07:46:12.234148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32-rootfs.mount: Deactivated successfully.
Jul 2 07:46:12.241420 env[1207]: time="2024-07-02T07:46:12.241363834Z" level=info msg="shim disconnected" id=4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32
Jul 2 07:46:12.241420 env[1207]: time="2024-07-02T07:46:12.241418428Z" level=warning msg="cleaning up after shim disconnected" id=4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32 namespace=k8s.io
Jul 2 07:46:12.241621 env[1207]: time="2024-07-02T07:46:12.241427506Z" level=info msg="cleaning up dead shim"
Jul 2 07:46:12.248470 env[1207]: time="2024-07-02T07:46:12.248408315Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\n"
Jul 2 07:46:12.252011 env[1207]: time="2024-07-02T07:46:12.251963328Z" level=info msg="StopContainer for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" returns successfully"
Jul 2 07:46:12.252588 env[1207]: time="2024-07-02T07:46:12.252557684Z" level=info msg="StopPodSandbox for \"9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e\""
Jul 2 07:46:12.252641 env[1207]: time="2024-07-02T07:46:12.252626366Z" level=info msg="Container to stop \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:46:12.254369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e-shm.mount: Deactivated successfully.
Jul 2 07:46:12.260100 systemd[1]: cri-containerd-9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e.scope: Deactivated successfully.
Jul 2 07:46:12.265992 systemd[1]: cri-containerd-dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9.scope: Deactivated successfully.
Jul 2 07:46:12.266209 systemd[1]: cri-containerd-dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9.scope: Consumed 6.055s CPU time.
Jul 2 07:46:12.278041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e-rootfs.mount: Deactivated successfully.
Jul 2 07:46:12.283394 env[1207]: time="2024-07-02T07:46:12.283354452Z" level=info msg="shim disconnected" id=9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e
Jul 2 07:46:12.283895 env[1207]: time="2024-07-02T07:46:12.283726693Z" level=warning msg="cleaning up after shim disconnected" id=9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e namespace=k8s.io
Jul 2 07:46:12.283895 env[1207]: time="2024-07-02T07:46:12.283740860Z" level=info msg="cleaning up dead shim"
Jul 2 07:46:12.287444 env[1207]: time="2024-07-02T07:46:12.287407997Z" level=info msg="shim disconnected" id=dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9
Jul 2 07:46:12.287579 env[1207]: time="2024-07-02T07:46:12.287446611Z" level=warning msg="cleaning up after shim disconnected" id=dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9 namespace=k8s.io
Jul 2 07:46:12.287579 env[1207]: time="2024-07-02T07:46:12.287458714Z" level=info msg="cleaning up dead shim"
Jul 2 07:46:12.290268 env[1207]: time="2024-07-02T07:46:12.290242444Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3733 runtime=io.containerd.runc.v2\n"
Jul 2 07:46:12.290519 env[1207]: time="2024-07-02T07:46:12.290477292Z" level=info msg="TearDown network for sandbox \"9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e\" successfully"
Jul 2 07:46:12.290519 env[1207]: time="2024-07-02T07:46:12.290500256Z" level=info msg="StopPodSandbox for \"9bdf9ec15a2be66dec66ff863f45cf2286841115c2e9c2507c52458825aff83e\" returns successfully"
Jul 2 07:46:12.295670 env[1207]: time="2024-07-02T07:46:12.295635999Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\n"
Jul 2 07:46:12.298739 env[1207]: time="2024-07-02T07:46:12.298704402Z" level=info msg="StopContainer for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" returns successfully"
Jul 2 07:46:12.298970 env[1207]: time="2024-07-02T07:46:12.298937779Z" level=info msg="StopPodSandbox for \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\""
Jul 2 07:46:12.299004 env[1207]: time="2024-07-02T07:46:12.298988485Z" level=info msg="Container to stop \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:46:12.299004 env[1207]: time="2024-07-02T07:46:12.299000549Z" level=info msg="Container to stop \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:46:12.299054 env[1207]: time="2024-07-02T07:46:12.299009235Z" level=info msg="Container to stop \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:46:12.299054 env[1207]: time="2024-07-02T07:46:12.299018463Z" level=info msg="Container to stop \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:46:12.299054 env[1207]: time="2024-07-02T07:46:12.299026809Z" level=info msg="Container to stop \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:46:12.304521 systemd[1]: cri-containerd-d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e.scope: Deactivated successfully.
Jul 2 07:46:12.331381 env[1207]: time="2024-07-02T07:46:12.331316198Z" level=info msg="shim disconnected" id=d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e
Jul 2 07:46:12.331381 env[1207]: time="2024-07-02T07:46:12.331377376Z" level=warning msg="cleaning up after shim disconnected" id=d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e namespace=k8s.io
Jul 2 07:46:12.331381 env[1207]: time="2024-07-02T07:46:12.331387234Z" level=info msg="cleaning up dead shim"
Jul 2 07:46:12.337730 env[1207]: time="2024-07-02T07:46:12.337688415Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3776 runtime=io.containerd.runc.v2\n"
Jul 2 07:46:12.337995 env[1207]: time="2024-07-02T07:46:12.337962529Z" level=info msg="TearDown network for sandbox \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" successfully"
Jul 2 07:46:12.337995 env[1207]: time="2024-07-02T07:46:12.337986644Z" level=info msg="StopPodSandbox for \"d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e\" returns successfully"
Jul 2 07:46:12.348891 kubelet[2014]: I0702 07:46:12.348831 2014 scope.go:117] "RemoveContainer" containerID="4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32"
Jul 2 07:46:12.349981 env[1207]: time="2024-07-02T07:46:12.349952610Z" level=info msg="RemoveContainer for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\""
Jul 2 07:46:12.354521 kubelet[2014]: I0702 07:46:12.351358 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-hubble-tls\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354521 kubelet[2014]: I0702 07:46:12.351383 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-hostproc\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354521 kubelet[2014]: I0702 07:46:12.351398 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cni-path\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354521 kubelet[2014]: I0702 07:46:12.351415 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvz98\" (UniqueName: \"kubernetes.io/projected/d997b1ee-7b07-4b6c-8ba8-f294efdff825-kube-api-access-cvz98\") pod \"d997b1ee-7b07-4b6c-8ba8-f294efdff825\" (UID: \"d997b1ee-7b07-4b6c-8ba8-f294efdff825\") "
Jul 2 07:46:12.354521 kubelet[2014]: I0702 07:46:12.351435 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955ebbac-3681-434b-9b33-486b48420698-cilium-config-path\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354521 kubelet[2014]: I0702 07:46:12.351449 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-kernel\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354871 kubelet[2014]: I0702 07:46:12.351444 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-hostproc" (OuterVolumeSpecName: "hostproc") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.354871 kubelet[2014]: I0702 07:46:12.351465 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-run\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354871 kubelet[2014]: I0702 07:46:12.351484 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.354871 kubelet[2014]: I0702 07:46:12.351525 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-xtables-lock\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354871 kubelet[2014]: I0702 07:46:12.351553 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2cnp\" (UniqueName: \"kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-kube-api-access-x2cnp\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.354871 kubelet[2014]: I0702 07:46:12.351574 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-net\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.355017 kubelet[2014]: I0702 07:46:12.351589 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-cgroup\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.355017 kubelet[2014]: I0702 07:46:12.351605 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-bpf-maps\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.355017 kubelet[2014]: I0702 07:46:12.351623 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-lib-modules\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.355017 kubelet[2014]: I0702 07:46:12.351648 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d997b1ee-7b07-4b6c-8ba8-f294efdff825-cilium-config-path\") pod \"d997b1ee-7b07-4b6c-8ba8-f294efdff825\" (UID: \"d997b1ee-7b07-4b6c-8ba8-f294efdff825\") "
Jul 2 07:46:12.355017 kubelet[2014]: I0702 07:46:12.351665 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-etc-cni-netd\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.355017 kubelet[2014]: I0702 07:46:12.351693 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955ebbac-3681-434b-9b33-486b48420698-clustermesh-secrets\") pod \"955ebbac-3681-434b-9b33-486b48420698\" (UID: \"955ebbac-3681-434b-9b33-486b48420698\") "
Jul 2 07:46:12.355732 kubelet[2014]: I0702 07:46:12.351730 2014 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 07:46:12.355732 kubelet[2014]: I0702 07:46:12.352736 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355732 kubelet[2014]: I0702 07:46:12.353202 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955ebbac-3681-434b-9b33-486b48420698-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 07:46:12.355732 kubelet[2014]: I0702 07:46:12.353230 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355732 kubelet[2014]: I0702 07:46:12.353350 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cni-path" (OuterVolumeSpecName: "cni-path") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355853 kubelet[2014]: I0702 07:46:12.353372 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355853 kubelet[2014]: I0702 07:46:12.353386 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355853 kubelet[2014]: I0702 07:46:12.353400 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355853 kubelet[2014]: I0702 07:46:12.353411 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.355853 kubelet[2014]: I0702 07:46:12.353424 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:46:12.356887 kubelet[2014]: I0702 07:46:12.356863 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d997b1ee-7b07-4b6c-8ba8-f294efdff825-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d997b1ee-7b07-4b6c-8ba8-f294efdff825" (UID: "d997b1ee-7b07-4b6c-8ba8-f294efdff825"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 07:46:12.357361 kubelet[2014]: I0702 07:46:12.357313 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955ebbac-3681-434b-9b33-486b48420698-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 07:46:12.357496 kubelet[2014]: I0702 07:46:12.357462 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d997b1ee-7b07-4b6c-8ba8-f294efdff825-kube-api-access-cvz98" (OuterVolumeSpecName: "kube-api-access-cvz98") pod "d997b1ee-7b07-4b6c-8ba8-f294efdff825" (UID: "d997b1ee-7b07-4b6c-8ba8-f294efdff825"). InnerVolumeSpecName "kube-api-access-cvz98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 07:46:12.358609 kubelet[2014]: I0702 07:46:12.358572 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 07:46:12.358733 env[1207]: time="2024-07-02T07:46:12.358700204Z" level=info msg="RemoveContainer for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" returns successfully"
Jul 2 07:46:12.358984 kubelet[2014]: I0702 07:46:12.358959 2014 scope.go:117] "RemoveContainer" containerID="4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32"
Jul 2 07:46:12.359070 kubelet[2014]: I0702 07:46:12.359037 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-kube-api-access-x2cnp" (OuterVolumeSpecName: "kube-api-access-x2cnp") pod "955ebbac-3681-434b-9b33-486b48420698" (UID: "955ebbac-3681-434b-9b33-486b48420698"). InnerVolumeSpecName "kube-api-access-x2cnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 07:46:12.359208 env[1207]: time="2024-07-02T07:46:12.359148911Z" level=error msg="ContainerStatus for \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\": not found"
Jul 2 07:46:12.359357 kubelet[2014]: E0702 07:46:12.359329 2014 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\": not found" containerID="4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32"
Jul 2 07:46:12.359457 kubelet[2014]: I0702 07:46:12.359427 2014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32"} err="failed to get container status \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c86057d82d4a3e26e56ae2365e0b9a0a89a6f6cd589aa4930406ee74f70da32\": not found"
Jul 2 07:46:12.359457 kubelet[2014]: I0702 07:46:12.359450 2014 scope.go:117] "RemoveContainer" containerID="dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9"
Jul 2 07:46:12.360850 env[1207]: time="2024-07-02T07:46:12.360822870Z" level=info msg="RemoveContainer for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\""
Jul 2 07:46:12.363776 env[1207]: time="2024-07-02T07:46:12.363741197Z" level=info msg="RemoveContainer for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" returns successfully"
Jul 2 07:46:12.363889 kubelet[2014]: I0702 07:46:12.363864 2014 scope.go:117] "RemoveContainer" containerID="9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc"
Jul 2 07:46:12.364587 env[1207]: time="2024-07-02T07:46:12.364564220Z" level=info msg="RemoveContainer for \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\""
Jul 2 07:46:12.367153 env[1207]: time="2024-07-02T07:46:12.367130733Z" level=info msg="RemoveContainer for \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\" returns successfully"
Jul 2 07:46:12.367267 kubelet[2014]: I0702 07:46:12.367244 2014 scope.go:117] "RemoveContainer" containerID="7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209"
Jul 2 07:46:12.368087 env[1207]: time="2024-07-02T07:46:12.368059689Z" level=info msg="RemoveContainer for \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\""
Jul 2 07:46:12.371351 env[1207]: time="2024-07-02T07:46:12.371326070Z" level=info msg="RemoveContainer for \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\" returns successfully"
Jul 2 07:46:12.371523 kubelet[2014]: I0702 07:46:12.371486 2014 scope.go:117] "RemoveContainer" containerID="7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1"
Jul 2 07:46:12.372419 env[1207]: time="2024-07-02T07:46:12.372393369Z" level=info msg="RemoveContainer for \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\""
Jul 2 07:46:12.375864 env[1207]: time="2024-07-02T07:46:12.375828463Z" level=info msg="RemoveContainer for \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\" returns successfully"
Jul 2 07:46:12.375991 kubelet[2014]: I0702 07:46:12.375972 2014 scope.go:117] "RemoveContainer" containerID="792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2"
Jul 2 07:46:12.376732 env[1207]: time="2024-07-02T07:46:12.376705248Z" level=info msg="RemoveContainer for \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\""
Jul 2 07:46:12.379201 env[1207]: time="2024-07-02T07:46:12.379174918Z" level=info
msg="RemoveContainer for \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\" returns successfully" Jul 2 07:46:12.379320 kubelet[2014]: I0702 07:46:12.379297 2014 scope.go:117] "RemoveContainer" containerID="dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9" Jul 2 07:46:12.379523 env[1207]: time="2024-07-02T07:46:12.379447269Z" level=error msg="ContainerStatus for \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\": not found" Jul 2 07:46:12.379628 kubelet[2014]: E0702 07:46:12.379608 2014 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\": not found" containerID="dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9" Jul 2 07:46:12.379698 kubelet[2014]: I0702 07:46:12.379645 2014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9"} err="failed to get container status \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9\": not found" Jul 2 07:46:12.379698 kubelet[2014]: I0702 07:46:12.379654 2014 scope.go:117] "RemoveContainer" containerID="9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc" Jul 2 07:46:12.379845 env[1207]: time="2024-07-02T07:46:12.379803799Z" level=error msg="ContainerStatus for \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\": not found" Jul 2 07:46:12.379926 kubelet[2014]: E0702 07:46:12.379910 2014 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\": not found" containerID="9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc" Jul 2 07:46:12.379926 kubelet[2014]: I0702 07:46:12.379929 2014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc"} err="failed to get container status \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cb5400b844c9fa55bdc0d42eabae1643edbb31f3ba848bfc1caaf239c23f4fc\": not found" Jul 2 07:46:12.379926 kubelet[2014]: I0702 07:46:12.379937 2014 scope.go:117] "RemoveContainer" containerID="7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209" Jul 2 07:46:12.380095 env[1207]: time="2024-07-02T07:46:12.380057754Z" level=error msg="ContainerStatus for \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\": not found" Jul 2 07:46:12.380248 kubelet[2014]: E0702 07:46:12.380213 2014 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\": not found" containerID="7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209" Jul 2 07:46:12.380296 kubelet[2014]: I0702 07:46:12.380268 2014 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209"} err="failed to get container status \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e0691d79805f29cba47c8956c64b1abe0d838f2c004da5c3bc5890374524209\": not found" Jul 2 07:46:12.380296 kubelet[2014]: I0702 07:46:12.380284 2014 scope.go:117] "RemoveContainer" containerID="7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1" Jul 2 07:46:12.380487 env[1207]: time="2024-07-02T07:46:12.380445546Z" level=error msg="ContainerStatus for \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\": not found" Jul 2 07:46:12.380638 kubelet[2014]: E0702 07:46:12.380624 2014 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\": not found" containerID="7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1" Jul 2 07:46:12.380700 kubelet[2014]: I0702 07:46:12.380648 2014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1"} err="failed to get container status \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b644f73c1357b17b32b963001fedcaacee599afbbefe49b6c8436d056edf2a1\": not found" Jul 2 07:46:12.380700 kubelet[2014]: I0702 07:46:12.380661 2014 scope.go:117] "RemoveContainer" containerID="792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2" Jul 2 07:46:12.380843 env[1207]: 
time="2024-07-02T07:46:12.380805303Z" level=error msg="ContainerStatus for \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\": not found" Jul 2 07:46:12.380930 kubelet[2014]: E0702 07:46:12.380916 2014 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\": not found" containerID="792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2" Jul 2 07:46:12.380962 kubelet[2014]: I0702 07:46:12.380940 2014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2"} err="failed to get container status \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"792277862d60e6521c70edd8686ae435154ea77c4bd08bae15bbf701ae6f46c2\": not found" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452324 2014 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452354 2014 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/955ebbac-3681-434b-9b33-486b48420698-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452365 2014 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 
2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452373 2014 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452382 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d997b1ee-7b07-4b6c-8ba8-f294efdff825-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452391 2014 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452398 2014 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452443 kubelet[2014]: I0702 07:46:12.452408 2014 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cvz98\" (UniqueName: \"kubernetes.io/projected/d997b1ee-7b07-4b6c-8ba8-f294efdff825-kube-api-access-cvz98\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452416 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/955ebbac-3681-434b-9b33-486b48420698-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452425 2014 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452433 2014 reconciler_common.go:300] "Volume 
detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452442 2014 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452451 2014 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x2cnp\" (UniqueName: \"kubernetes.io/projected/955ebbac-3681-434b-9b33-486b48420698-kube-api-access-x2cnp\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452460 2014 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.452751 kubelet[2014]: I0702 07:46:12.452470 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/955ebbac-3681-434b-9b33-486b48420698-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:12.651918 systemd[1]: Removed slice kubepods-besteffort-podd997b1ee_7b07_4b6c_8ba8_f294efdff825.slice. Jul 2 07:46:12.662796 systemd[1]: Removed slice kubepods-burstable-pod955ebbac_3681_434b_9b33_486b48420698.slice. Jul 2 07:46:12.662875 systemd[1]: kubepods-burstable-pod955ebbac_3681_434b_9b33_486b48420698.slice: Consumed 6.142s CPU time. 
Jul 2 07:46:13.197272 kubelet[2014]: E0702 07:46:13.197218 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:13.201709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd7f5985e0b3815227cfa9aaefb9c98137c7c93ac68edf86a6b07d4b8d25dda9-rootfs.mount: Deactivated successfully. Jul 2 07:46:13.201808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e-rootfs.mount: Deactivated successfully. Jul 2 07:46:13.201856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5194990bd5f9049e39d854029b8c0f05d56a5a76a9142d490fcc7d8cbe70a2e-shm.mount: Deactivated successfully. Jul 2 07:46:13.201906 systemd[1]: var-lib-kubelet-pods-d997b1ee\x2d7b07\x2d4b6c\x2d8ba8\x2df294efdff825-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvz98.mount: Deactivated successfully. Jul 2 07:46:13.201957 systemd[1]: var-lib-kubelet-pods-955ebbac\x2d3681\x2d434b\x2d9b33\x2d486b48420698-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx2cnp.mount: Deactivated successfully. Jul 2 07:46:13.202004 systemd[1]: var-lib-kubelet-pods-955ebbac\x2d3681\x2d434b\x2d9b33\x2d486b48420698-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:46:13.202056 systemd[1]: var-lib-kubelet-pods-955ebbac\x2d3681\x2d434b\x2d9b33\x2d486b48420698-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:46:14.015927 sshd[3632]: pam_unix(sshd:session): session closed for user core Jul 2 07:46:14.018857 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:58188.service: Deactivated successfully. Jul 2 07:46:14.019565 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:46:14.020165 systemd-logind[1191]: Session 23 logged out. Waiting for processes to exit. 
Jul 2 07:46:14.021378 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:38970.service. Jul 2 07:46:14.022254 systemd-logind[1191]: Removed session 23. Jul 2 07:46:14.061601 sshd[3795]: Accepted publickey for core from 10.0.0.1 port 38970 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:46:14.062711 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:46:14.066010 systemd-logind[1191]: New session 24 of user core. Jul 2 07:46:14.066926 systemd[1]: Started session-24.scope. Jul 2 07:46:14.197634 kubelet[2014]: E0702 07:46:14.197591 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:14.199603 kubelet[2014]: I0702 07:46:14.199560 2014 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="955ebbac-3681-434b-9b33-486b48420698" path="/var/lib/kubelet/pods/955ebbac-3681-434b-9b33-486b48420698/volumes" Jul 2 07:46:14.200205 kubelet[2014]: I0702 07:46:14.200178 2014 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d997b1ee-7b07-4b6c-8ba8-f294efdff825" path="/var/lib/kubelet/pods/d997b1ee-7b07-4b6c-8ba8-f294efdff825/volumes" Jul 2 07:46:14.465757 sshd[3795]: pam_unix(sshd:session): session closed for user core Jul 2 07:46:14.469653 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:38972.service. Jul 2 07:46:14.470827 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:38970.service: Deactivated successfully. Jul 2 07:46:14.471850 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:46:14.472429 systemd-logind[1191]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:46:14.473209 systemd-logind[1191]: Removed session 24. 
Jul 2 07:46:14.496639 kubelet[2014]: I0702 07:46:14.496610 2014 topology_manager.go:215] "Topology Admit Handler" podUID="d10800de-9e83-4571-9d50-48bcb31fdef0" podNamespace="kube-system" podName="cilium-s69qr" Jul 2 07:46:14.496810 kubelet[2014]: E0702 07:46:14.496661 2014 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d997b1ee-7b07-4b6c-8ba8-f294efdff825" containerName="cilium-operator" Jul 2 07:46:14.496810 kubelet[2014]: E0702 07:46:14.496669 2014 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955ebbac-3681-434b-9b33-486b48420698" containerName="clean-cilium-state" Jul 2 07:46:14.496810 kubelet[2014]: E0702 07:46:14.496675 2014 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955ebbac-3681-434b-9b33-486b48420698" containerName="cilium-agent" Jul 2 07:46:14.496810 kubelet[2014]: E0702 07:46:14.496682 2014 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955ebbac-3681-434b-9b33-486b48420698" containerName="mount-cgroup" Jul 2 07:46:14.496810 kubelet[2014]: E0702 07:46:14.496688 2014 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955ebbac-3681-434b-9b33-486b48420698" containerName="apply-sysctl-overwrites" Jul 2 07:46:14.496810 kubelet[2014]: E0702 07:46:14.496701 2014 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="955ebbac-3681-434b-9b33-486b48420698" containerName="mount-bpf-fs" Jul 2 07:46:14.496810 kubelet[2014]: I0702 07:46:14.496720 2014 memory_manager.go:354] "RemoveStaleState removing state" podUID="955ebbac-3681-434b-9b33-486b48420698" containerName="cilium-agent" Jul 2 07:46:14.496810 kubelet[2014]: I0702 07:46:14.496726 2014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d997b1ee-7b07-4b6c-8ba8-f294efdff825" containerName="cilium-operator" Jul 2 07:46:14.500566 systemd[1]: Created slice kubepods-burstable-podd10800de_9e83_4571_9d50_48bcb31fdef0.slice. 
Jul 2 07:46:14.515719 sshd[3807]: Accepted publickey for core from 10.0.0.1 port 38972 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:46:14.516933 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:46:14.527821 systemd[1]: Started session-25.scope. Jul 2 07:46:14.528684 systemd-logind[1191]: New session 25 of user core. Jul 2 07:46:14.563612 kubelet[2014]: I0702 07:46:14.563161 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-run\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563612 kubelet[2014]: I0702 07:46:14.563206 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-hostproc\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563612 kubelet[2014]: I0702 07:46:14.563224 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-kernel\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563612 kubelet[2014]: I0702 07:46:14.563244 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-clustermesh-secrets\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563612 kubelet[2014]: I0702 07:46:14.563264 2014 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-config-path\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563612 kubelet[2014]: I0702 07:46:14.563279 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-net\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563921 kubelet[2014]: I0702 07:46:14.563296 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-cgroup\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563921 kubelet[2014]: I0702 07:46:14.563311 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-xtables-lock\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563921 kubelet[2014]: I0702 07:46:14.563329 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-bpf-maps\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563921 kubelet[2014]: I0702 07:46:14.563348 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-ipsec-secrets\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563921 kubelet[2014]: I0702 07:46:14.563365 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-hubble-tls\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.563921 kubelet[2014]: I0702 07:46:14.563382 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d26pf\" (UniqueName: \"kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-kube-api-access-d26pf\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.564054 kubelet[2014]: I0702 07:46:14.563398 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cni-path\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.564054 kubelet[2014]: I0702 07:46:14.563421 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-etc-cni-netd\") pod \"cilium-s69qr\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.564054 kubelet[2014]: I0702 07:46:14.563438 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-lib-modules\") pod \"cilium-s69qr\" (UID: 
\"d10800de-9e83-4571-9d50-48bcb31fdef0\") " pod="kube-system/cilium-s69qr" Jul 2 07:46:14.644395 sshd[3807]: pam_unix(sshd:session): session closed for user core Jul 2 07:46:14.647459 systemd[1]: Started sshd@25-10.0.0.43:22-10.0.0.1:38986.service. Jul 2 07:46:14.649820 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:38972.service: Deactivated successfully. Jul 2 07:46:14.650412 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 07:46:14.651324 systemd-logind[1191]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:46:14.653910 systemd-logind[1191]: Removed session 25. Jul 2 07:46:14.655054 kubelet[2014]: E0702 07:46:14.655018 2014 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-d26pf lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-s69qr" podUID="d10800de-9e83-4571-9d50-48bcb31fdef0" Jul 2 07:46:14.690503 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 38986 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:46:14.691831 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:46:14.694985 systemd-logind[1191]: New session 26 of user core. Jul 2 07:46:14.695802 systemd[1]: Started session-26.scope. 
Jul 2 07:46:15.228527 kubelet[2014]: E0702 07:46:15.228483 2014 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:46:15.468335 kubelet[2014]: I0702 07:46:15.468274 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-kernel\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468335 kubelet[2014]: I0702 07:46:15.468326 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-ipsec-secrets\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468335 kubelet[2014]: I0702 07:46:15.468342 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-bpf-maps\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468603 kubelet[2014]: I0702 07:46:15.468360 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d26pf\" (UniqueName: \"kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-kube-api-access-d26pf\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468603 kubelet[2014]: I0702 07:46:15.468375 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-cgroup\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: 
\"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468603 kubelet[2014]: I0702 07:46:15.468389 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-net\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468603 kubelet[2014]: I0702 07:46:15.468405 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-lib-modules\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468603 kubelet[2014]: I0702 07:46:15.468404 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.468603 kubelet[2014]: I0702 07:46:15.468425 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-config-path\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468774 kubelet[2014]: I0702 07:46:15.468439 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-xtables-lock\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468774 kubelet[2014]: I0702 07:46:15.468446 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.468774 kubelet[2014]: I0702 07:46:15.468453 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-run\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468774 kubelet[2014]: I0702 07:46:15.468450 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.468774 kubelet[2014]: I0702 07:46:15.468487 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cni-path\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468897 kubelet[2014]: I0702 07:46:15.468467 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.468897 kubelet[2014]: I0702 07:46:15.468478 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.468897 kubelet[2014]: I0702 07:46:15.468536 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-hubble-tls\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468897 kubelet[2014]: I0702 07:46:15.468579 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-etc-cni-netd\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468897 kubelet[2014]: I0702 07:46:15.468602 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-hostproc\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.468897 kubelet[2014]: I0702 07:46:15.468625 2014 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-clustermesh-secrets\") pod \"d10800de-9e83-4571-9d50-48bcb31fdef0\" (UID: \"d10800de-9e83-4571-9d50-48bcb31fdef0\") " Jul 2 07:46:15.469066 kubelet[2014]: I0702 07:46:15.468669 2014 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.469066 kubelet[2014]: I0702 07:46:15.468678 2014 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 
07:46:15.469066 kubelet[2014]: I0702 07:46:15.468686 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.469066 kubelet[2014]: I0702 07:46:15.468705 2014 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.469066 kubelet[2014]: I0702 07:46:15.468715 2014 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.469066 kubelet[2014]: I0702 07:46:15.469029 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.470052 kubelet[2014]: I0702 07:46:15.470028 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:46:15.470107 kubelet[2014]: I0702 07:46:15.470059 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.470107 kubelet[2014]: I0702 07:46:15.470072 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.471633 kubelet[2014]: I0702 07:46:15.471613 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:46:15.471762 kubelet[2014]: I0702 07:46:15.471743 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-hostproc" (OuterVolumeSpecName: "hostproc") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.471856 kubelet[2014]: I0702 07:46:15.471839 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cni-path" (OuterVolumeSpecName: "cni-path") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:46:15.472170 systemd[1]: var-lib-kubelet-pods-d10800de\x2d9e83\x2d4571\x2d9d50\x2d48bcb31fdef0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:46:15.473659 kubelet[2014]: I0702 07:46:15.472253 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:46:15.473659 kubelet[2014]: I0702 07:46:15.472925 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:46:15.473659 kubelet[2014]: I0702 07:46:15.473450 2014 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-kube-api-access-d26pf" (OuterVolumeSpecName: "kube-api-access-d26pf") pod "d10800de-9e83-4571-9d50-48bcb31fdef0" (UID: "d10800de-9e83-4571-9d50-48bcb31fdef0"). InnerVolumeSpecName "kube-api-access-d26pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:46:15.473973 systemd[1]: var-lib-kubelet-pods-d10800de\x2d9e83\x2d4571\x2d9d50\x2d48bcb31fdef0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd26pf.mount: Deactivated successfully. Jul 2 07:46:15.474043 systemd[1]: var-lib-kubelet-pods-d10800de\x2d9e83\x2d4571\x2d9d50\x2d48bcb31fdef0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:46:15.474090 systemd[1]: var-lib-kubelet-pods-d10800de\x2d9e83\x2d4571\x2d9d50\x2d48bcb31fdef0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569749 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569772 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569782 2014 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569791 2014 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569800 2014 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 
07:46:15.569819 kubelet[2014]: I0702 07:46:15.569807 2014 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569815 2014 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d10800de-9e83-4571-9d50-48bcb31fdef0-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.569819 kubelet[2014]: I0702 07:46:15.569823 2014 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.570041 kubelet[2014]: I0702 07:46:15.569831 2014 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d10800de-9e83-4571-9d50-48bcb31fdef0-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:15.570041 kubelet[2014]: I0702 07:46:15.569841 2014 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d26pf\" (UniqueName: \"kubernetes.io/projected/d10800de-9e83-4571-9d50-48bcb31fdef0-kube-api-access-d26pf\") on node \"localhost\" DevicePath \"\"" Jul 2 07:46:16.202015 systemd[1]: Removed slice kubepods-burstable-podd10800de_9e83_4571_9d50_48bcb31fdef0.slice. Jul 2 07:46:16.391035 kubelet[2014]: I0702 07:46:16.390985 2014 topology_manager.go:215] "Topology Admit Handler" podUID="91b98e07-a658-448a-a8c6-bbfd26411605" podNamespace="kube-system" podName="cilium-ztdzn" Jul 2 07:46:16.396547 systemd[1]: Created slice kubepods-burstable-pod91b98e07_a658_448a_a8c6_bbfd26411605.slice. 
Jul 2 07:46:16.474600 kubelet[2014]: I0702 07:46:16.474566 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-lib-modules\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474738 kubelet[2014]: I0702 07:46:16.474609 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-cilium-run\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474738 kubelet[2014]: I0702 07:46:16.474628 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91b98e07-a658-448a-a8c6-bbfd26411605-clustermesh-secrets\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474738 kubelet[2014]: I0702 07:46:16.474647 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljk4m\" (UniqueName: \"kubernetes.io/projected/91b98e07-a658-448a-a8c6-bbfd26411605-kube-api-access-ljk4m\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474738 kubelet[2014]: I0702 07:46:16.474665 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-bpf-maps\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474738 kubelet[2014]: I0702 07:46:16.474714 2014 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-cilium-cgroup\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474863 kubelet[2014]: I0702 07:46:16.474741 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91b98e07-a658-448a-a8c6-bbfd26411605-cilium-ipsec-secrets\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474863 kubelet[2014]: I0702 07:46:16.474760 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91b98e07-a658-448a-a8c6-bbfd26411605-cilium-config-path\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474863 kubelet[2014]: I0702 07:46:16.474841 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-host-proc-sys-kernel\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474929 kubelet[2014]: I0702 07:46:16.474902 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-hostproc\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474958 kubelet[2014]: I0702 07:46:16.474936 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-host-proc-sys-net\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.474958 kubelet[2014]: I0702 07:46:16.474954 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91b98e07-a658-448a-a8c6-bbfd26411605-hubble-tls\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.475024 kubelet[2014]: I0702 07:46:16.475011 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-xtables-lock\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.475068 kubelet[2014]: I0702 07:46:16.475057 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-etc-cni-netd\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.475112 kubelet[2014]: I0702 07:46:16.475099 2014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91b98e07-a658-448a-a8c6-bbfd26411605-cni-path\") pod \"cilium-ztdzn\" (UID: \"91b98e07-a658-448a-a8c6-bbfd26411605\") " pod="kube-system/cilium-ztdzn" Jul 2 07:46:16.699686 kubelet[2014]: E0702 07:46:16.699644 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:16.700223 env[1207]: time="2024-07-02T07:46:16.700163592Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-ztdzn,Uid:91b98e07-a658-448a-a8c6-bbfd26411605,Namespace:kube-system,Attempt:0,}" Jul 2 07:46:16.788474 env[1207]: time="2024-07-02T07:46:16.788346795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:46:16.788474 env[1207]: time="2024-07-02T07:46:16.788390830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:46:16.788474 env[1207]: time="2024-07-02T07:46:16.788404425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:46:16.788621 env[1207]: time="2024-07-02T07:46:16.788578668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2 pid=3850 runtime=io.containerd.runc.v2 Jul 2 07:46:16.802756 systemd[1]: Started cri-containerd-6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2.scope. 
Jul 2 07:46:16.826778 env[1207]: time="2024-07-02T07:46:16.826740809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztdzn,Uid:91b98e07-a658-448a-a8c6-bbfd26411605,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\"" Jul 2 07:46:16.827698 kubelet[2014]: E0702 07:46:16.827483 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:16.829028 env[1207]: time="2024-07-02T07:46:16.829007964Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:46:16.844584 env[1207]: time="2024-07-02T07:46:16.844486968Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b\"" Jul 2 07:46:16.844949 env[1207]: time="2024-07-02T07:46:16.844927148Z" level=info msg="StartContainer for \"360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b\"" Jul 2 07:46:16.857978 systemd[1]: Started cri-containerd-360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b.scope. Jul 2 07:46:16.931424 env[1207]: time="2024-07-02T07:46:16.931361275Z" level=info msg="StartContainer for \"360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b\" returns successfully" Jul 2 07:46:16.931402 systemd[1]: cri-containerd-360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b.scope: Deactivated successfully. 
Jul 2 07:46:17.046286 env[1207]: time="2024-07-02T07:46:17.046144078Z" level=info msg="shim disconnected" id=360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b Jul 2 07:46:17.046286 env[1207]: time="2024-07-02T07:46:17.046191179Z" level=warning msg="cleaning up after shim disconnected" id=360d63dbf4e5af3f659c1cfde31e9431c2ce36f160300e1a7cc1781462251a8b namespace=k8s.io Jul 2 07:46:17.046286 env[1207]: time="2024-07-02T07:46:17.046202390Z" level=info msg="cleaning up dead shim" Jul 2 07:46:17.052370 env[1207]: time="2024-07-02T07:46:17.052329792Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3938 runtime=io.containerd.runc.v2\n" Jul 2 07:46:17.197149 kubelet[2014]: E0702 07:46:17.197113 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:17.369695 kubelet[2014]: E0702 07:46:17.369361 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:17.370998 env[1207]: time="2024-07-02T07:46:17.370964434Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:46:17.456695 env[1207]: time="2024-07-02T07:46:17.456630569Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312\"" Jul 2 07:46:17.457234 env[1207]: time="2024-07-02T07:46:17.457188653Z" level=info msg="StartContainer for \"383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312\"" Jul 2 
07:46:17.471011 systemd[1]: Started cri-containerd-383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312.scope. Jul 2 07:46:17.495957 env[1207]: time="2024-07-02T07:46:17.495903098Z" level=info msg="StartContainer for \"383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312\" returns successfully" Jul 2 07:46:17.499201 systemd[1]: cri-containerd-383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312.scope: Deactivated successfully. Jul 2 07:46:17.521726 env[1207]: time="2024-07-02T07:46:17.521650018Z" level=info msg="shim disconnected" id=383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312 Jul 2 07:46:17.521726 env[1207]: time="2024-07-02T07:46:17.521722367Z" level=warning msg="cleaning up after shim disconnected" id=383d60d9bd8db9c13078f53cc9bea0ad61bffad06e43a2065357c3a2e9e6b312 namespace=k8s.io Jul 2 07:46:17.521726 env[1207]: time="2024-07-02T07:46:17.521732756Z" level=info msg="cleaning up dead shim" Jul 2 07:46:17.531082 env[1207]: time="2024-07-02T07:46:17.531020695Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3999 runtime=io.containerd.runc.v2\n" Jul 2 07:46:17.785192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671007497.mount: Deactivated successfully. 
Jul 2 07:46:18.199281 kubelet[2014]: I0702 07:46:18.199164 2014 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d10800de-9e83-4571-9d50-48bcb31fdef0" path="/var/lib/kubelet/pods/d10800de-9e83-4571-9d50-48bcb31fdef0/volumes" Jul 2 07:46:18.372955 kubelet[2014]: E0702 07:46:18.372926 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:18.374725 env[1207]: time="2024-07-02T07:46:18.374675509Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:46:18.834635 env[1207]: time="2024-07-02T07:46:18.834576305Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679\"" Jul 2 07:46:18.835408 env[1207]: time="2024-07-02T07:46:18.835146222Z" level=info msg="StartContainer for \"eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679\"" Jul 2 07:46:18.859208 systemd[1]: run-containerd-runc-k8s.io-eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679-runc.jaqqoV.mount: Deactivated successfully. Jul 2 07:46:18.865474 systemd[1]: Started cri-containerd-eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679.scope. Jul 2 07:46:18.890006 systemd[1]: cri-containerd-eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679.scope: Deactivated successfully. 
Jul 2 07:46:18.890184 env[1207]: time="2024-07-02T07:46:18.890063027Z" level=info msg="StartContainer for \"eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679\" returns successfully" Jul 2 07:46:18.907135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679-rootfs.mount: Deactivated successfully. Jul 2 07:46:18.912783 env[1207]: time="2024-07-02T07:46:18.912730949Z" level=info msg="shim disconnected" id=eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679 Jul 2 07:46:18.912783 env[1207]: time="2024-07-02T07:46:18.912777588Z" level=warning msg="cleaning up after shim disconnected" id=eb03f4e876225642e13e5b4c767aa75b8de340bf47b32536f6ba0114b5d87679 namespace=k8s.io Jul 2 07:46:18.912783 env[1207]: time="2024-07-02T07:46:18.912787236Z" level=info msg="cleaning up dead shim" Jul 2 07:46:18.920098 env[1207]: time="2024-07-02T07:46:18.920057442Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\n" Jul 2 07:46:19.375501 kubelet[2014]: E0702 07:46:19.375459 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:19.376949 env[1207]: time="2024-07-02T07:46:19.376911735Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:46:19.656073 env[1207]: time="2024-07-02T07:46:19.655969061Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea\"" Jul 2 07:46:19.656441 env[1207]: 
time="2024-07-02T07:46:19.656411323Z" level=info msg="StartContainer for \"ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea\"" Jul 2 07:46:19.668281 systemd[1]: Started cri-containerd-ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea.scope. Jul 2 07:46:19.688094 systemd[1]: cri-containerd-ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea.scope: Deactivated successfully. Jul 2 07:46:19.691638 env[1207]: time="2024-07-02T07:46:19.691600213Z" level=info msg="StartContainer for \"ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea\" returns successfully" Jul 2 07:46:19.708606 env[1207]: time="2024-07-02T07:46:19.708557188Z" level=info msg="shim disconnected" id=ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea Jul 2 07:46:19.708606 env[1207]: time="2024-07-02T07:46:19.708597415Z" level=warning msg="cleaning up after shim disconnected" id=ad455820d6b7b2a78c5ee8abc5f3b41e245aade886b4446f1956e9f7301a71ea namespace=k8s.io Jul 2 07:46:19.708606 env[1207]: time="2024-07-02T07:46:19.708607945Z" level=info msg="cleaning up dead shim" Jul 2 07:46:19.715052 env[1207]: time="2024-07-02T07:46:19.715008009Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:46:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4109 runtime=io.containerd.runc.v2\n" Jul 2 07:46:20.229340 kubelet[2014]: E0702 07:46:20.229311 2014 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:46:20.378799 kubelet[2014]: E0702 07:46:20.378771 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:20.380764 env[1207]: time="2024-07-02T07:46:20.380708564Z" level=info msg="CreateContainer within sandbox 
\"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:46:20.394890 env[1207]: time="2024-07-02T07:46:20.394830197Z" level=info msg="CreateContainer within sandbox \"6e2a3c29841d889a17a25e4cfbfa657c477fdc1a6043fbb76b67efa2682cfcb2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef549d6444e72ae53e834ce4e8b9f910300ec0450c308bfe1559065b2460e0d6\"" Jul 2 07:46:20.395305 env[1207]: time="2024-07-02T07:46:20.395283570Z" level=info msg="StartContainer for \"ef549d6444e72ae53e834ce4e8b9f910300ec0450c308bfe1559065b2460e0d6\"" Jul 2 07:46:20.410164 systemd[1]: Started cri-containerd-ef549d6444e72ae53e834ce4e8b9f910300ec0450c308bfe1559065b2460e0d6.scope. Jul 2 07:46:20.432264 env[1207]: time="2024-07-02T07:46:20.432222588Z" level=info msg="StartContainer for \"ef549d6444e72ae53e834ce4e8b9f910300ec0450c308bfe1559065b2460e0d6\" returns successfully" Jul 2 07:46:20.676548 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:46:20.842402 systemd[1]: run-containerd-runc-k8s.io-ef549d6444e72ae53e834ce4e8b9f910300ec0450c308bfe1559065b2460e0d6-runc.6xsAoe.mount: Deactivated successfully. 
Jul 2 07:46:21.384012 kubelet[2014]: E0702 07:46:21.383974 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:21.395252 kubelet[2014]: I0702 07:46:21.395199 2014 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ztdzn" podStartSLOduration=5.395157108 podStartE2EDuration="5.395157108s" podCreationTimestamp="2024-07-02 07:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:46:21.394567125 +0000 UTC m=+91.299082840" watchObservedRunningTime="2024-07-02 07:46:21.395157108 +0000 UTC m=+91.299672803" Jul 2 07:46:22.369663 kubelet[2014]: I0702 07:46:22.369637 2014 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:46:22Z","lastTransitionTime":"2024-07-02T07:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:46:22.701411 kubelet[2014]: E0702 07:46:22.701298 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:23.170835 systemd-networkd[1022]: lxc_health: Link UP Jul 2 07:46:23.179613 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:46:23.179560 systemd-networkd[1022]: lxc_health: Gained carrier Jul 2 07:46:24.626664 systemd-networkd[1022]: lxc_health: Gained IPv6LL Jul 2 07:46:24.701883 kubelet[2014]: E0702 07:46:24.701844 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 2 07:46:25.095872 systemd[1]: run-containerd-runc-k8s.io-ef549d6444e72ae53e834ce4e8b9f910300ec0450c308bfe1559065b2460e0d6-runc.ZSZuFb.mount: Deactivated successfully. Jul 2 07:46:25.391028 kubelet[2014]: E0702 07:46:25.390920 2014 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:46:29.310441 sshd[3820]: pam_unix(sshd:session): session closed for user core Jul 2 07:46:29.312463 systemd[1]: sshd@25-10.0.0.43:22-10.0.0.1:38986.service: Deactivated successfully. Jul 2 07:46:29.313164 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 07:46:29.313682 systemd-logind[1191]: Session 26 logged out. Waiting for processes to exit. Jul 2 07:46:29.314339 systemd-logind[1191]: Removed session 26.