Jul 2 07:42:17.813549 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 07:42:17.813567 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:42:17.813575 kernel: BIOS-provided physical RAM map:
Jul 2 07:42:17.813581 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 07:42:17.813594 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 07:42:17.813600 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 07:42:17.813606 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jul 2 07:42:17.813612 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jul 2 07:42:17.813619 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 07:42:17.813624 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 07:42:17.813629 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 2 07:42:17.813635 kernel: NX (Execute Disable) protection: active
Jul 2 07:42:17.813640 kernel: SMBIOS 2.8 present.
Jul 2 07:42:17.813646 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 2 07:42:17.813654 kernel: Hypervisor detected: KVM
Jul 2 07:42:17.813660 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 07:42:17.813666 kernel: kvm-clock: cpu 0, msr d192001, primary cpu clock
Jul 2 07:42:17.813672 kernel: kvm-clock: using sched offset of 2378280284 cycles
Jul 2 07:42:17.813678 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 07:42:17.813684 kernel: tsc: Detected 2794.748 MHz processor
Jul 2 07:42:17.813690 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 07:42:17.813696 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 07:42:17.813702 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jul 2 07:42:17.813709 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 07:42:17.813715 kernel: Using GB pages for direct mapping
Jul 2 07:42:17.813721 kernel: ACPI: Early table checksum verification disabled
Jul 2 07:42:17.813727 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jul 2 07:42:17.813733 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:42:17.813739 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:42:17.813745 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:42:17.813751 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 2 07:42:17.813757 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:42:17.813764 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:42:17.813770 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:42:17.813776 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jul 2 07:42:17.813782 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jul 2 07:42:17.813788 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 2 07:42:17.813794 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jul 2 07:42:17.813800 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jul 2 07:42:17.813806 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jul 2 07:42:17.813815 kernel: No NUMA configuration found
Jul 2 07:42:17.813821 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jul 2 07:42:17.813828 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jul 2 07:42:17.813834 kernel: Zone ranges:
Jul 2 07:42:17.813841 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 07:42:17.813847 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jul 2 07:42:17.813855 kernel: Normal empty
Jul 2 07:42:17.813861 kernel: Movable zone start for each node
Jul 2 07:42:17.813868 kernel: Early memory node ranges
Jul 2 07:42:17.813874 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 07:42:17.813880 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jul 2 07:42:17.813887 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jul 2 07:42:17.813893 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:42:17.813899 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 07:42:17.813906 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jul 2 07:42:17.813913 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 07:42:17.813919 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 07:42:17.813926 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 07:42:17.813932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 07:42:17.813939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 07:42:17.813945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 07:42:17.813952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 07:42:17.813958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 07:42:17.813964 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 07:42:17.813972 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 07:42:17.813978 kernel: TSC deadline timer available
Jul 2 07:42:17.813984 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 07:42:17.813991 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 07:42:17.813997 kernel: kvm-guest: setup PV sched yield
Jul 2 07:42:17.814003 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jul 2 07:42:17.814010 kernel: Booting paravirtualized kernel on KVM
Jul 2 07:42:17.814016 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 07:42:17.814023 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 2 07:42:17.814029 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 2 07:42:17.814037 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 2 07:42:17.814043 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 07:42:17.814049 kernel: kvm-guest: setup async PF for cpu 0
Jul 2 07:42:17.814055 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 2 07:42:17.814062 kernel: kvm-guest: PV spinlocks enabled
Jul 2 07:42:17.814068 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 07:42:17.814074 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jul 2 07:42:17.814081 kernel: Policy zone: DMA32
Jul 2 07:42:17.814088 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:42:17.814096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 07:42:17.814103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 07:42:17.814109 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 07:42:17.814116 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 07:42:17.814122 kernel: Memory: 2436704K/2571756K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 134792K reserved, 0K cma-reserved)
Jul 2 07:42:17.814129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 07:42:17.814135 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 07:42:17.814142 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 07:42:17.814149 kernel: rcu: Hierarchical RCU implementation.
Jul 2 07:42:17.814156 kernel: rcu: RCU event tracing is enabled.
Jul 2 07:42:17.814163 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 07:42:17.814169 kernel: Rude variant of Tasks RCU enabled.
Jul 2 07:42:17.814176 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 07:42:17.814182 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 07:42:17.814189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 07:42:17.814195 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 07:42:17.814201 kernel: random: crng init done
Jul 2 07:42:17.814209 kernel: Console: colour VGA+ 80x25
Jul 2 07:42:17.814215 kernel: printk: console [ttyS0] enabled
Jul 2 07:42:17.814222 kernel: ACPI: Core revision 20210730
Jul 2 07:42:17.814228 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 07:42:17.814235 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 07:42:17.814241 kernel: x2apic enabled
Jul 2 07:42:17.814247 kernel: Switched APIC routing to physical x2apic.
Jul 2 07:42:17.814254 kernel: kvm-guest: setup PV IPIs
Jul 2 07:42:17.814260 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 07:42:17.814277 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 07:42:17.814292 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 2 07:42:17.814311 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 07:42:17.814320 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 07:42:17.814327 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 07:42:17.814333 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 07:42:17.814340 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 07:42:17.814347 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 07:42:17.814353 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 07:42:17.814375 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 07:42:17.814382 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 07:42:17.814389 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 07:42:17.814397 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 07:42:17.814404 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 07:42:17.814410 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 07:42:17.814417 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 07:42:17.814424 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 07:42:17.814431 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 07:42:17.814439 kernel: Freeing SMP alternatives memory: 32K
Jul 2 07:42:17.814445 kernel: pid_max: default: 32768 minimum: 301
Jul 2 07:42:17.814452 kernel: LSM: Security Framework initializing
Jul 2 07:42:17.814459 kernel: SELinux: Initializing.
Jul 2 07:42:17.814465 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 07:42:17.814472 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 07:42:17.814479 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 07:42:17.814487 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 07:42:17.814494 kernel: ... version: 0
Jul 2 07:42:17.814500 kernel: ... bit width: 48
Jul 2 07:42:17.814510 kernel: ... generic registers: 6
Jul 2 07:42:17.814517 kernel: ... value mask: 0000ffffffffffff
Jul 2 07:42:17.814524 kernel: ... max period: 00007fffffffffff
Jul 2 07:42:17.814531 kernel: ... fixed-purpose events: 0
Jul 2 07:42:17.814537 kernel: ... event mask: 000000000000003f
Jul 2 07:42:17.814544 kernel: signal: max sigframe size: 1776
Jul 2 07:42:17.814552 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 07:42:17.814558 kernel: smp: Bringing up secondary CPUs ...
Jul 2 07:42:17.814565 kernel: x86: Booting SMP configuration:
Jul 2 07:42:17.814572 kernel: .... node #0, CPUs: #1
Jul 2 07:42:17.814578 kernel: kvm-clock: cpu 1, msr d192041, secondary cpu clock
Jul 2 07:42:17.814591 kernel: kvm-guest: setup async PF for cpu 1
Jul 2 07:42:17.814598 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 2 07:42:17.814605 kernel: #2
Jul 2 07:42:17.814612 kernel: kvm-clock: cpu 2, msr d192081, secondary cpu clock
Jul 2 07:42:17.814619 kernel: kvm-guest: setup async PF for cpu 2
Jul 2 07:42:17.814627 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 2 07:42:17.814634 kernel: #3
Jul 2 07:42:17.814640 kernel: kvm-clock: cpu 3, msr d1920c1, secondary cpu clock
Jul 2 07:42:17.814647 kernel: kvm-guest: setup async PF for cpu 3
Jul 2 07:42:17.814653 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 2 07:42:17.814660 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 07:42:17.814667 kernel: smpboot: Max logical packages: 1
Jul 2 07:42:17.814674 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 2 07:42:17.814680 kernel: devtmpfs: initialized
Jul 2 07:42:17.814688 kernel: x86/mm: Memory block size: 128MB
Jul 2 07:42:17.814695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 07:42:17.814702 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 07:42:17.814709 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 07:42:17.814715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 07:42:17.814722 kernel: audit: initializing netlink subsys (disabled)
Jul 2 07:42:17.814729 kernel: audit: type=2000 audit(1719906137.513:1): state=initialized audit_enabled=0 res=1
Jul 2 07:42:17.814735 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 07:42:17.814742 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 07:42:17.814750 kernel: cpuidle: using governor menu
Jul 2 07:42:17.814756 kernel: ACPI: bus type PCI registered
Jul 2 07:42:17.814763 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 07:42:17.814770 kernel: dca service started, version 1.12.1
Jul 2 07:42:17.814777 kernel: PCI: Using configuration type 1 for base access
Jul 2 07:42:17.814783 kernel: PCI: Using configuration type 1 for extended access
Jul 2 07:42:17.814790 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 07:42:17.814797 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 07:42:17.814804 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 07:42:17.814811 kernel: ACPI: Added _OSI(Module Device)
Jul 2 07:42:17.814818 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 07:42:17.814825 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 07:42:17.814831 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 07:42:17.814838 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 07:42:17.814845 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 07:42:17.814851 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 07:42:17.814858 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 07:42:17.814865 kernel: ACPI: Interpreter enabled
Jul 2 07:42:17.814871 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 07:42:17.814879 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 07:42:17.814886 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 07:42:17.814893 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 07:42:17.814899 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 07:42:17.815009 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 07:42:17.815032 kernel: acpiphp: Slot [3] registered
Jul 2 07:42:17.815039 kernel: acpiphp: Slot [4] registered
Jul 2 07:42:17.815047 kernel: acpiphp: Slot [5] registered
Jul 2 07:42:17.815054 kernel: acpiphp: Slot [6] registered
Jul 2 07:42:17.815060 kernel: acpiphp: Slot [7] registered
Jul 2 07:42:17.815067 kernel: acpiphp: Slot [8] registered
Jul 2 07:42:17.815074 kernel: acpiphp: Slot [9] registered
Jul 2 07:42:17.815080 kernel: acpiphp: Slot [10] registered
Jul 2 07:42:17.815087 kernel: acpiphp: Slot [11] registered
Jul 2 07:42:17.815093 kernel: acpiphp: Slot [12] registered
Jul 2 07:42:17.815100 kernel: acpiphp: Slot [13] registered
Jul 2 07:42:17.815107 kernel: acpiphp: Slot [14] registered
Jul 2 07:42:17.815115 kernel: acpiphp: Slot [15] registered
Jul 2 07:42:17.815121 kernel: acpiphp: Slot [16] registered
Jul 2 07:42:17.815128 kernel: acpiphp: Slot [17] registered
Jul 2 07:42:17.815135 kernel: acpiphp: Slot [18] registered
Jul 2 07:42:17.815141 kernel: acpiphp: Slot [19] registered
Jul 2 07:42:17.815148 kernel: acpiphp: Slot [20] registered
Jul 2 07:42:17.815154 kernel: acpiphp: Slot [21] registered
Jul 2 07:42:17.815161 kernel: acpiphp: Slot [22] registered
Jul 2 07:42:17.815168 kernel: acpiphp: Slot [23] registered
Jul 2 07:42:17.815175 kernel: acpiphp: Slot [24] registered
Jul 2 07:42:17.815182 kernel: acpiphp: Slot [25] registered
Jul 2 07:42:17.815188 kernel: acpiphp: Slot [26] registered
Jul 2 07:42:17.815195 kernel: acpiphp: Slot [27] registered
Jul 2 07:42:17.815201 kernel: acpiphp: Slot [28] registered
Jul 2 07:42:17.815208 kernel: acpiphp: Slot [29] registered
Jul 2 07:42:17.815215 kernel: acpiphp: Slot [30] registered
Jul 2 07:42:17.815221 kernel: acpiphp: Slot [31] registered
Jul 2 07:42:17.815228 kernel: PCI host bridge to bus 0000:00
Jul 2 07:42:17.815306 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 07:42:17.815396 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 07:42:17.815461 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 07:42:17.815520 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 07:42:17.815581 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 07:42:17.815649 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 07:42:17.815730 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 07:42:17.815809 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 07:42:17.815891 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 07:42:17.815961 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 07:42:17.816029 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 07:42:17.816098 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 07:42:17.816168 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 07:42:17.816236 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 07:42:17.816314 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 07:42:17.816396 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 07:42:17.816487 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 07:42:17.816563 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 07:42:17.816682 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 2 07:42:17.816760 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 2 07:42:17.816832 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 2 07:42:17.816898 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 07:42:17.816974 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 07:42:17.817044 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 07:42:17.817115 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 2 07:42:17.817182 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 2 07:42:17.817257 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 07:42:17.817327 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 07:42:17.817410 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 2 07:42:17.817478 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 2 07:42:17.817553 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 07:42:17.817631 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 07:42:17.817701 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 2 07:42:17.817767 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 2 07:42:17.817838 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 2 07:42:17.817847 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 07:42:17.817855 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 07:42:17.817862 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 07:42:17.817868 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 07:42:17.817875 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 07:42:17.817882 kernel: iommu: Default domain type: Translated
Jul 2 07:42:17.817889 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 07:42:17.817956 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 07:42:17.818026 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 07:42:17.818094 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 07:42:17.818102 kernel: vgaarb: loaded
Jul 2 07:42:17.818109 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 07:42:17.818116 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 07:42:17.818123 kernel: PTP clock support registered
Jul 2 07:42:17.818130 kernel: PCI: Using ACPI for IRQ routing
Jul 2 07:42:17.818137 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 07:42:17.818145 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 07:42:17.818152 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jul 2 07:42:17.818159 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 07:42:17.818166 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 07:42:17.818172 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 07:42:17.818179 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 07:42:17.818186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 07:42:17.818193 kernel: pnp: PnP ACPI init
Jul 2 07:42:17.818269 kernel: pnp 00:02: [dma 2]
Jul 2 07:42:17.818280 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 07:42:17.818287 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 07:42:17.818294 kernel: NET: Registered PF_INET protocol family
Jul 2 07:42:17.818301 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 07:42:17.818308 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 07:42:17.818315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 07:42:17.818322 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 07:42:17.818329 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 07:42:17.818337 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 07:42:17.818344 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 07:42:17.818351 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 07:42:17.818358 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 07:42:17.818377 kernel: NET: Registered PF_XDP protocol family
Jul 2 07:42:17.818441 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 07:42:17.818501 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 07:42:17.818560 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 07:42:17.818629 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 07:42:17.818749 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 07:42:17.819480 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 07:42:17.819552 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 07:42:17.819632 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 07:42:17.819641 kernel: PCI: CLS 0 bytes, default 64
Jul 2 07:42:17.819649 kernel: Initialise system trusted keyrings
Jul 2 07:42:17.819655 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 07:42:17.819662 kernel: Key type asymmetric registered
Jul 2 07:42:17.819672 kernel: Asymmetric key parser 'x509' registered
Jul 2 07:42:17.819678 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 07:42:17.819685 kernel: io scheduler mq-deadline registered
Jul 2 07:42:17.819692 kernel: io scheduler kyber registered
Jul 2 07:42:17.819699 kernel: io scheduler bfq registered
Jul 2 07:42:17.819706 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 07:42:17.819713 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 07:42:17.819720 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 07:42:17.819726 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 07:42:17.819734 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 07:42:17.819741 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 07:42:17.819748 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 07:42:17.819754 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 07:42:17.819761 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 07:42:17.819768 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 07:42:17.819841 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 07:42:17.819906 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 07:42:17.819972 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:42:17 UTC (1719906137)
Jul 2 07:42:17.820034 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 07:42:17.820043 kernel: NET: Registered PF_INET6 protocol family
Jul 2 07:42:17.820050 kernel: Segment Routing with IPv6
Jul 2 07:42:17.820057 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 07:42:17.820064 kernel: NET: Registered PF_PACKET protocol family
Jul 2 07:42:17.820071 kernel: Key type dns_resolver registered
Jul 2 07:42:17.820077 kernel: IPI shorthand broadcast: enabled
Jul 2 07:42:17.820084 kernel: sched_clock: Marking stable (466001556, 97043673)->(574501346, -11456117)
Jul 2 07:42:17.820093 kernel: registered taskstats version 1
Jul 2 07:42:17.820100 kernel: Loading compiled-in X.509 certificates
Jul 2 07:42:17.820107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 07:42:17.820113 kernel: Key type .fscrypt registered
Jul 2 07:42:17.820120 kernel: Key type fscrypt-provisioning registered
Jul 2 07:42:17.820127 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 07:42:17.820134 kernel: ima: Allocated hash algorithm: sha1
Jul 2 07:42:17.820140 kernel: ima: No architecture policies found
Jul 2 07:42:17.820147 kernel: clk: Disabling unused clocks
Jul 2 07:42:17.820155 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 07:42:17.820162 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 07:42:17.820169 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 07:42:17.820176 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 07:42:17.820182 kernel: Run /init as init process
Jul 2 07:42:17.820189 kernel: with arguments:
Jul 2 07:42:17.820196 kernel: /init
Jul 2 07:42:17.820211 kernel: with environment:
Jul 2 07:42:17.820219 kernel: HOME=/
Jul 2 07:42:17.820226 kernel: TERM=linux
Jul 2 07:42:17.820234 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 07:42:17.820243 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:42:17.820253 systemd[1]: Detected virtualization kvm.
Jul 2 07:42:17.820261 systemd[1]: Detected architecture x86-64.
Jul 2 07:42:17.820268 systemd[1]: Running in initrd.
Jul 2 07:42:17.820275 systemd[1]: No hostname configured, using default hostname.
Jul 2 07:42:17.820285 systemd[1]: Hostname set to .
Jul 2 07:42:17.820294 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 07:42:17.820302 systemd[1]: Queued start job for default target initrd.target.
Jul 2 07:42:17.820312 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 07:42:17.820319 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:42:17.820326 systemd[1]: Reached target paths.target.
Jul 2 07:42:17.820334 systemd[1]: Reached target slices.target.
Jul 2 07:42:17.820341 systemd[1]: Reached target swap.target.
Jul 2 07:42:17.820349 systemd[1]: Reached target timers.target.
Jul 2 07:42:17.820357 systemd[1]: Listening on iscsid.socket.
Jul 2 07:42:17.820377 systemd[1]: Listening on iscsiuio.socket.
Jul 2 07:42:17.820385 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 07:42:17.820393 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 07:42:17.820400 systemd[1]: Listening on systemd-journald.socket.
Jul 2 07:42:17.820408 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 07:42:17.820415 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 07:42:17.820424 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 07:42:17.820431 systemd[1]: Reached target sockets.target.
Jul 2 07:42:17.820439 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 07:42:17.820446 systemd[1]: Finished network-cleanup.service.
Jul 2 07:42:17.820454 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 07:42:17.820461 systemd[1]: Starting systemd-journald.service...
Jul 2 07:42:17.820470 systemd[1]: Starting systemd-modules-load.service...
Jul 2 07:42:17.820478 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:42:17.820485 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 07:42:17.820493 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 07:42:17.820500 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 07:42:17.820508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 07:42:17.820515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 07:42:17.820526 systemd-journald[197]: Journal started
Jul 2 07:42:17.820564 systemd-journald[197]: Runtime Journal (/run/log/journal/bd2655c43dfa46ee92d9fcecebfa536d) is 6.0M, max 48.5M, 42.5M free.
Jul 2 07:42:17.812159 systemd-modules-load[198]: Inserted module 'overlay'
Jul 2 07:42:17.856706 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 07:42:17.856730 kernel: Bridge firewalling registered
Jul 2 07:42:17.856740 systemd[1]: Started systemd-journald.service.
Jul 2 07:42:17.856753 kernel: audit: type=1130 audit(1719906137.850:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.833550 systemd-resolved[199]: Positive Trust Anchors:
Jul 2 07:42:17.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.833558 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:42:17.864852 kernel: audit: type=1130 audit(1719906137.857:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.864866 kernel: audit: type=1130 audit(1719906137.861:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.833591 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:42:17.868218 kernel: audit: type=1130 audit(1719906137.864:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.835678 systemd-resolved[199]: Defaulting to hostname 'linux'.
Jul 2 07:42:17.846837 systemd-modules-load[198]: Inserted module 'br_netfilter'
Jul 2 07:42:17.857689 systemd[1]: Started systemd-resolved.service.
Jul 2 07:42:17.861994 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 07:42:17.865136 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:42:17.869064 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 07:42:17.889248 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 07:42:17.893618 kernel: audit: type=1130 audit(1719906137.888:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.889967 systemd[1]: Starting dracut-cmdline.service...
Jul 2 07:42:17.898634 dracut-cmdline[215]: dracut-dracut-053
Jul 2 07:42:17.901030 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:42:17.906104 kernel: SCSI subsystem initialized
Jul 2 07:42:17.918560 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 07:42:17.918616 kernel: device-mapper: uevent: version 1.0.3
Jul 2 07:42:17.918631 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 07:42:17.921215 systemd-modules-load[198]: Inserted module 'dm_multipath'
Jul 2 07:42:17.922945 systemd[1]: Finished systemd-modules-load.service.
Jul 2 07:42:17.928732 kernel: audit: type=1130 audit(1719906137.923:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:17.924503 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:42:17.932956 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:42:17.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 07:42:17.937387 kernel: audit: type=1130 audit(1719906137.933:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:17.958388 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:42:17.974397 kernel: iscsi: registered transport (tcp) Jul 2 07:42:17.995388 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:42:17.995400 kernel: QLogic iSCSI HBA Driver Jul 2 07:42:18.021981 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:42:18.027037 kernel: audit: type=1130 audit(1719906138.022:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:18.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:18.023680 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:42:18.068386 kernel: raid6: avx2x4 gen() 31059 MB/s Jul 2 07:42:18.085382 kernel: raid6: avx2x4 xor() 8485 MB/s Jul 2 07:42:18.102381 kernel: raid6: avx2x2 gen() 32648 MB/s Jul 2 07:42:18.119380 kernel: raid6: avx2x2 xor() 19283 MB/s Jul 2 07:42:18.136383 kernel: raid6: avx2x1 gen() 26579 MB/s Jul 2 07:42:18.153396 kernel: raid6: avx2x1 xor() 15281 MB/s Jul 2 07:42:18.170394 kernel: raid6: sse2x4 gen() 14828 MB/s Jul 2 07:42:18.187390 kernel: raid6: sse2x4 xor() 7593 MB/s Jul 2 07:42:18.204390 kernel: raid6: sse2x2 gen() 16417 MB/s Jul 2 07:42:18.221386 kernel: raid6: sse2x2 xor() 9849 MB/s Jul 2 07:42:18.238386 kernel: raid6: sse2x1 gen() 12370 MB/s Jul 2 07:42:18.255790 kernel: raid6: sse2x1 xor() 7776 MB/s Jul 2 07:42:18.255811 kernel: raid6: using algorithm avx2x2 gen() 32648 MB/s Jul 2 07:42:18.255824 kernel: raid6: .... 
xor() 19283 MB/s, rmw enabled Jul 2 07:42:18.256521 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:42:18.268387 kernel: xor: automatically using best checksumming function avx Jul 2 07:42:18.356404 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:42:18.363850 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:42:18.368530 kernel: audit: type=1130 audit(1719906138.364:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:18.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:18.367000 audit: BPF prog-id=7 op=LOAD Jul 2 07:42:18.368000 audit: BPF prog-id=8 op=LOAD Jul 2 07:42:18.368829 systemd[1]: Starting systemd-udevd.service... Jul 2 07:42:18.380348 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 2 07:42:18.384032 systemd[1]: Started systemd-udevd.service. Jul 2 07:42:18.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:18.387158 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:42:18.394606 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jul 2 07:42:18.418087 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:42:18.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:18.420471 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:42:18.455329 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 2 07:42:18.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:18.485388 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 07:42:18.493659 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 07:42:18.500794 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 07:42:18.500806 kernel: GPT:9289727 != 19775487
Jul 2 07:42:18.500814 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 07:42:18.500826 kernel: GPT:9289727 != 19775487
Jul 2 07:42:18.500834 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 07:42:18.500842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:42:18.510381 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 07:42:18.512395 kernel: AES CTR mode by8 optimization enabled
Jul 2 07:42:18.512416 kernel: libata version 3.00 loaded.
Jul 2 07:42:18.515381 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 07:42:18.517381 kernel: scsi host0: ata_piix
Jul 2 07:42:18.517528 kernel: scsi host1: ata_piix
Jul 2 07:42:18.521089 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 07:42:18.521111 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 07:42:18.526553 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 07:42:18.559301 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jul 2 07:42:18.563354 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 07:42:18.566685 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 07:42:18.566742 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 07:42:18.580603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 07:42:18.582338 systemd[1]: Starting disk-uuid.service...
Jul 2 07:42:18.592111 disk-uuid[515]: Primary Header is updated.
Jul 2 07:42:18.592111 disk-uuid[515]: Secondary Entries is updated.
Jul 2 07:42:18.592111 disk-uuid[515]: Secondary Header is updated.
Jul 2 07:42:18.596387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:42:18.599383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:42:18.678449 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 07:42:18.680451 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 07:42:18.710614 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 07:42:18.710810 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 07:42:18.728412 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 07:42:19.600256 disk-uuid[516]: The operation has completed successfully.
Jul 2 07:42:19.601576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:42:19.619378 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 07:42:19.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.619467 systemd[1]: Finished disk-uuid.service.
Jul 2 07:42:19.631428 systemd[1]: Starting verity-setup.service...
Jul 2 07:42:19.644378 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 07:42:19.661753 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 07:42:19.663809 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 07:42:19.665976 systemd[1]: Finished verity-setup.service.
Jul 2 07:42:19.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.720398 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 07:42:19.720401 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 07:42:19.721231 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 07:42:19.721843 systemd[1]: Starting ignition-setup.service...
Jul 2 07:42:19.723092 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 07:42:19.731101 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 07:42:19.731145 kernel: BTRFS info (device vda6): using free space tree
Jul 2 07:42:19.731155 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 07:42:19.738414 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 07:42:19.745819 systemd[1]: Finished ignition-setup.service.
Jul 2 07:42:19.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.747295 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 07:42:19.781211 ignition[630]: Ignition 2.14.0
Jul 2 07:42:19.781221 ignition[630]: Stage: fetch-offline
Jul 2 07:42:19.781289 ignition[630]: no configs at "/usr/lib/ignition/base.d"
Jul 2 07:42:19.781297 ignition[630]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:42:19.781397 ignition[630]: parsed url from cmdline: ""
Jul 2 07:42:19.781401 ignition[630]: no config URL provided
Jul 2 07:42:19.781407 ignition[630]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 07:42:19.786747 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 07:42:19.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.788000 audit: BPF prog-id=9 op=LOAD
Jul 2 07:42:19.781412 ignition[630]: no config at "/usr/lib/ignition/user.ign"
Jul 2 07:42:19.781427 ignition[630]: op(1): [started] loading QEMU firmware config module
Jul 2 07:42:19.789789 systemd[1]: Starting systemd-networkd.service...
Jul 2 07:42:19.781431 ignition[630]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 07:42:19.784420 ignition[630]: op(1): [finished] loading QEMU firmware config module
Jul 2 07:42:19.828682 ignition[630]: parsing config with SHA512: dc0f6f80bcfef4a5f1c43beccf916a3274791f59f9573c7341256dd25517925332363294195ce8368c15a48b6a6fb712c84e2ae4db9ab5bf367fe8f9e9f3e0b0
Jul 2 07:42:19.835005 unknown[630]: fetched base config from "system"
Jul 2 07:42:19.835019 unknown[630]: fetched user config from "qemu"
Jul 2 07:42:19.835575 ignition[630]: fetch-offline: fetch-offline passed
Jul 2 07:42:19.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.836555 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 07:42:19.835632 ignition[630]: Ignition finished successfully
Jul 2 07:42:19.845827 systemd-networkd[709]: lo: Link UP
Jul 2 07:42:19.845836 systemd-networkd[709]: lo: Gained carrier
Jul 2 07:42:19.846200 systemd-networkd[709]: Enumeration completed
Jul 2 07:42:19.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.846269 systemd[1]: Started systemd-networkd.service.
Jul 2 07:42:19.846393 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 07:42:19.847245 systemd-networkd[709]: eth0: Link UP
Jul 2 07:42:19.847248 systemd-networkd[709]: eth0: Gained carrier
Jul 2 07:42:19.848693 systemd[1]: Reached target network.target.
Jul 2 07:42:19.850363 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 07:42:19.851003 systemd[1]: Starting ignition-kargs.service...
Jul 2 07:42:19.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.859401 ignition[711]: Ignition 2.14.0
Jul 2 07:42:19.852408 systemd[1]: Starting iscsiuio.service...
Jul 2 07:42:19.859406 ignition[711]: Stage: kargs
Jul 2 07:42:19.855908 systemd[1]: Started iscsiuio.service.
Jul 2 07:42:19.863182 iscsid[720]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 07:42:19.863182 iscsid[720]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Jul 2 07:42:19.863182 iscsid[720]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 2 07:42:19.863182 iscsid[720]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 07:42:19.863182 iscsid[720]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 07:42:19.863182 iscsid[720]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 07:42:19.863182 iscsid[720]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 07:42:19.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.859487 ignition[711]: no configs at "/usr/lib/ignition/base.d"
Jul 2 07:42:19.860023 systemd[1]: Starting iscsid.service...
Jul 2 07:42:19.859495 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:42:19.863653 systemd[1]: Started iscsid.service.
Jul 2 07:42:19.863823 ignition[711]: kargs: kargs passed
Jul 2 07:42:19.865574 systemd[1]: Finished ignition-kargs.service.
Jul 2 07:42:19.863863 ignition[711]: Ignition finished successfully
Jul 2 07:42:19.870060 systemd[1]: Starting dracut-initqueue.service...
Jul 2 07:42:19.872951 systemd[1]: Starting ignition-disks.service...
Jul 2 07:42:19.874808 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 07:42:19.880740 systemd[1]: Finished dracut-initqueue.service.
Jul 2 07:42:19.885568 ignition[722]: Ignition 2.14.0
Jul 2 07:42:19.885577 ignition[722]: Stage: disks
Jul 2 07:42:19.885663 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jul 2 07:42:19.885672 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:42:19.886705 ignition[722]: disks: disks passed
Jul 2 07:42:19.886741 ignition[722]: Ignition finished successfully
Jul 2 07:42:19.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.892212 systemd[1]: Finished ignition-disks.service.
Jul 2 07:42:19.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.893883 systemd[1]: Reached target initrd-root-device.target.
Jul 2 07:42:19.895623 systemd[1]: Reached target local-fs-pre.target.
Jul 2 07:42:19.897244 systemd[1]: Reached target local-fs.target.
Jul 2 07:42:19.898770 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 07:42:19.900399 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 07:42:19.902098 systemd[1]: Reached target remote-fs.target.
Jul 2 07:42:19.903662 systemd[1]: Reached target sysinit.target.
Jul 2 07:42:19.905119 systemd[1]: Reached target basic.target.
Jul 2 07:42:19.907354 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 07:42:19.914055 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 07:42:19.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.916204 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 07:42:19.925609 systemd-fsck[742]: ROOT: clean, 614/553520 files, 56020/553472 blocks
Jul 2 07:42:19.930397 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 07:42:19.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.933218 systemd[1]: Mounting sysroot.mount...
Jul 2 07:42:19.939385 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 07:42:19.939704 systemd[1]: Mounted sysroot.mount.
Jul 2 07:42:19.939801 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 07:42:19.942761 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 07:42:19.943070 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 07:42:19.943104 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 07:42:19.943123 systemd[1]: Reached target ignition-diskful.target.
Jul 2 07:42:19.950874 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 07:42:19.952271 systemd[1]: Starting initrd-setup-root.service...
Jul 2 07:42:19.957590 initrd-setup-root[752]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 07:42:19.961325 initrd-setup-root[760]: cut: /sysroot/etc/group: No such file or directory
Jul 2 07:42:19.964821 initrd-setup-root[768]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 07:42:19.968314 initrd-setup-root[776]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 07:42:19.991240 systemd[1]: Finished initrd-setup-root.service.
Jul 2 07:42:19.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:19.992792 systemd[1]: Starting ignition-mount.service...
Jul 2 07:42:19.994153 systemd[1]: Starting sysroot-boot.service...
Jul 2 07:42:19.997664 bash[793]: umount: /sysroot/usr/share/oem: not mounted.
Jul 2 07:42:20.005341 ignition[795]: INFO : Ignition 2.14.0
Jul 2 07:42:20.006387 ignition[795]: INFO : Stage: mount
Jul 2 07:42:20.006387 ignition[795]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 07:42:20.006387 ignition[795]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:42:20.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:20.010289 ignition[795]: INFO : mount: mount passed
Jul 2 07:42:20.010289 ignition[795]: INFO : Ignition finished successfully
Jul 2 07:42:20.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:20.007047 systemd[1]: Finished ignition-mount.service.
Jul 2 07:42:20.012050 systemd[1]: Finished sysroot-boot.service.
Jul 2 07:42:20.671868 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 07:42:20.680397 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Jul 2 07:42:20.680420 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 07:42:20.680429 kernel: BTRFS info (device vda6): using free space tree
Jul 2 07:42:20.682000 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 07:42:20.684745 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 07:42:20.686160 systemd[1]: Starting ignition-files.service...
Jul 2 07:42:20.698384 ignition[823]: INFO : Ignition 2.14.0
Jul 2 07:42:20.698384 ignition[823]: INFO : Stage: files
Jul 2 07:42:20.700265 ignition[823]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 07:42:20.700265 ignition[823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:42:20.700265 ignition[823]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 07:42:20.700265 ignition[823]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 07:42:20.700265 ignition[823]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 07:42:20.707146 ignition[823]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 07:42:20.707146 ignition[823]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 07:42:20.707146 ignition[823]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 07:42:20.707146 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:42:20.707146 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 07:42:20.702056 unknown[823]: wrote ssh authorized keys file for user: core
Jul 2 07:42:20.733503 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 07:42:20.791598 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:42:20.793783 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 07:42:20.793783 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 07:42:21.235767 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 07:42:21.325231 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 07:42:21.325231 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 07:42:21.328951 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 07:42:21.700500 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 07:42:21.849524 systemd-networkd[709]: eth0: Gained IPv6LL
Jul 2 07:42:22.061583 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 07:42:22.061583 ignition[823]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 07:42:22.066683 ignition[823]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 07:42:22.089226 ignition[823]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 07:42:22.089226 ignition[823]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 07:42:22.089226 ignition[823]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 07:42:22.089226 ignition[823]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 07:42:22.089226 ignition[823]: INFO : files: files passed
Jul 2 07:42:22.089226 ignition[823]: INFO : Ignition finished successfully
Jul 2 07:42:22.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:22.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:22.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:22.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:22.090030 systemd[1]: Finished ignition-files.service.
Jul 2 07:42:22.092871 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 07:42:22.094676 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 07:42:22.108647 initrd-setup-root-after-ignition[849]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 2 07:42:22.095485 systemd[1]: Starting ignition-quench.service...
Jul 2 07:42:22.112193 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 07:42:22.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:22.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:42:22.097784 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 07:42:22.097855 systemd[1]: Finished ignition-quench.service.
Jul 2 07:42:22.100873 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 07:42:22.100963 systemd[1]: Reached target ignition-complete.target.
Jul 2 07:42:22.101595 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 07:42:22.111059 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:42:22.111120 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:42:22.112284 systemd[1]: Reached target initrd-fs.target. Jul 2 07:42:22.114539 systemd[1]: Reached target initrd.target. Jul 2 07:42:22.115295 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:42:22.115845 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:42:22.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.124557 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:42:22.126748 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:42:22.134780 systemd[1]: Stopped target network.target. Jul 2 07:42:22.135652 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:42:22.137130 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:42:22.138822 systemd[1]: Stopped target timers.target. Jul 2 07:42:22.140400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:42:22.147160 kernel: kauditd_printk_skb: 30 callbacks suppressed Jul 2 07:42:22.147182 kernel: audit: type=1131 audit(1719906142.141:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.140499 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:42:22.141981 systemd[1]: Stopped target initrd.target. Jul 2 07:42:22.147228 systemd[1]: Stopped target basic.target. Jul 2 07:42:22.148796 systemd[1]: Stopped target ignition-complete.target. 
Jul 2 07:42:22.150377 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:42:22.151928 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:42:22.153650 systemd[1]: Stopped target remote-fs.target. Jul 2 07:42:22.155258 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:42:22.156935 systemd[1]: Stopped target sysinit.target. Jul 2 07:42:22.158446 systemd[1]: Stopped target local-fs.target. Jul 2 07:42:22.159992 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:42:22.161542 systemd[1]: Stopped target swap.target. Jul 2 07:42:22.168880 kernel: audit: type=1131 audit(1719906142.164:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.162974 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:42:22.163054 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:42:22.175123 kernel: audit: type=1131 audit(1719906142.170:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.164629 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:42:22.179549 kernel: audit: type=1131 audit(1719906142.174:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:22.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.168910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:42:22.168989 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:42:22.170763 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:42:22.170841 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:42:22.175223 systemd[1]: Stopped target paths.target. Jul 2 07:42:22.179574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:42:22.183403 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:42:22.184981 systemd[1]: Stopped target slices.target. Jul 2 07:42:22.186770 systemd[1]: Stopped target sockets.target. Jul 2 07:42:22.188394 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:42:22.188454 systemd[1]: Closed iscsid.socket. Jul 2 07:42:22.197388 kernel: audit: type=1131 audit(1719906142.192:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.189777 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:42:22.201826 kernel: audit: type=1131 audit(1719906142.197:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:22.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.189837 systemd[1]: Closed iscsiuio.socket. Jul 2 07:42:22.191199 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:42:22.191281 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:42:22.206196 ignition[864]: INFO : Ignition 2.14.0 Jul 2 07:42:22.206196 ignition[864]: INFO : Stage: umount Jul 2 07:42:22.206196 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:42:22.206196 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:42:22.206196 ignition[864]: INFO : umount: umount passed Jul 2 07:42:22.206196 ignition[864]: INFO : Ignition finished successfully Jul 2 07:42:22.224082 kernel: audit: type=1131 audit(1719906142.209:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.224136 kernel: audit: type=1131 audit(1719906142.214:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.224165 kernel: audit: type=1131 audit(1719906142.219:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:22.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.193013 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:42:22.193090 systemd[1]: Stopped ignition-files.service. Jul 2 07:42:22.198042 systemd[1]: Stopping ignition-mount.service... Jul 2 07:42:22.230746 kernel: audit: type=1131 audit(1719906142.226:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.202662 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:42:22.204726 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:42:22.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.206355 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:42:22.207812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:42:22.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:22.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.234000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:42:22.207929 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:42:22.209402 systemd-networkd[709]: eth0: DHCPv6 lease lost Jul 2 07:42:22.237000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:42:22.209621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:42:22.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.209725 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:42:22.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.216552 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:42:22.216627 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:42:22.225130 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:42:22.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.226125 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:42:22.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:42:22.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.226193 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:42:22.231119 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:42:22.231179 systemd[1]: Stopped ignition-mount.service. Jul 2 07:42:22.233299 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:42:22.233358 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:42:22.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.236154 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:42:22.236182 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:42:22.237678 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:42:22.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.237706 systemd[1]: Stopped ignition-disks.service. Jul 2 07:42:22.239286 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:42:22.239315 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:42:22.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.241016 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jul 2 07:42:22.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.241042 systemd[1]: Stopped ignition-setup.service. Jul 2 07:42:22.243115 systemd[1]: Stopping network-cleanup.service... Jul 2 07:42:22.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.245155 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:42:22.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.245193 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:42:22.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.246972 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:42:22.247004 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:42:22.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.248436 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:42:22.248473 systemd[1]: Stopped systemd-modules-load.service. 
Jul 2 07:42:22.249783 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:42:22.252028 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:42:22.254306 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:42:22.254419 systemd[1]: Stopped network-cleanup.service. Jul 2 07:42:22.256852 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:42:22.256959 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:42:22.258594 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:42:22.258624 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:42:22.260984 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:42:22.261031 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:42:22.261970 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:42:22.262004 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:42:22.263733 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:42:22.263762 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:42:22.265316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:42:22.265343 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:42:22.267622 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:42:22.268706 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:42:22.268743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:42:22.270473 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:42:22.270503 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:42:22.272263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:42:22.272292 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:42:22.273788 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jul 2 07:42:22.274104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:42:22.274163 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:42:22.312188 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:42:22.312257 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:42:22.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.313933 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:42:22.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:22.315470 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:42:22.315502 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:42:22.316054 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:42:22.331751 systemd[1]: Switching root. Jul 2 07:42:22.348878 iscsid[720]: iscsid shutting down. Jul 2 07:42:22.349623 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Jul 2 07:42:22.349651 systemd-journald[197]: Journal stopped Jul 2 07:42:25.032539 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:42:25.032587 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 07:42:25.032601 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:42:25.032615 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:42:25.032624 kernel: SELinux: policy capability open_perms=1 Jul 2 07:42:25.032633 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:42:25.032643 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:42:25.032652 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:42:25.032661 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:42:25.032670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:42:25.032681 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:42:25.032692 systemd[1]: Successfully loaded SELinux policy in 36.706ms. Jul 2 07:42:25.032714 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.938ms. Jul 2 07:42:25.032726 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:42:25.032736 systemd[1]: Detected virtualization kvm. Jul 2 07:42:25.032746 systemd[1]: Detected architecture x86-64. Jul 2 07:42:25.032756 systemd[1]: Detected first boot. Jul 2 07:42:25.032769 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:42:25.032778 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:42:25.032788 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:42:25.032798 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 07:42:25.032809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:42:25.032823 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:42:25.032833 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:42:25.032843 systemd[1]: Stopped iscsiuio.service. Jul 2 07:42:25.032854 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:42:25.032864 systemd[1]: Stopped iscsid.service. Jul 2 07:42:25.032874 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:42:25.032884 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:42:25.032894 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:42:25.032904 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:42:25.032915 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:42:25.032925 systemd[1]: Created slice system-getty.slice. Jul 2 07:42:25.032935 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:42:25.032945 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:42:25.032955 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:42:25.032965 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:42:25.032975 systemd[1]: Created slice user.slice. Jul 2 07:42:25.032985 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:42:25.032995 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:42:25.033007 systemd[1]: Set up automount boot.automount. Jul 2 07:42:25.033018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:42:25.033028 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:42:25.033037 systemd[1]: Stopped target initrd-fs.target. 
Jul 2 07:42:25.033048 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:42:25.033057 systemd[1]: Reached target integritysetup.target. Jul 2 07:42:25.033068 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:42:25.033078 systemd[1]: Reached target remote-fs.target. Jul 2 07:42:25.033091 systemd[1]: Reached target slices.target. Jul 2 07:42:25.033101 systemd[1]: Reached target swap.target. Jul 2 07:42:25.033111 systemd[1]: Reached target torcx.target. Jul 2 07:42:25.033121 systemd[1]: Reached target veritysetup.target. Jul 2 07:42:25.033133 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:42:25.033143 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:42:25.033153 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:42:25.033164 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:42:25.033173 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:42:25.033185 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:42:25.033195 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:42:25.033205 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:42:25.033215 systemd[1]: Mounting media.mount... Jul 2 07:42:25.033228 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:25.033239 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:42:25.033249 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:42:25.033259 systemd[1]: Mounting tmp.mount... Jul 2 07:42:25.033269 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:42:25.033280 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:42:25.033290 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:42:25.033300 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:42:25.033310 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:42:25.033321 systemd[1]: Starting modprobe@drm.service... 
Jul 2 07:42:25.033331 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:42:25.033342 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:42:25.033351 systemd[1]: Starting modprobe@loop.service... Jul 2 07:42:25.033372 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:42:25.033386 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:42:25.033403 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:42:25.033414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:42:25.033424 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:42:25.033434 kernel: fuse: init (API version 7.34) Jul 2 07:42:25.033443 systemd[1]: Stopped systemd-journald.service. Jul 2 07:42:25.033453 kernel: loop: module loaded Jul 2 07:42:25.033463 systemd[1]: Starting systemd-journald.service... Jul 2 07:42:25.033473 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:42:25.033485 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:42:25.033495 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:42:25.033505 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:42:25.033515 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:42:25.033526 systemd[1]: Stopped verity-setup.service. Jul 2 07:42:25.034242 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:25.034259 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:42:25.034272 systemd-journald[979]: Journal started Jul 2 07:42:25.034309 systemd-journald[979]: Runtime Journal (/run/log/journal/bd2655c43dfa46ee92d9fcecebfa536d) is 6.0M, max 48.5M, 42.5M free. 
Jul 2 07:42:22.403000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:42:22.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:42:22.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:42:22.830000 audit: BPF prog-id=10 op=LOAD Jul 2 07:42:22.830000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:42:22.830000 audit: BPF prog-id=11 op=LOAD Jul 2 07:42:22.830000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:42:22.858000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:42:22.858000 audit[898]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:42:22.858000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:42:22.860000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:42:22.860000 audit[898]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:42:22.860000 audit: CWD cwd="/" Jul 2 07:42:22.860000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:22.860000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:22.860000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:42:24.897000 audit: BPF prog-id=12 op=LOAD Jul 2 07:42:24.897000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:42:24.897000 audit: BPF prog-id=13 op=LOAD Jul 2 07:42:24.897000 audit: BPF prog-id=14 op=LOAD Jul 2 07:42:24.897000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:42:24.897000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:42:24.897000 audit: BPF prog-id=15 op=LOAD Jul 2 07:42:24.897000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:42:24.897000 audit: BPF prog-id=16 op=LOAD Jul 2 07:42:24.897000 audit: BPF prog-id=17 op=LOAD Jul 2 07:42:24.898000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:42:24.898000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:42:24.899000 audit: BPF prog-id=18 op=LOAD Jul 2 07:42:24.899000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:42:24.899000 audit: BPF prog-id=19 op=LOAD Jul 2 07:42:24.899000 audit: BPF prog-id=20 op=LOAD Jul 2 07:42:24.899000 audit: BPF prog-id=16 op=UNLOAD Jul 
2 07:42:24.899000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:42:24.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:24.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:24.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.035098 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:42:24.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:24.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:24.910000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:42:25.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:25.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.011000 audit: BPF prog-id=21 op=LOAD Jul 2 07:42:25.011000 audit: BPF prog-id=22 op=LOAD Jul 2 07:42:25.011000 audit: BPF prog-id=23 op=LOAD Jul 2 07:42:25.011000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:42:25.011000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:42:25.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.031000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:42:25.031000 audit[979]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffeeba1f8b0 a2=4000 a3=7ffeeba1f94c items=0 ppid=1 pid=979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:42:25.031000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:42:24.895925 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 07:42:22.857209 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:42:24.895936 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:42:22.857477 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:42:24.900191 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:42:22.857493 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:42:22.857520 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:42:22.857528 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:42:22.857553 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:42:25.037598 systemd[1]: Started systemd-journald.service. Jul 2 07:42:25.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:22.857563 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:42:22.857727 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:42:22.857755 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:42:25.037874 systemd[1]: Mounted media.mount. Jul 2 07:42:22.857765 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:42:22.858381 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:42:25.038674 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:42:22.858416 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:42:25.039588 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:42:22.858434 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:42:25.040528 systemd[1]: Mounted tmp.mount. 
Jul 2 07:42:22.858458 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:42:22.858477 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:42:22.858489 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:42:24.646281 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:42:24.646545 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:42:24.646637 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:42:25.041648 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:42:25.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:24.646779 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:42:24.646823 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:42:24.646874 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-07-02T07:42:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:42:25.042919 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:42:25.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.043973 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:42:25.044184 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:42:25.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:25.045257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:42:25.045467 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:42:25.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.046552 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:42:25.046765 systemd[1]: Finished modprobe@drm.service. Jul 2 07:42:25.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.047773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:42:25.047973 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:42:25.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.049047 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 2 07:42:25.049262 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:42:25.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.050280 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:42:25.050596 systemd[1]: Finished modprobe@loop.service. Jul 2 07:42:25.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.051719 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:42:25.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.052893 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:42:25.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.054129 systemd[1]: Finished systemd-remount-fs.service. 
Jul 2 07:42:25.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.055430 systemd[1]: Reached target network-pre.target. Jul 2 07:42:25.057542 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:42:25.059343 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:42:25.060109 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:42:25.061592 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:42:25.063315 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:42:25.064247 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:42:25.065184 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:42:25.066030 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:42:25.067056 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:42:25.068819 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:42:25.070060 systemd-journald[979]: Time spent on flushing to /var/log/journal/bd2655c43dfa46ee92d9fcecebfa536d is 16.259ms for 1106 entries. Jul 2 07:42:25.070060 systemd-journald[979]: System Journal (/var/log/journal/bd2655c43dfa46ee92d9fcecebfa536d) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:42:25.100928 systemd-journald[979]: Received client request to flush runtime journal. Jul 2 07:42:25.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:25.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.072047 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:42:25.073135 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:42:25.103426 udevadm[1004]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:42:25.077015 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:42:25.078135 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:42:25.083172 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:42:25.084355 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:42:25.085423 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:42:25.087269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:42:25.089772 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:42:25.101837 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:42:25.109829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 2 07:42:25.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.481581 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:42:25.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.482000 audit: BPF prog-id=24 op=LOAD Jul 2 07:42:25.482000 audit: BPF prog-id=25 op=LOAD Jul 2 07:42:25.482000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:42:25.482000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:42:25.483778 systemd[1]: Starting systemd-udevd.service... Jul 2 07:42:25.498834 systemd-udevd[1006]: Using default interface naming scheme 'v252'. Jul 2 07:42:25.510228 systemd[1]: Started systemd-udevd.service. Jul 2 07:42:25.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.511000 audit: BPF prog-id=26 op=LOAD Jul 2 07:42:25.513980 systemd[1]: Starting systemd-networkd.service... Jul 2 07:42:25.520000 audit: BPF prog-id=27 op=LOAD Jul 2 07:42:25.520000 audit: BPF prog-id=28 op=LOAD Jul 2 07:42:25.520000 audit: BPF prog-id=29 op=LOAD Jul 2 07:42:25.521305 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:42:25.537772 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:42:25.553588 systemd[1]: Started systemd-userdbd.service. Jul 2 07:42:25.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:25.559684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:42:25.565471 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:42:25.570407 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:42:25.579000 audit[1015]: AVC avc: denied { confidentiality } for pid=1015 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:42:25.579000 audit[1015]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5587568ef530 a1=3207c a2=7fcb11859bc5 a3=5 items=108 ppid=1006 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:42:25.579000 audit: CWD cwd="/" Jul 2 07:42:25.579000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=1 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=2 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=3 name=(null) inode=14703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=4 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: 
PATH item=5 name=(null) inode=14704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=6 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=7 name=(null) inode=14705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=8 name=(null) inode=14705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=9 name=(null) inode=14706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=10 name=(null) inode=14705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=11 name=(null) inode=14707 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=12 name=(null) inode=14705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=13 name=(null) inode=14708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=14 name=(null) inode=14705 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=15 name=(null) inode=14709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=16 name=(null) inode=14705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=17 name=(null) inode=14710 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=18 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=19 name=(null) inode=14711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=20 name=(null) inode=14711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=21 name=(null) inode=14712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=22 name=(null) inode=14711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=23 name=(null) inode=14713 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=24 name=(null) inode=14711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=25 name=(null) inode=14714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=26 name=(null) inode=14711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=27 name=(null) inode=14715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=28 name=(null) inode=14711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=29 name=(null) inode=14716 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=30 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=31 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=32 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=33 name=(null) inode=14718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=34 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=35 name=(null) inode=14719 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=36 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=37 name=(null) inode=14720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=38 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=39 name=(null) inode=14721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=40 name=(null) inode=14717 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=41 name=(null) inode=14722 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=42 name=(null) inode=14702 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=43 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=44 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=45 name=(null) inode=14724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=46 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=47 name=(null) inode=14725 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=48 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=49 name=(null) inode=14726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=50 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH 
item=51 name=(null) inode=14727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=52 name=(null) inode=14723 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=53 name=(null) inode=14728 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=55 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=56 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=57 name=(null) inode=14730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=58 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=59 name=(null) inode=14731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=60 name=(null) inode=14729 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=61 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=62 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=63 name=(null) inode=14733 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=64 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=65 name=(null) inode=14734 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=66 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=67 name=(null) inode=14735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=68 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=69 name=(null) inode=14736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=70 name=(null) inode=14732 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=71 name=(null) inode=14737 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=72 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=73 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=74 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=75 name=(null) inode=14739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=76 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=77 name=(null) inode=14740 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=78 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=79 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=80 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=81 name=(null) inode=14742 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=82 name=(null) inode=14738 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=83 name=(null) inode=14743 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=84 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=85 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=86 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=87 name=(null) inode=14745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 07:42:25.579000 audit: PATH item=88 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=89 name=(null) inode=14746 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=90 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=91 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=92 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=93 name=(null) inode=14748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=94 name=(null) inode=14744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=95 name=(null) inode=14749 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=96 name=(null) inode=14729 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=97 
name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=98 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=99 name=(null) inode=14751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=100 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=101 name=(null) inode=14752 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=102 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=103 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=104 name=(null) inode=14750 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=105 name=(null) inode=14754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=106 name=(null) inode=14750 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PATH item=107 name=(null) inode=14755 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:42:25.579000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:42:25.606584 systemd-networkd[1016]: lo: Link UP Jul 2 07:42:25.609550 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 07:42:25.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.606595 systemd-networkd[1016]: lo: Gained carrier Jul 2 07:42:25.607043 systemd-networkd[1016]: Enumeration completed Jul 2 07:42:25.607157 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:42:25.607536 systemd[1]: Started systemd-networkd.service. Jul 2 07:42:25.609708 systemd-networkd[1016]: eth0: Link UP Jul 2 07:42:25.609712 systemd-networkd[1016]: eth0: Gained carrier Jul 2 07:42:25.622390 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:42:25.625396 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:42:25.627477 systemd-networkd[1016]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:42:25.671403 kernel: kvm: Nested Virtualization enabled Jul 2 07:42:25.671499 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:42:25.671514 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:42:25.672657 kernel: SVM: Virtual GIF supported Jul 2 07:42:25.687389 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:42:25.704736 systemd[1]: Finished systemd-udev-settle.service. 
Jul 2 07:42:25.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.706792 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:42:25.714161 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:42:25.737001 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:42:25.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.737958 systemd[1]: Reached target cryptsetup.target. Jul 2 07:42:25.739592 systemd[1]: Starting lvm2-activation.service... Jul 2 07:42:25.742640 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:42:25.769155 systemd[1]: Finished lvm2-activation.service. Jul 2 07:42:25.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.770049 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:42:25.770925 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:42:25.770947 systemd[1]: Reached target local-fs.target. Jul 2 07:42:25.771731 systemd[1]: Reached target machines.target. Jul 2 07:42:25.773362 systemd[1]: Starting ldconfig.service... Jul 2 07:42:25.774297 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 07:42:25.774331 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:42:25.775122 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:42:25.776671 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:42:25.778532 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:42:25.780542 systemd[1]: Starting systemd-sysext.service... Jul 2 07:42:25.781616 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl) Jul 2 07:42:25.782407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:42:25.788676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:42:25.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.791647 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:42:25.794646 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:42:25.794759 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:42:25.805394 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 07:42:25.811504 systemd-fsck[1052]: fsck.fat 4.2 (2021-01-31) Jul 2 07:42:25.811504 systemd-fsck[1052]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 07:42:25.812990 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:42:25.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:25.816032 systemd[1]: Mounting boot.mount... 
Jul 2 07:42:25.834591 systemd[1]: Mounted boot.mount. Jul 2 07:42:26.028393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:42:26.040461 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:42:26.041175 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:42:26.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.042483 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:42:26.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.049386 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 07:42:26.052483 (sd-sysext)[1060]: Using extensions 'kubernetes'. Jul 2 07:42:26.052774 (sd-sysext)[1060]: Merged extensions into '/usr'. Jul 2 07:42:26.068906 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:26.070334 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:42:26.071255 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.072495 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:42:26.074089 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:42:26.076039 systemd[1]: Starting modprobe@loop.service... Jul 2 07:42:26.076914 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.077044 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 07:42:26.077141 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:26.079414 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:42:26.080453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:42:26.080557 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:42:26.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.081708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:42:26.081802 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:42:26.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.082984 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:42:26.083082 systemd[1]: Finished modprobe@loop.service. Jul 2 07:42:26.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:42:26.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.084220 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:42:26.084315 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.085275 systemd[1]: Finished systemd-sysext.service. Jul 2 07:42:26.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.087024 systemd[1]: Starting ensure-sysext.service... Jul 2 07:42:26.088461 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:42:26.092683 systemd[1]: Reloading. Jul 2 07:42:26.093742 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:42:26.098264 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:42:26.098881 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:42:26.100197 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 07:42:26.145074 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-07-02T07:42:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:42:26.145098 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-07-02T07:42:26Z" level=info msg="torcx already run" Jul 2 07:42:26.218040 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:42:26.218057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:42:26.234455 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 07:42:26.283000 audit: BPF prog-id=30 op=LOAD Jul 2 07:42:26.283000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:42:26.284000 audit: BPF prog-id=31 op=LOAD Jul 2 07:42:26.284000 audit: BPF prog-id=32 op=LOAD Jul 2 07:42:26.284000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:42:26.284000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:42:26.284000 audit: BPF prog-id=33 op=LOAD Jul 2 07:42:26.284000 audit: BPF prog-id=34 op=LOAD Jul 2 07:42:26.284000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:42:26.284000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:42:26.284000 audit: BPF prog-id=35 op=LOAD Jul 2 07:42:26.284000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:42:26.286000 audit: BPF prog-id=36 op=LOAD Jul 2 07:42:26.286000 audit: BPF prog-id=27 op=UNLOAD Jul 2 07:42:26.286000 audit: BPF prog-id=37 op=LOAD Jul 2 07:42:26.286000 audit: BPF prog-id=38 op=LOAD Jul 2 07:42:26.286000 audit: BPF prog-id=28 op=UNLOAD Jul 2 07:42:26.286000 audit: BPF prog-id=29 op=UNLOAD Jul 2 07:42:26.289512 systemd[1]: Finished ldconfig.service. Jul 2 07:42:26.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.291431 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:42:26.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.294778 systemd[1]: Starting audit-rules.service... Jul 2 07:42:26.296530 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:42:26.298277 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:42:26.299000 audit: BPF prog-id=39 op=LOAD Jul 2 07:42:26.300995 systemd[1]: Starting systemd-resolved.service... 
Jul 2 07:42:26.302000 audit: BPF prog-id=40 op=LOAD Jul 2 07:42:26.303101 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:42:26.304891 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:42:26.306576 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:42:26.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.309000 audit[1140]: SYSTEM_BOOT pid=1140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.314311 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:26.314728 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.316461 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:42:26.319081 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:42:26.321097 systemd[1]: Starting modprobe@loop.service... Jul 2 07:42:26.322182 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.322317 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:42:26.322460 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:42:26.322629 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:26.323969 systemd[1]: Finished systemd-journal-catalog-update.service. 
Jul 2 07:42:26.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.325778 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:42:26.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.327265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:42:26.327439 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:42:26.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.329028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:42:26.329146 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:42:26.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.330572 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:42:26.330688 systemd[1]: Finished modprobe@loop.service. 
Jul 2 07:42:26.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:42:26.332000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:42:26.332000 audit[1152]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6af43880 a2=420 a3=0 items=0 ppid=1129 pid=1152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:42:26.332000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:42:26.333409 augenrules[1152]: No rules Jul 2 07:42:26.334274 systemd[1]: Finished audit-rules.service. Jul 2 07:42:26.337589 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:26.337780 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.338974 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:42:26.340594 systemd[1]: Starting modprobe@drm.service... Jul 2 07:42:26.342313 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:42:26.344037 systemd[1]: Starting modprobe@loop.service... Jul 2 07:42:26.344957 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 07:42:26.345071 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:42:26.346009 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:42:26.348075 systemd[1]: Starting systemd-update-done.service... Jul 2 07:42:26.349013 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:42:26.349119 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:42:26.350344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:42:26.350550 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:42:26.352065 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:42:26.352161 systemd[1]: Finished modprobe@drm.service. Jul 2 07:42:26.353327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:42:26.353448 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:42:26.354676 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:42:26.354774 systemd[1]: Finished modprobe@loop.service. Jul 2 07:42:26.841793 systemd-timesyncd[1137]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:42:26.841808 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:42:26.841835 systemd-timesyncd[1137]: Initial clock synchronization to Tue 2024-07-02 07:42:26.841705 UTC. Jul 2 07:42:26.843430 systemd[1]: Finished systemd-update-done.service. Jul 2 07:42:26.845187 systemd[1]: Reached target time-set.target. Jul 2 07:42:26.846155 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:42:26.846201 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 2 07:42:26.846520 systemd[1]: Finished ensure-sysext.service. Jul 2 07:42:26.856762 systemd-resolved[1136]: Positive Trust Anchors: Jul 2 07:42:26.856779 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:42:26.856812 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:42:26.863960 systemd-resolved[1136]: Defaulting to hostname 'linux'. Jul 2 07:42:26.865495 systemd[1]: Started systemd-resolved.service. Jul 2 07:42:26.866481 systemd[1]: Reached target network.target. Jul 2 07:42:26.867270 systemd[1]: Reached target nss-lookup.target. Jul 2 07:42:26.868092 systemd[1]: Reached target sysinit.target. Jul 2 07:42:26.868910 systemd[1]: Started motdgen.path. Jul 2 07:42:26.869610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:42:26.870776 systemd[1]: Started logrotate.timer. Jul 2 07:42:26.871563 systemd[1]: Started mdadm.timer. Jul 2 07:42:26.872236 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:42:26.873079 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:42:26.873103 systemd[1]: Reached target paths.target. Jul 2 07:42:26.873895 systemd[1]: Reached target timers.target. Jul 2 07:42:26.875040 systemd[1]: Listening on dbus.socket. Jul 2 07:42:26.876822 systemd[1]: Starting docker.socket... Jul 2 07:42:26.879682 systemd[1]: Listening on sshd.socket. 
Jul 2 07:42:26.880607 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:42:26.880933 systemd[1]: Listening on docker.socket. Jul 2 07:42:26.881868 systemd[1]: Reached target sockets.target. Jul 2 07:42:26.882660 systemd[1]: Reached target basic.target. Jul 2 07:42:26.883441 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.883461 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:42:26.884217 systemd[1]: Starting containerd.service... Jul 2 07:42:26.885694 systemd[1]: Starting dbus.service... Jul 2 07:42:26.887015 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:42:26.888659 systemd[1]: Starting extend-filesystems.service... Jul 2 07:42:26.890201 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:42:26.891295 jq[1169]: false Jul 2 07:42:26.891285 systemd[1]: Starting motdgen.service... Jul 2 07:42:26.892757 systemd[1]: Starting prepare-helm.service... Jul 2 07:42:26.894688 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:42:26.896406 systemd[1]: Starting sshd-keygen.service... Jul 2 07:42:26.899034 systemd[1]: Starting systemd-logind.service... Jul 2 07:42:26.899775 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:42:26.899820 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:42:26.900141 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jul 2 07:42:26.900688 systemd[1]: Starting update-engine.service... Jul 2 07:42:26.902267 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:42:26.904815 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:42:26.904992 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:42:26.905850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:42:26.905976 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:42:26.910638 jq[1183]: true Jul 2 07:42:26.911167 tar[1188]: linux-amd64/helm Jul 2 07:42:26.915936 jq[1191]: true Jul 2 07:42:26.924947 dbus-daemon[1168]: [system] SELinux support is enabled Jul 2 07:42:26.925170 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:42:26.925312 systemd[1]: Finished motdgen.service. Jul 2 07:42:26.926400 systemd[1]: Started dbus.service. Jul 2 07:42:26.928870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:42:26.928894 systemd[1]: Reached target system-config.target. Jul 2 07:42:26.929908 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:42:26.929928 systemd[1]: Reached target user-config.target. 
Jul 2 07:42:26.933532 extend-filesystems[1170]: Found loop1 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found sr0 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda1 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda2 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda3 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found usr Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda4 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda6 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda7 Jul 2 07:42:26.933532 extend-filesystems[1170]: Found vda9 Jul 2 07:42:26.933532 extend-filesystems[1170]: Checking size of /dev/vda9 Jul 2 07:42:26.950213 env[1189]: time="2024-07-02T07:42:26.949204470Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:42:26.950394 update_engine[1181]: I0702 07:42:26.949537 1181 main.cc:92] Flatcar Update Engine starting Jul 2 07:42:26.954176 systemd[1]: Started update-engine.service. Jul 2 07:42:26.954420 update_engine[1181]: I0702 07:42:26.954391 1181 update_check_scheduler.cc:74] Next update check in 2m30s Jul 2 07:42:26.956470 systemd[1]: Started locksmithd.service. Jul 2 07:42:26.962414 systemd-logind[1178]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:42:26.962436 systemd-logind[1178]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:42:26.965148 systemd-logind[1178]: New seat seat0. Jul 2 07:42:26.966865 systemd[1]: Started systemd-logind.service. Jul 2 07:42:26.968924 extend-filesystems[1170]: Resized partition /dev/vda9 Jul 2 07:42:26.975655 extend-filesystems[1223]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:42:26.982247 env[1189]: time="2024-07-02T07:42:26.982206872Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 2 07:42:26.982342 env[1189]: time="2024-07-02T07:42:26.982324923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:42:26.983427 env[1189]: time="2024-07-02T07:42:26.983395541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:42:26.983427 env[1189]: time="2024-07-02T07:42:26.983422351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:42:26.983612 env[1189]: time="2024-07-02T07:42:26.983584816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:42:26.983612 env[1189]: time="2024-07-02T07:42:26.983604483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:42:26.983684 env[1189]: time="2024-07-02T07:42:26.983614862Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:42:26.983684 env[1189]: time="2024-07-02T07:42:26.983624129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:42:26.983684 env[1189]: time="2024-07-02T07:42:26.983679453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:42:26.984236 env[1189]: time="2024-07-02T07:42:26.984211962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:42:26.984342 env[1189]: time="2024-07-02T07:42:26.984317650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:42:26.984342 env[1189]: time="2024-07-02T07:42:26.984337096Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:42:26.984403 env[1189]: time="2024-07-02T07:42:26.984375288Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:42:26.984403 env[1189]: time="2024-07-02T07:42:26.984386038Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:42:26.992096 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:42:26.993420 bash[1217]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:42:26.994542 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:42:27.013318 env[1189]: time="2024-07-02T07:42:27.013269786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:42:27.013318 env[1189]: time="2024-07-02T07:42:27.013321232Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013333946Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013369663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013382056Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013399880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013410650Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013425698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013437420Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013449202Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013463 env[1189]: time="2024-07-02T07:42:27.013461245Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.013630 env[1189]: time="2024-07-02T07:42:27.013472686Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:42:27.013630 env[1189]: time="2024-07-02T07:42:27.013578264Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:42:27.013668 env[1189]: time="2024-07-02T07:42:27.013637996Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:42:27.013976 env[1189]: time="2024-07-02T07:42:27.013926577Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:42:27.014025 env[1189]: time="2024-07-02T07:42:27.013992160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 2 07:42:27.014025 env[1189]: time="2024-07-02T07:42:27.014009783Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:42:27.014122 env[1189]: time="2024-07-02T07:42:27.014092599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014164 env[1189]: time="2024-07-02T07:42:27.014120551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014164 env[1189]: time="2024-07-02T07:42:27.014146931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014203 env[1189]: time="2024-07-02T07:42:27.014162099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014203 env[1189]: time="2024-07-02T07:42:27.014178710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014203 env[1189]: time="2024-07-02T07:42:27.014194981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014266 env[1189]: time="2024-07-02T07:42:27.014209929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014266 env[1189]: time="2024-07-02T07:42:27.014225438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014266 env[1189]: time="2024-07-02T07:42:27.014245265Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:42:27.014445 env[1189]: time="2024-07-02T07:42:27.014418239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 2 07:42:27.014488 env[1189]: time="2024-07-02T07:42:27.014445450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014488 env[1189]: time="2024-07-02T07:42:27.014461901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:42:27.014488 env[1189]: time="2024-07-02T07:42:27.014477661Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:42:27.014546 env[1189]: time="2024-07-02T07:42:27.014498670Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:42:27.014546 env[1189]: time="2024-07-02T07:42:27.014514320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:42:27.014546 env[1189]: time="2024-07-02T07:42:27.014534878Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:42:27.014609 env[1189]: time="2024-07-02T07:42:27.014573090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 07:42:27.014865 env[1189]: time="2024-07-02T07:42:27.014798072Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:42:27.015415 env[1189]: time="2024-07-02T07:42:27.014867372Z" level=info msg="Connect containerd service" Jul 2 07:42:27.015415 env[1189]: time="2024-07-02T07:42:27.014907727Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:42:27.015522 env[1189]: time="2024-07-02T07:42:27.015489649Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:42:27.015752 env[1189]: time="2024-07-02T07:42:27.015726663Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:42:27.015795 env[1189]: time="2024-07-02T07:42:27.015781877Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:42:27.015856 env[1189]: time="2024-07-02T07:42:27.015832542Z" level=info msg="containerd successfully booted in 0.071245s" Jul 2 07:42:27.015902 systemd[1]: Started containerd.service. 
Jul 2 07:42:27.017125 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:42:27.037322 env[1189]: time="2024-07-02T07:42:27.017107793Z" level=info msg="Start subscribing containerd event" Jul 2 07:42:27.037322 env[1189]: time="2024-07-02T07:42:27.017156114Z" level=info msg="Start recovering state" Jul 2 07:42:27.037322 env[1189]: time="2024-07-02T07:42:27.017217279Z" level=info msg="Start event monitor" Jul 2 07:42:27.037322 env[1189]: time="2024-07-02T07:42:27.017234982Z" level=info msg="Start snapshots syncer" Jul 2 07:42:27.037322 env[1189]: time="2024-07-02T07:42:27.017244900Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:42:27.037322 env[1189]: time="2024-07-02T07:42:27.017253977Z" level=info msg="Start streaming server" Jul 2 07:42:27.039080 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:42:27.040189 extend-filesystems[1223]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:42:27.040189 extend-filesystems[1223]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:42:27.040189 extend-filesystems[1223]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:42:27.043955 extend-filesystems[1170]: Resized filesystem in /dev/vda9 Jul 2 07:42:27.045303 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:42:27.045466 systemd[1]: Finished extend-filesystems.service. Jul 2 07:42:27.066266 systemd[1]: Created slice system-sshd.slice. Jul 2 07:42:27.311094 tar[1188]: linux-amd64/LICENSE Jul 2 07:42:27.311094 tar[1188]: linux-amd64/README.md Jul 2 07:42:27.314701 systemd[1]: Finished prepare-helm.service. Jul 2 07:42:28.026192 systemd-networkd[1016]: eth0: Gained IPv6LL Jul 2 07:42:28.027812 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:42:28.027879 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:42:28.029365 systemd[1]: Reached target network-online.target. 
Jul 2 07:42:28.031582 systemd[1]: Starting kubelet.service... Jul 2 07:42:28.047595 systemd[1]: Finished sshd-keygen.service. Jul 2 07:42:28.049683 systemd[1]: Starting issuegen.service... Jul 2 07:42:28.051245 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:35424.service. Jul 2 07:42:28.055211 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:42:28.055322 systemd[1]: Finished issuegen.service. Jul 2 07:42:28.057212 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:42:28.062048 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:42:28.064843 systemd[1]: Started getty@tty1.service. Jul 2 07:42:28.067344 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:42:28.068679 systemd[1]: Reached target getty.target. Jul 2 07:42:28.098651 sshd[1243]: Accepted publickey for core from 10.0.0.1 port 35424 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:28.100220 sshd[1243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:28.109150 systemd-logind[1178]: New session 1 of user core. Jul 2 07:42:28.110225 systemd[1]: Created slice user-500.slice. Jul 2 07:42:28.112364 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:42:28.121946 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:42:28.124278 systemd[1]: Starting user@500.service... Jul 2 07:42:28.127016 (systemd)[1251]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:28.191896 systemd[1251]: Queued start job for default target default.target. Jul 2 07:42:28.192417 systemd[1251]: Reached target paths.target. Jul 2 07:42:28.192441 systemd[1251]: Reached target sockets.target. Jul 2 07:42:28.192456 systemd[1251]: Reached target timers.target. Jul 2 07:42:28.192469 systemd[1251]: Reached target basic.target. Jul 2 07:42:28.192508 systemd[1251]: Reached target default.target. Jul 2 07:42:28.192536 systemd[1251]: Startup finished in 60ms. 
Jul 2 07:42:28.192986 systemd[1]: Started user@500.service. Jul 2 07:42:28.194690 systemd[1]: Started session-1.scope. Jul 2 07:42:28.245580 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:35430.service. Jul 2 07:42:28.287183 sshd[1260]: Accepted publickey for core from 10.0.0.1 port 35430 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:28.288321 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:28.292606 systemd-logind[1178]: New session 2 of user core. Jul 2 07:42:28.293596 systemd[1]: Started session-2.scope. Jul 2 07:42:28.348007 sshd[1260]: pam_unix(sshd:session): session closed for user core Jul 2 07:42:28.350837 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:35446.service. Jul 2 07:42:28.352453 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:35430.service: Deactivated successfully. Jul 2 07:42:28.352949 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:42:28.353667 systemd-logind[1178]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:42:28.354685 systemd-logind[1178]: Removed session 2. Jul 2 07:42:28.391811 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 35446 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:28.392931 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:28.396693 systemd-logind[1178]: New session 3 of user core. Jul 2 07:42:28.397664 systemd[1]: Started session-3.scope. Jul 2 07:42:28.453117 sshd[1265]: pam_unix(sshd:session): session closed for user core Jul 2 07:42:28.455476 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:35446.service: Deactivated successfully. Jul 2 07:42:28.456101 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:42:28.456658 systemd-logind[1178]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:42:28.457470 systemd-logind[1178]: Removed session 3. Jul 2 07:42:28.600714 systemd[1]: Started kubelet.service. 
Jul 2 07:42:28.602084 systemd[1]: Reached target multi-user.target. Jul 2 07:42:28.604218 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:42:28.611132 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:42:28.611281 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:42:28.612460 systemd[1]: Startup finished in 640ms (kernel) + 4.684s (initrd) + 5.761s (userspace) = 11.086s. Jul 2 07:42:29.026190 kubelet[1273]: E0702 07:42:29.026014 1273 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:42:29.027536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:42:29.027649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:42:38.456845 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:41938.service. Jul 2 07:42:38.495086 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 41938 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:38.495905 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:38.499026 systemd-logind[1178]: New session 4 of user core. Jul 2 07:42:38.499838 systemd[1]: Started session-4.scope. Jul 2 07:42:38.551039 sshd[1283]: pam_unix(sshd:session): session closed for user core Jul 2 07:42:38.553437 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:41938.service: Deactivated successfully. Jul 2 07:42:38.553858 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:42:38.554287 systemd-logind[1178]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:42:38.555026 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:41948.service. 
Jul 2 07:42:38.555817 systemd-logind[1178]: Removed session 4. Jul 2 07:42:38.593064 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 41948 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:38.593882 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:38.596847 systemd-logind[1178]: New session 5 of user core. Jul 2 07:42:38.597638 systemd[1]: Started session-5.scope. Jul 2 07:42:38.644870 sshd[1289]: pam_unix(sshd:session): session closed for user core Jul 2 07:42:38.647448 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:41948.service: Deactivated successfully. Jul 2 07:42:38.648018 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:42:38.648578 systemd-logind[1178]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:42:38.649428 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:41956.service. Jul 2 07:42:38.650164 systemd-logind[1178]: Removed session 5. Jul 2 07:42:38.687166 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 41956 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:38.688208 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:38.691015 systemd-logind[1178]: New session 6 of user core. Jul 2 07:42:38.691653 systemd[1]: Started session-6.scope. Jul 2 07:42:38.744350 sshd[1295]: pam_unix(sshd:session): session closed for user core Jul 2 07:42:38.747052 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:41956.service: Deactivated successfully. Jul 2 07:42:38.747586 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:42:38.748018 systemd-logind[1178]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:42:38.748885 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:41970.service. Jul 2 07:42:38.749520 systemd-logind[1178]: Removed session 6. 
Jul 2 07:42:38.788294 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 41970 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:42:38.789146 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:42:38.792233 systemd-logind[1178]: New session 7 of user core. Jul 2 07:42:38.792886 systemd[1]: Started session-7.scope. Jul 2 07:42:38.845998 sudo[1304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:42:38.846187 sudo[1304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:42:38.867032 systemd[1]: Starting docker.service... Jul 2 07:42:38.900674 env[1317]: time="2024-07-02T07:42:38.900614049Z" level=info msg="Starting up" Jul 2 07:42:38.901559 env[1317]: time="2024-07-02T07:42:38.901541729Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:42:38.901559 env[1317]: time="2024-07-02T07:42:38.901555635Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:42:38.901644 env[1317]: time="2024-07-02T07:42:38.901573449Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:42:38.901644 env[1317]: time="2024-07-02T07:42:38.901582696Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:42:38.903260 env[1317]: time="2024-07-02T07:42:38.903219025Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:42:38.903260 env[1317]: time="2024-07-02T07:42:38.903245154Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:42:38.903260 env[1317]: time="2024-07-02T07:42:38.903263358Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:42:38.903436 env[1317]: time="2024-07-02T07:42:38.903272725Z" level=info msg="ClientConn switching 
balancer to \"pick_first\"" module=grpc Jul 2 07:42:38.955817 env[1317]: time="2024-07-02T07:42:38.955764757Z" level=info msg="Loading containers: start." Jul 2 07:42:39.046359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:42:39.046601 systemd[1]: Stopped kubelet.service. Jul 2 07:42:39.047703 systemd[1]: Starting kubelet.service... Jul 2 07:42:39.211091 systemd[1]: Started kubelet.service. Jul 2 07:42:39.242443 kubelet[1369]: E0702 07:42:39.242388 1369 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:42:39.244985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:42:39.245107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:42:39.738101 kernel: Initializing XFRM netlink socket Jul 2 07:42:39.767136 env[1317]: time="2024-07-02T07:42:39.767058091Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:42:39.814841 systemd-networkd[1016]: docker0: Link UP Jul 2 07:42:39.824408 env[1317]: time="2024-07-02T07:42:39.824380922Z" level=info msg="Loading containers: done." Jul 2 07:42:39.832432 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1953176826-merged.mount: Deactivated successfully. 
Jul 2 07:42:39.835552 env[1317]: time="2024-07-02T07:42:39.835517607Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:42:39.835684 env[1317]: time="2024-07-02T07:42:39.835663330Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:42:39.835762 env[1317]: time="2024-07-02T07:42:39.835747017Z" level=info msg="Daemon has completed initialization" Jul 2 07:42:39.850676 systemd[1]: Started docker.service. Jul 2 07:42:39.856951 env[1317]: time="2024-07-02T07:42:39.856899396Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:42:40.319334 env[1189]: time="2024-07-02T07:42:40.319284863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 07:42:40.947925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444697502.mount: Deactivated successfully. Jul 2 07:42:42.487434 env[1189]: time="2024-07-02T07:42:42.487361271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:42.489172 env[1189]: time="2024-07-02T07:42:42.489126651Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:42.490759 env[1189]: time="2024-07-02T07:42:42.490726752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:42.492452 env[1189]: time="2024-07-02T07:42:42.492410009Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:42.493091 env[1189]: time="2024-07-02T07:42:42.493034039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 07:42:42.502191 env[1189]: time="2024-07-02T07:42:42.502156727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 07:42:44.786920 env[1189]: time="2024-07-02T07:42:44.786848881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:44.788634 env[1189]: time="2024-07-02T07:42:44.788597560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:44.790430 env[1189]: time="2024-07-02T07:42:44.790399259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:44.791970 env[1189]: time="2024-07-02T07:42:44.791942713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:44.792654 env[1189]: time="2024-07-02T07:42:44.792621607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 07:42:44.801822 env[1189]: 
time="2024-07-02T07:42:44.801788598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 07:42:46.826682 env[1189]: time="2024-07-02T07:42:46.826624588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:46.828767 env[1189]: time="2024-07-02T07:42:46.828728975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:46.832759 env[1189]: time="2024-07-02T07:42:46.832730219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:46.834326 env[1189]: time="2024-07-02T07:42:46.834292388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:46.835041 env[1189]: time="2024-07-02T07:42:46.835008882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 07:42:46.843329 env[1189]: time="2024-07-02T07:42:46.843291585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 07:42:48.027121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871905570.mount: Deactivated successfully. Jul 2 07:42:49.296492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:42:49.296672 systemd[1]: Stopped kubelet.service. Jul 2 07:42:49.297882 systemd[1]: Starting kubelet.service... Jul 2 07:42:49.405037 systemd[1]: Started kubelet.service. 
Jul 2 07:42:49.668286 kubelet[1492]: E0702 07:42:49.668155 1492 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:42:49.670137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:42:49.670248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:42:49.734029 env[1189]: time="2024-07-02T07:42:49.733969297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:49.739300 env[1189]: time="2024-07-02T07:42:49.737545103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:49.740099 env[1189]: time="2024-07-02T07:42:49.740055000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:49.740877 env[1189]: time="2024-07-02T07:42:49.740836776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:49.741339 env[1189]: time="2024-07-02T07:42:49.741307900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 07:42:49.750671 env[1189]: time="2024-07-02T07:42:49.750622749Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 07:42:50.290550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767734647.mount: Deactivated successfully. Jul 2 07:42:51.276449 env[1189]: time="2024-07-02T07:42:51.276384897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.278377 env[1189]: time="2024-07-02T07:42:51.278346195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.280125 env[1189]: time="2024-07-02T07:42:51.280095987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.281641 env[1189]: time="2024-07-02T07:42:51.281606700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.282339 env[1189]: time="2024-07-02T07:42:51.282312834Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 07:42:51.290446 env[1189]: time="2024-07-02T07:42:51.290371888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:42:51.789482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871183765.mount: Deactivated successfully. 
Jul 2 07:42:51.794164 env[1189]: time="2024-07-02T07:42:51.794131056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.795874 env[1189]: time="2024-07-02T07:42:51.795846613Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.797127 env[1189]: time="2024-07-02T07:42:51.797103170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.798403 env[1189]: time="2024-07-02T07:42:51.798360848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:51.798766 env[1189]: time="2024-07-02T07:42:51.798736283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:42:51.805943 env[1189]: time="2024-07-02T07:42:51.805910306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 07:42:52.329668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834023451.mount: Deactivated successfully. 
Jul 2 07:42:55.400909 env[1189]: time="2024-07-02T07:42:55.400852366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:55.403028 env[1189]: time="2024-07-02T07:42:55.403000204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:55.404778 env[1189]: time="2024-07-02T07:42:55.404756608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:55.408432 env[1189]: time="2024-07-02T07:42:55.408405521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:55.409477 env[1189]: time="2024-07-02T07:42:55.409444048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 07:42:57.706326 systemd[1]: Stopped kubelet.service. Jul 2 07:42:57.708161 systemd[1]: Starting kubelet.service... Jul 2 07:42:57.722118 systemd[1]: Reloading. 
Jul 2 07:42:57.788330 /usr/lib/systemd/system-generators/torcx-generator[1619]: time="2024-07-02T07:42:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:42:57.788711 /usr/lib/systemd/system-generators/torcx-generator[1619]: time="2024-07-02T07:42:57Z" level=info msg="torcx already run" Jul 2 07:42:58.016901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:42:58.016916 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:42:58.033130 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:42:58.105223 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:42:58.105360 systemd[1]: Stopped kubelet.service. Jul 2 07:42:58.106544 systemd[1]: Starting kubelet.service... Jul 2 07:42:58.175082 systemd[1]: Started kubelet.service. Jul 2 07:42:58.209449 kubelet[1668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:42:58.209449 kubelet[1668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:42:58.209449 kubelet[1668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:42:58.209954 kubelet[1668]: I0702 07:42:58.209476 1668 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:42:58.458000 kubelet[1668]: I0702 07:42:58.457952 1668 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:42:58.458000 kubelet[1668]: I0702 07:42:58.457977 1668 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:42:58.458209 kubelet[1668]: I0702 07:42:58.458177 1668 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:42:58.473023 kubelet[1668]: I0702 07:42:58.472986 1668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:42:58.473384 kubelet[1668]: E0702 07:42:58.473359 1668 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.484153 kubelet[1668]: I0702 07:42:58.484057 1668 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:42:58.485569 kubelet[1668]: I0702 07:42:58.485387 1668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:42:58.485733 kubelet[1668]: I0702 07:42:58.485562 1668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:42:58.485830 kubelet[1668]: I0702 07:42:58.485735 1668 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:42:58.485830 
kubelet[1668]: I0702 07:42:58.485743 1668 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:42:58.485889 kubelet[1668]: I0702 07:42:58.485838 1668 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:42:58.486414 kubelet[1668]: I0702 07:42:58.486397 1668 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:42:58.486414 kubelet[1668]: I0702 07:42:58.486411 1668 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:42:58.486491 kubelet[1668]: I0702 07:42:58.486437 1668 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:42:58.486491 kubelet[1668]: I0702 07:42:58.486451 1668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:42:58.486973 kubelet[1668]: W0702 07:42:58.486925 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.487032 kubelet[1668]: E0702 07:42:58.486980 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.493170 kubelet[1668]: W0702 07:42:58.493132 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.493240 kubelet[1668]: E0702 07:42:58.493190 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.495798 
kubelet[1668]: I0702 07:42:58.495783 1668 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:42:58.498380 kubelet[1668]: I0702 07:42:58.498364 1668 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:42:58.498435 kubelet[1668]: W0702 07:42:58.498404 1668 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:42:58.498816 kubelet[1668]: I0702 07:42:58.498804 1668 server.go:1264] "Started kubelet" Jul 2 07:42:58.498889 kubelet[1668]: I0702 07:42:58.498871 1668 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:42:58.499030 kubelet[1668]: I0702 07:42:58.498975 1668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:42:58.499559 kubelet[1668]: I0702 07:42:58.499228 1668 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:42:58.499938 kubelet[1668]: I0702 07:42:58.499927 1668 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:42:58.502788 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 2 07:42:58.502868 kubelet[1668]: I0702 07:42:58.502851 1668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:42:58.507519 kubelet[1668]: I0702 07:42:58.507501 1668 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:42:58.509121 kubelet[1668]: E0702 07:42:58.509100 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="200ms" Jul 2 07:42:58.509597 kubelet[1668]: I0702 07:42:58.509573 1668 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 07:42:58.509670 kubelet[1668]: I0702 07:42:58.509651 1668 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:42:58.510268 kubelet[1668]: W0702 07:42:58.510230 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.510268 kubelet[1668]: E0702 07:42:58.510264 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.510436 kubelet[1668]: I0702 07:42:58.510361 1668 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:42:58.510436 kubelet[1668]: I0702 07:42:58.510369 1668 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:42:58.510436 kubelet[1668]: I0702 07:42:58.510416 1668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 
07:42:58.514547 kubelet[1668]: E0702 07:42:58.514459 1668 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de558e906b7d40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 07:42:58.498788672 +0000 UTC m=+0.320989420,LastTimestamp:2024-07-02 07:42:58.498788672 +0000 UTC m=+0.320989420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 07:42:58.515859 kubelet[1668]: I0702 07:42:58.515612 1668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:42:58.516353 kubelet[1668]: I0702 07:42:58.516325 1668 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:42:58.516353 kubelet[1668]: I0702 07:42:58.516356 1668 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:42:58.516432 kubelet[1668]: I0702 07:42:58.516370 1668 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:42:58.516432 kubelet[1668]: E0702 07:42:58.516404 1668 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:42:58.520966 kubelet[1668]: W0702 07:42:58.520917 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.520966 kubelet[1668]: E0702 07:42:58.520959 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:58.521788 kubelet[1668]: I0702 07:42:58.521760 1668 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:42:58.521788 kubelet[1668]: I0702 07:42:58.521772 1668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:42:58.521788 kubelet[1668]: I0702 07:42:58.521784 1668 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:42:58.608692 kubelet[1668]: I0702 07:42:58.608664 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:42:58.609219 kubelet[1668]: E0702 07:42:58.609186 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" Jul 2 07:42:58.616496 kubelet[1668]: E0702 07:42:58.616457 1668 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Jul 2 07:42:58.709982 kubelet[1668]: E0702 07:42:58.709886 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms" Jul 2 07:42:58.753541 kubelet[1668]: I0702 07:42:58.753514 1668 policy_none.go:49] "None policy: Start" Jul 2 07:42:58.754154 kubelet[1668]: I0702 07:42:58.754140 1668 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:42:58.754225 kubelet[1668]: I0702 07:42:58.754171 1668 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:42:58.760346 systemd[1]: Created slice kubepods.slice. Jul 2 07:42:58.763737 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 07:42:58.772655 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:42:58.773430 kubelet[1668]: I0702 07:42:58.773410 1668 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:42:58.773541 kubelet[1668]: I0702 07:42:58.773514 1668 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:42:58.773615 kubelet[1668]: I0702 07:42:58.773608 1668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:42:58.774508 kubelet[1668]: E0702 07:42:58.774494 1668 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 07:42:58.810357 kubelet[1668]: I0702 07:42:58.810344 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:42:58.810595 kubelet[1668]: E0702 07:42:58.810576 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: 
connection refused" node="localhost" Jul 2 07:42:58.816876 kubelet[1668]: I0702 07:42:58.816846 1668 topology_manager.go:215] "Topology Admit Handler" podUID="0378920adcd1fffd4d8772ace29d2c08" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:42:58.817603 kubelet[1668]: I0702 07:42:58.817586 1668 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:42:58.818093 kubelet[1668]: I0702 07:42:58.818061 1668 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:42:58.821753 systemd[1]: Created slice kubepods-burstable-pod0378920adcd1fffd4d8772ace29d2c08.slice. Jul 2 07:42:58.828364 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jul 2 07:42:58.841468 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. 
Jul 2 07:42:58.912861 kubelet[1668]: I0702 07:42:58.912821 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:42:58.912861 kubelet[1668]: I0702 07:42:58.912851 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:42:58.912978 kubelet[1668]: I0702 07:42:58.912871 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:42:58.912978 kubelet[1668]: I0702 07:42:58.912888 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:42:58.912978 kubelet[1668]: I0702 07:42:58.912909 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 07:42:58.912978 kubelet[1668]: I0702 07:42:58.912922 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:42:58.912978 kubelet[1668]: I0702 07:42:58.912936 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0378920adcd1fffd4d8772ace29d2c08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0378920adcd1fffd4d8772ace29d2c08\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:42:58.913104 kubelet[1668]: I0702 07:42:58.912950 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0378920adcd1fffd4d8772ace29d2c08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0378920adcd1fffd4d8772ace29d2c08\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:42:58.913104 kubelet[1668]: I0702 07:42:58.912966 1668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0378920adcd1fffd4d8772ace29d2c08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0378920adcd1fffd4d8772ace29d2c08\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:42:59.110990 kubelet[1668]: E0702 07:42:59.110950 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms" Jul 2 07:42:59.127285 kubelet[1668]: E0702 07:42:59.127254 
1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:42:59.127800 env[1189]: time="2024-07-02T07:42:59.127770648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0378920adcd1fffd4d8772ace29d2c08,Namespace:kube-system,Attempt:0,}" Jul 2 07:42:59.140966 kubelet[1668]: E0702 07:42:59.140940 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:42:59.141295 env[1189]: time="2024-07-02T07:42:59.141263211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jul 2 07:42:59.143480 kubelet[1668]: E0702 07:42:59.143438 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:42:59.144202 env[1189]: time="2024-07-02T07:42:59.144150957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jul 2 07:42:59.212197 kubelet[1668]: I0702 07:42:59.212151 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:42:59.212522 kubelet[1668]: E0702 07:42:59.212401 1668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" Jul 2 07:42:59.560184 kubelet[1668]: W0702 07:42:59.560106 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.17:6443: connect: connection refused Jul 2 07:42:59.560184 kubelet[1668]: E0702 07:42:59.560182 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:59.657728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945520847.mount: Deactivated successfully. Jul 2 07:42:59.663748 env[1189]: time="2024-07-02T07:42:59.663709525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.666252 env[1189]: time="2024-07-02T07:42:59.666224401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.667836 env[1189]: time="2024-07-02T07:42:59.667783214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.668655 env[1189]: time="2024-07-02T07:42:59.668611538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.670240 env[1189]: time="2024-07-02T07:42:59.670208302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.671330 env[1189]: time="2024-07-02T07:42:59.671302885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 
07:42:59.672997 env[1189]: time="2024-07-02T07:42:59.672972626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.674723 env[1189]: time="2024-07-02T07:42:59.674684467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.675826 env[1189]: time="2024-07-02T07:42:59.675797885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.676382 env[1189]: time="2024-07-02T07:42:59.676353487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.677628 env[1189]: time="2024-07-02T07:42:59.677597520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.678810 env[1189]: time="2024-07-02T07:42:59.678784766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:42:59.700629 env[1189]: time="2024-07-02T07:42:59.700542700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:42:59.700629 env[1189]: time="2024-07-02T07:42:59.700574349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:42:59.700629 env[1189]: time="2024-07-02T07:42:59.700583567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:42:59.700975 env[1189]: time="2024-07-02T07:42:59.700735311Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2d84cd873faf1cbe9d11a0a9abb71c0edd4a51f0c2f2f2e087854fb6aba93e1 pid=1715 runtime=io.containerd.runc.v2 Jul 2 07:42:59.704239 env[1189]: time="2024-07-02T07:42:59.703899085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:42:59.704239 env[1189]: time="2024-07-02T07:42:59.703932197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:42:59.704239 env[1189]: time="2024-07-02T07:42:59.703943418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:42:59.704239 env[1189]: time="2024-07-02T07:42:59.704092778Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97af1e3e20f875a75c08f6270e4167d6251c6a75bf6525e57c056766765201fc pid=1722 runtime=io.containerd.runc.v2 Jul 2 07:42:59.709474 env[1189]: time="2024-07-02T07:42:59.709415931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:42:59.709625 env[1189]: time="2024-07-02T07:42:59.709449754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:42:59.709625 env[1189]: time="2024-07-02T07:42:59.709459192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:42:59.709815 env[1189]: time="2024-07-02T07:42:59.709652644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/533e7038cd93b748e8a425e2788a8a5c651457f9933001cd206275632e92a551 pid=1756 runtime=io.containerd.runc.v2 Jul 2 07:42:59.714094 systemd[1]: Started cri-containerd-97af1e3e20f875a75c08f6270e4167d6251c6a75bf6525e57c056766765201fc.scope. Jul 2 07:42:59.722879 systemd[1]: Started cri-containerd-b2d84cd873faf1cbe9d11a0a9abb71c0edd4a51f0c2f2f2e087854fb6aba93e1.scope. Jul 2 07:42:59.737526 systemd[1]: Started cri-containerd-533e7038cd93b748e8a425e2788a8a5c651457f9933001cd206275632e92a551.scope. Jul 2 07:42:59.755410 env[1189]: time="2024-07-02T07:42:59.755353780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"97af1e3e20f875a75c08f6270e4167d6251c6a75bf6525e57c056766765201fc\"" Jul 2 07:42:59.756451 kubelet[1668]: E0702 07:42:59.756427 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:42:59.758293 env[1189]: time="2024-07-02T07:42:59.758256594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2d84cd873faf1cbe9d11a0a9abb71c0edd4a51f0c2f2f2e087854fb6aba93e1\"" Jul 2 07:42:59.760242 kubelet[1668]: E0702 07:42:59.760220 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:42:59.760393 env[1189]: time="2024-07-02T07:42:59.760350180Z" level=info msg="CreateContainer within sandbox 
\"97af1e3e20f875a75c08f6270e4167d6251c6a75bf6525e57c056766765201fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:42:59.761949 env[1189]: time="2024-07-02T07:42:59.761929582Z" level=info msg="CreateContainer within sandbox \"b2d84cd873faf1cbe9d11a0a9abb71c0edd4a51f0c2f2f2e087854fb6aba93e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:42:59.772964 env[1189]: time="2024-07-02T07:42:59.772898613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0378920adcd1fffd4d8772ace29d2c08,Namespace:kube-system,Attempt:0,} returns sandbox id \"533e7038cd93b748e8a425e2788a8a5c651457f9933001cd206275632e92a551\"" Jul 2 07:42:59.773409 kubelet[1668]: E0702 07:42:59.773385 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:42:59.775327 env[1189]: time="2024-07-02T07:42:59.775292372Z" level=info msg="CreateContainer within sandbox \"533e7038cd93b748e8a425e2788a8a5c651457f9933001cd206275632e92a551\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:42:59.782343 env[1189]: time="2024-07-02T07:42:59.782309813Z" level=info msg="CreateContainer within sandbox \"b2d84cd873faf1cbe9d11a0a9abb71c0edd4a51f0c2f2f2e087854fb6aba93e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c1a5603f5611b45c1739d4a3d30c6188568707b8a7621301765b3415ad98144d\"" Jul 2 07:42:59.783861 env[1189]: time="2024-07-02T07:42:59.783836095Z" level=info msg="CreateContainer within sandbox \"97af1e3e20f875a75c08f6270e4167d6251c6a75bf6525e57c056766765201fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ccc00f476afafb7bc889d17f10723747970ff80414dac098bc6db2691435d3b\"" Jul 2 07:42:59.783984 env[1189]: time="2024-07-02T07:42:59.783960729Z" level=info msg="StartContainer for 
\"c1a5603f5611b45c1739d4a3d30c6188568707b8a7621301765b3415ad98144d\"" Jul 2 07:42:59.789346 env[1189]: time="2024-07-02T07:42:59.789317444Z" level=info msg="StartContainer for \"1ccc00f476afafb7bc889d17f10723747970ff80414dac098bc6db2691435d3b\"" Jul 2 07:42:59.793632 env[1189]: time="2024-07-02T07:42:59.793598403Z" level=info msg="CreateContainer within sandbox \"533e7038cd93b748e8a425e2788a8a5c651457f9933001cd206275632e92a551\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a60c44669e3e5ed91e31f02d63e6c4daca3b443b0cce37086e10cbe436494709\"" Jul 2 07:42:59.794042 env[1189]: time="2024-07-02T07:42:59.794001078Z" level=info msg="StartContainer for \"a60c44669e3e5ed91e31f02d63e6c4daca3b443b0cce37086e10cbe436494709\"" Jul 2 07:42:59.800092 systemd[1]: Started cri-containerd-c1a5603f5611b45c1739d4a3d30c6188568707b8a7621301765b3415ad98144d.scope. Jul 2 07:42:59.804784 systemd[1]: Started cri-containerd-1ccc00f476afafb7bc889d17f10723747970ff80414dac098bc6db2691435d3b.scope. Jul 2 07:42:59.818345 systemd[1]: Started cri-containerd-a60c44669e3e5ed91e31f02d63e6c4daca3b443b0cce37086e10cbe436494709.scope. 
Jul 2 07:42:59.834633 kubelet[1668]: W0702 07:42:59.834554 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:59.834633 kubelet[1668]: E0702 07:42:59.834611 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:59.834957 env[1189]: time="2024-07-02T07:42:59.834913634Z" level=info msg="StartContainer for \"c1a5603f5611b45c1739d4a3d30c6188568707b8a7621301765b3415ad98144d\" returns successfully" Jul 2 07:42:59.853554 env[1189]: time="2024-07-02T07:42:59.853490412Z" level=info msg="StartContainer for \"1ccc00f476afafb7bc889d17f10723747970ff80414dac098bc6db2691435d3b\" returns successfully" Jul 2 07:42:59.870254 env[1189]: time="2024-07-02T07:42:59.870212311Z" level=info msg="StartContainer for \"a60c44669e3e5ed91e31f02d63e6c4daca3b443b0cce37086e10cbe436494709\" returns successfully" Jul 2 07:42:59.892129 kubelet[1668]: W0702 07:42:59.892049 1668 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:59.892129 kubelet[1668]: E0702 07:42:59.892133 1668 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 2 07:42:59.911537 kubelet[1668]: E0702 07:42:59.911510 1668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s" Jul 2 07:43:00.014233 kubelet[1668]: I0702 07:43:00.014208 1668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:43:00.526666 kubelet[1668]: E0702 07:43:00.526634 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:00.528036 kubelet[1668]: E0702 07:43:00.528013 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:00.529239 kubelet[1668]: E0702 07:43:00.529217 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:00.825456 kubelet[1668]: I0702 07:43:00.825345 1668 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 07:43:00.830911 kubelet[1668]: E0702 07:43:00.830863 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:00.931525 kubelet[1668]: E0702 07:43:00.931474 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.032329 kubelet[1668]: E0702 07:43:01.032293 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.132724 kubelet[1668]: E0702 07:43:01.132602 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.233159 kubelet[1668]: E0702 07:43:01.233120 1668 kubelet_node_status.go:462] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jul 2 07:43:01.333667 kubelet[1668]: E0702 07:43:01.333625 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.434487 kubelet[1668]: E0702 07:43:01.434392 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.531636 kubelet[1668]: E0702 07:43:01.531601 1668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:01.535319 kubelet[1668]: E0702 07:43:01.535296 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.635999 kubelet[1668]: E0702 07:43:01.635956 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.736700 kubelet[1668]: E0702 07:43:01.736579 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.837380 kubelet[1668]: E0702 07:43:01.837325 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:01.938052 kubelet[1668]: E0702 07:43:01.937978 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:02.038885 kubelet[1668]: E0702 07:43:02.038780 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:02.139342 kubelet[1668]: E0702 07:43:02.139289 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:02.239781 kubelet[1668]: E0702 07:43:02.239736 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 
07:43:02.340305 kubelet[1668]: E0702 07:43:02.340266 1668 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:02.489152 kubelet[1668]: I0702 07:43:02.489118 1668 apiserver.go:52] "Watching apiserver" Jul 2 07:43:02.510380 kubelet[1668]: I0702 07:43:02.510359 1668 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:43:02.606850 systemd[1]: Reloading. Jul 2 07:43:02.672296 /usr/lib/systemd/system-generators/torcx-generator[1966]: time="2024-07-02T07:43:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:43:02.672324 /usr/lib/systemd/system-generators/torcx-generator[1966]: time="2024-07-02T07:43:02Z" level=info msg="torcx already run" Jul 2 07:43:02.775295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:43:02.775309 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:43:02.791834 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:43:02.890173 kubelet[1668]: I0702 07:43:02.890095 1668 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:43:02.890183 systemd[1]: Stopping kubelet.service... Jul 2 07:43:02.911491 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:43:02.911652 systemd[1]: Stopped kubelet.service. 
Jul 2 07:43:02.913248 systemd[1]: Starting kubelet.service... Jul 2 07:43:02.983913 systemd[1]: Started kubelet.service. Jul 2 07:43:03.022391 kubelet[2010]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:43:03.022391 kubelet[2010]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:43:03.022391 kubelet[2010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:43:03.022788 kubelet[2010]: I0702 07:43:03.022414 2010 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:43:03.026380 kubelet[2010]: I0702 07:43:03.026355 2010 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:43:03.026380 kubelet[2010]: I0702 07:43:03.026371 2010 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:43:03.026512 kubelet[2010]: I0702 07:43:03.026493 2010 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:43:03.027452 kubelet[2010]: I0702 07:43:03.027432 2010 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:43:03.028540 kubelet[2010]: I0702 07:43:03.028507 2010 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:43:03.035524 kubelet[2010]: I0702 07:43:03.035492 2010 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:43:03.035697 kubelet[2010]: I0702 07:43:03.035665 2010 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:43:03.035857 kubelet[2010]: I0702 07:43:03.035691 2010 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:43:03.035955 kubelet[2010]: I0702 07:43:03.035864 2010 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:43:03.035955 
kubelet[2010]: I0702 07:43:03.035872 2010 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:43:03.035955 kubelet[2010]: I0702 07:43:03.035905 2010 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:43:03.036032 kubelet[2010]: I0702 07:43:03.035980 2010 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:43:03.036032 kubelet[2010]: I0702 07:43:03.035991 2010 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:43:03.036032 kubelet[2010]: I0702 07:43:03.036007 2010 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:43:03.036032 kubelet[2010]: I0702 07:43:03.036020 2010 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:43:03.036785 kubelet[2010]: I0702 07:43:03.036768 2010 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:43:03.036933 kubelet[2010]: I0702 07:43:03.036890 2010 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:43:03.037246 kubelet[2010]: I0702 07:43:03.037215 2010 server.go:1264] "Started kubelet" Jul 2 07:43:03.037414 kubelet[2010]: I0702 07:43:03.037372 2010 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:43:03.037496 kubelet[2010]: I0702 07:43:03.037436 2010 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:43:03.037663 kubelet[2010]: I0702 07:43:03.037622 2010 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:43:03.038659 kubelet[2010]: I0702 07:43:03.038640 2010 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:43:03.039090 kubelet[2010]: I0702 07:43:03.039077 2010 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:43:03.045553 kubelet[2010]: E0702 07:43:03.045513 2010 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:43:03.045553 kubelet[2010]: I0702 07:43:03.045558 2010 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:43:03.045705 kubelet[2010]: I0702 07:43:03.045623 2010 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 07:43:03.045747 kubelet[2010]: I0702 07:43:03.045729 2010 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:43:03.049251 kubelet[2010]: E0702 07:43:03.049182 2010 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:43:03.054607 kubelet[2010]: I0702 07:43:03.054577 2010 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:43:03.054607 kubelet[2010]: I0702 07:43:03.054595 2010 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:43:03.054745 kubelet[2010]: I0702 07:43:03.054671 2010 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:43:03.055971 kubelet[2010]: I0702 07:43:03.055951 2010 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:43:03.056744 kubelet[2010]: I0702 07:43:03.056732 2010 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:43:03.056821 kubelet[2010]: I0702 07:43:03.056807 2010 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:43:03.056907 kubelet[2010]: I0702 07:43:03.056893 2010 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:43:03.057005 kubelet[2010]: E0702 07:43:03.056988 2010 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:43:03.080023 kubelet[2010]: I0702 07:43:03.079997 2010 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:43:03.080023 kubelet[2010]: I0702 07:43:03.080015 2010 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:43:03.080259 kubelet[2010]: I0702 07:43:03.080038 2010 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:43:03.080259 kubelet[2010]: I0702 07:43:03.080203 2010 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:43:03.080259 kubelet[2010]: I0702 07:43:03.080215 2010 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:43:03.080259 kubelet[2010]: I0702 07:43:03.080245 2010 policy_none.go:49] "None policy: Start" Jul 2 07:43:03.080758 kubelet[2010]: I0702 07:43:03.080735 2010 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:43:03.080758 kubelet[2010]: I0702 07:43:03.080758 2010 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:43:03.080884 kubelet[2010]: I0702 07:43:03.080871 2010 state_mem.go:75] "Updated machine memory state" Jul 2 07:43:03.084367 kubelet[2010]: I0702 07:43:03.084341 2010 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:43:03.084522 kubelet[2010]: I0702 07:43:03.084489 2010 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:43:03.084582 kubelet[2010]: I0702 07:43:03.084569 2010 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:43:03.148361 kubelet[2010]: I0702 07:43:03.148278 2010 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:43:03.153783 kubelet[2010]: I0702 07:43:03.153753 2010 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 07:43:03.154007 kubelet[2010]: I0702 07:43:03.153820 2010 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 07:43:03.157791 kubelet[2010]: I0702 07:43:03.157734 2010 topology_manager.go:215] "Topology Admit Handler" podUID="0378920adcd1fffd4d8772ace29d2c08" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:43:03.157907 kubelet[2010]: I0702 07:43:03.157818 2010 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:43:03.157907 kubelet[2010]: I0702 07:43:03.157860 2010 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:43:03.347138 kubelet[2010]: I0702 07:43:03.347095 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0378920adcd1fffd4d8772ace29d2c08-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0378920adcd1fffd4d8772ace29d2c08\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:43:03.347138 kubelet[2010]: I0702 07:43:03.347137 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0378920adcd1fffd4d8772ace29d2c08-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0378920adcd1fffd4d8772ace29d2c08\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:43:03.347323 kubelet[2010]: I0702 07:43:03.347158 2010 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:43:03.347323 kubelet[2010]: I0702 07:43:03.347171 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:43:03.347323 kubelet[2010]: I0702 07:43:03.347187 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0378920adcd1fffd4d8772ace29d2c08-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0378920adcd1fffd4d8772ace29d2c08\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:43:03.347323 kubelet[2010]: I0702 07:43:03.347205 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:43:03.347323 kubelet[2010]: I0702 07:43:03.347245 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:43:03.347439 kubelet[2010]: I0702 07:43:03.347270 2010 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:43:03.347439 kubelet[2010]: I0702 07:43:03.347292 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:43:03.465064 kubelet[2010]: E0702 07:43:03.464969 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:03.465774 kubelet[2010]: E0702 07:43:03.465743 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:03.465774 kubelet[2010]: E0702 07:43:03.465783 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:03.601772 sudo[2044]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:43:03.601961 sudo[2044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:43:04.036606 kubelet[2010]: I0702 07:43:04.036553 2010 apiserver.go:52] "Watching apiserver" Jul 2 07:43:04.045870 kubelet[2010]: I0702 07:43:04.045847 2010 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:43:04.046611 sudo[2044]: pam_unix(sudo:session): 
session closed for user root Jul 2 07:43:04.066699 kubelet[2010]: E0702 07:43:04.066667 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:04.067449 kubelet[2010]: E0702 07:43:04.067424 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:04.067856 kubelet[2010]: E0702 07:43:04.067840 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:04.083496 kubelet[2010]: I0702 07:43:04.083443 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.083428256 podStartE2EDuration="1.083428256s" podCreationTimestamp="2024-07-02 07:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:43:04.082745368 +0000 UTC m=+1.095843289" watchObservedRunningTime="2024-07-02 07:43:04.083428256 +0000 UTC m=+1.096526157" Jul 2 07:43:04.094004 kubelet[2010]: I0702 07:43:04.093935 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.093914836 podStartE2EDuration="1.093914836s" podCreationTimestamp="2024-07-02 07:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:43:04.087926153 +0000 UTC m=+1.101024074" watchObservedRunningTime="2024-07-02 07:43:04.093914836 +0000 UTC m=+1.107012737" Jul 2 07:43:05.068217 kubelet[2010]: E0702 07:43:05.068181 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:05.068550 kubelet[2010]: E0702 07:43:05.068375 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:05.550751 sudo[1304]: pam_unix(sudo:session): session closed for user root Jul 2 07:43:05.551917 sshd[1301]: pam_unix(sshd:session): session closed for user core Jul 2 07:43:05.554537 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:41970.service: Deactivated successfully. Jul 2 07:43:05.555275 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:43:05.555445 systemd[1]: session-7.scope: Consumed 4.209s CPU time. Jul 2 07:43:05.555944 systemd-logind[1178]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:43:05.556780 systemd-logind[1178]: Removed session 7. Jul 2 07:43:06.070415 kubelet[2010]: E0702 07:43:06.070365 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:10.616430 kubelet[2010]: E0702 07:43:10.616401 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:10.626575 kubelet[2010]: I0702 07:43:10.626505 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.626489432 podStartE2EDuration="7.626489432s" podCreationTimestamp="2024-07-02 07:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:43:04.094344168 +0000 UTC m=+1.107442069" watchObservedRunningTime="2024-07-02 07:43:10.626489432 +0000 UTC m=+7.639587333" Jul 2 07:43:11.076468 kubelet[2010]: 
E0702 07:43:11.076421 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:12.532909 update_engine[1181]: I0702 07:43:12.532852 1181 update_attempter.cc:509] Updating boot flags... Jul 2 07:43:14.860988 kubelet[2010]: E0702 07:43:14.860952 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:15.300995 kubelet[2010]: E0702 07:43:15.298583 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:16.082671 kubelet[2010]: E0702 07:43:16.082638 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.264290 kubelet[2010]: I0702 07:43:17.264244 2010 topology_manager.go:215] "Topology Admit Handler" podUID="598b6e79-5148-46f5-adbf-de90bc0bbb13" podNamespace="kube-system" podName="cilium-operator-599987898-dqngs" Jul 2 07:43:17.270414 systemd[1]: Created slice kubepods-besteffort-pod598b6e79_5148_46f5_adbf_de90bc0bbb13.slice. Jul 2 07:43:17.323359 kubelet[2010]: I0702 07:43:17.323307 2010 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:43:17.323779 env[1189]: time="2024-07-02T07:43:17.323731253Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 07:43:17.324019 kubelet[2010]: I0702 07:43:17.323942 2010 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:43:17.341140 kubelet[2010]: I0702 07:43:17.341114 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhlg9\" (UniqueName: \"kubernetes.io/projected/598b6e79-5148-46f5-adbf-de90bc0bbb13-kube-api-access-lhlg9\") pod \"cilium-operator-599987898-dqngs\" (UID: \"598b6e79-5148-46f5-adbf-de90bc0bbb13\") " pod="kube-system/cilium-operator-599987898-dqngs" Jul 2 07:43:17.341194 kubelet[2010]: I0702 07:43:17.341156 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/598b6e79-5148-46f5-adbf-de90bc0bbb13-cilium-config-path\") pod \"cilium-operator-599987898-dqngs\" (UID: \"598b6e79-5148-46f5-adbf-de90bc0bbb13\") " pod="kube-system/cilium-operator-599987898-dqngs" Jul 2 07:43:17.564729 kubelet[2010]: I0702 07:43:17.564682 2010 topology_manager.go:215] "Topology Admit Handler" podUID="517899d7-10a7-4495-8cf6-7a4219177b18" podNamespace="kube-system" podName="kube-proxy-4wc4g" Jul 2 07:43:17.570090 systemd[1]: Created slice kubepods-besteffort-pod517899d7_10a7_4495_8cf6_7a4219177b18.slice. 
Jul 2 07:43:17.578471 kubelet[2010]: E0702 07:43:17.578434 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.579937 env[1189]: time="2024-07-02T07:43:17.579490295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dqngs,Uid:598b6e79-5148-46f5-adbf-de90bc0bbb13,Namespace:kube-system,Attempt:0,}" Jul 2 07:43:17.583241 kubelet[2010]: I0702 07:43:17.583209 2010 topology_manager.go:215] "Topology Admit Handler" podUID="37233059-7a27-4025-bb3b-f16acabb118b" podNamespace="kube-system" podName="cilium-q4ntb" Jul 2 07:43:17.600787 systemd[1]: Created slice kubepods-burstable-pod37233059_7a27_4025_bb3b_f16acabb118b.slice. Jul 2 07:43:17.604498 env[1189]: time="2024-07-02T07:43:17.604431143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:43:17.604498 env[1189]: time="2024-07-02T07:43:17.604462602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:43:17.604498 env[1189]: time="2024-07-02T07:43:17.604473813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:43:17.604718 env[1189]: time="2024-07-02T07:43:17.604576568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7 pid=2113 runtime=io.containerd.runc.v2 Jul 2 07:43:17.620401 systemd[1]: run-containerd-runc-k8s.io-17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7-runc.ASNxS7.mount: Deactivated successfully. Jul 2 07:43:17.623169 systemd[1]: Started cri-containerd-17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7.scope. 
Jul 2 07:43:17.643275 kubelet[2010]: I0702 07:43:17.643228 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-cgroup\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643275 kubelet[2010]: I0702 07:43:17.643272 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-etc-cni-netd\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643440 kubelet[2010]: I0702 07:43:17.643285 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/517899d7-10a7-4495-8cf6-7a4219177b18-lib-modules\") pod \"kube-proxy-4wc4g\" (UID: \"517899d7-10a7-4495-8cf6-7a4219177b18\") " pod="kube-system/kube-proxy-4wc4g" Jul 2 07:43:17.643440 kubelet[2010]: I0702 07:43:17.643298 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-xtables-lock\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643440 kubelet[2010]: I0702 07:43:17.643310 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-kernel\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643440 kubelet[2010]: I0702 07:43:17.643322 2010 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/517899d7-10a7-4495-8cf6-7a4219177b18-xtables-lock\") pod \"kube-proxy-4wc4g\" (UID: \"517899d7-10a7-4495-8cf6-7a4219177b18\") " pod="kube-system/kube-proxy-4wc4g" Jul 2 07:43:17.643440 kubelet[2010]: I0702 07:43:17.643336 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hr6q\" (UniqueName: \"kubernetes.io/projected/517899d7-10a7-4495-8cf6-7a4219177b18-kube-api-access-5hr6q\") pod \"kube-proxy-4wc4g\" (UID: \"517899d7-10a7-4495-8cf6-7a4219177b18\") " pod="kube-system/kube-proxy-4wc4g" Jul 2 07:43:17.643557 kubelet[2010]: I0702 07:43:17.643350 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-hostproc\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643557 kubelet[2010]: I0702 07:43:17.643361 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-net\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643557 kubelet[2010]: I0702 07:43:17.643373 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-bpf-maps\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643557 kubelet[2010]: I0702 07:43:17.643386 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cni-path\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643557 kubelet[2010]: I0702 07:43:17.643398 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37233059-7a27-4025-bb3b-f16acabb118b-clustermesh-secrets\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643557 kubelet[2010]: I0702 07:43:17.643418 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6wb\" (UniqueName: \"kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-kube-api-access-pn6wb\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643689 kubelet[2010]: I0702 07:43:17.643430 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/517899d7-10a7-4495-8cf6-7a4219177b18-kube-proxy\") pod \"kube-proxy-4wc4g\" (UID: \"517899d7-10a7-4495-8cf6-7a4219177b18\") " pod="kube-system/kube-proxy-4wc4g" Jul 2 07:43:17.643689 kubelet[2010]: I0702 07:43:17.643449 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-hubble-tls\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643689 kubelet[2010]: I0702 07:43:17.643468 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37233059-7a27-4025-bb3b-f16acabb118b-cilium-config-path\") pod \"cilium-q4ntb\" (UID: 
\"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643689 kubelet[2010]: I0702 07:43:17.643485 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-lib-modules\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.643689 kubelet[2010]: I0702 07:43:17.643505 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-run\") pod \"cilium-q4ntb\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") " pod="kube-system/cilium-q4ntb" Jul 2 07:43:17.659577 env[1189]: time="2024-07-02T07:43:17.659521749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dqngs,Uid:598b6e79-5148-46f5-adbf-de90bc0bbb13,Namespace:kube-system,Attempt:0,} returns sandbox id \"17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7\"" Jul 2 07:43:17.660755 kubelet[2010]: E0702 07:43:17.660729 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.661933 env[1189]: time="2024-07-02T07:43:17.661911482Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:43:17.873686 kubelet[2010]: E0702 07:43:17.872491 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.873995 env[1189]: time="2024-07-02T07:43:17.873942536Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-4wc4g,Uid:517899d7-10a7-4495-8cf6-7a4219177b18,Namespace:kube-system,Attempt:0,}" Jul 2 07:43:17.889428 env[1189]: time="2024-07-02T07:43:17.889344428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:43:17.889428 env[1189]: time="2024-07-02T07:43:17.889391367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:43:17.889428 env[1189]: time="2024-07-02T07:43:17.889405163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:43:17.889618 env[1189]: time="2024-07-02T07:43:17.889565386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c3ab1f95cd80e0ff82fe902110eedf1b077a7ccaece9e5550848fa9ec48c6b6 pid=2159 runtime=io.containerd.runc.v2 Jul 2 07:43:17.898960 systemd[1]: Started cri-containerd-5c3ab1f95cd80e0ff82fe902110eedf1b077a7ccaece9e5550848fa9ec48c6b6.scope. 
Jul 2 07:43:17.905596 kubelet[2010]: E0702 07:43:17.904487 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.905693 env[1189]: time="2024-07-02T07:43:17.904858524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4ntb,Uid:37233059-7a27-4025-bb3b-f16acabb118b,Namespace:kube-system,Attempt:0,}" Jul 2 07:43:17.920201 env[1189]: time="2024-07-02T07:43:17.920151691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wc4g,Uid:517899d7-10a7-4495-8cf6-7a4219177b18,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c3ab1f95cd80e0ff82fe902110eedf1b077a7ccaece9e5550848fa9ec48c6b6\"" Jul 2 07:43:17.920949 kubelet[2010]: E0702 07:43:17.920911 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.922213 env[1189]: time="2024-07-02T07:43:17.921352223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:43:17.922213 env[1189]: time="2024-07-02T07:43:17.921387290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:43:17.922213 env[1189]: time="2024-07-02T07:43:17.921396477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:43:17.922684 env[1189]: time="2024-07-02T07:43:17.922574356Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a pid=2200 runtime=io.containerd.runc.v2 Jul 2 07:43:17.923984 env[1189]: time="2024-07-02T07:43:17.923945321Z" level=info msg="CreateContainer within sandbox \"5c3ab1f95cd80e0ff82fe902110eedf1b077a7ccaece9e5550848fa9ec48c6b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:43:17.935061 systemd[1]: Started cri-containerd-81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a.scope. Jul 2 07:43:17.943163 env[1189]: time="2024-07-02T07:43:17.941505729Z" level=info msg="CreateContainer within sandbox \"5c3ab1f95cd80e0ff82fe902110eedf1b077a7ccaece9e5550848fa9ec48c6b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9c65338fff4d317f9a19fb5516566702935bbd128421303911e9e6730671d2a\"" Jul 2 07:43:17.943163 env[1189]: time="2024-07-02T07:43:17.943123010Z" level=info msg="StartContainer for \"c9c65338fff4d317f9a19fb5516566702935bbd128421303911e9e6730671d2a\"" Jul 2 07:43:17.958742 env[1189]: time="2024-07-02T07:43:17.958696288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4ntb,Uid:37233059-7a27-4025-bb3b-f16acabb118b,Namespace:kube-system,Attempt:0,} returns sandbox id \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\"" Jul 2 07:43:17.959705 kubelet[2010]: E0702 07:43:17.959683 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:17.959914 systemd[1]: Started cri-containerd-c9c65338fff4d317f9a19fb5516566702935bbd128421303911e9e6730671d2a.scope. 
Jul 2 07:43:17.988037 env[1189]: time="2024-07-02T07:43:17.987984966Z" level=info msg="StartContainer for \"c9c65338fff4d317f9a19fb5516566702935bbd128421303911e9e6730671d2a\" returns successfully" Jul 2 07:43:18.086849 kubelet[2010]: E0702 07:43:18.086818 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:18.093058 kubelet[2010]: I0702 07:43:18.093003 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4wc4g" podStartSLOduration=1.092987164 podStartE2EDuration="1.092987164s" podCreationTimestamp="2024-07-02 07:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:43:18.092675664 +0000 UTC m=+15.105773595" watchObservedRunningTime="2024-07-02 07:43:18.092987164 +0000 UTC m=+15.106085085" Jul 2 07:43:18.825062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098741929.mount: Deactivated successfully. 
Jul 2 07:43:19.407901 env[1189]: time="2024-07-02T07:43:19.407845734Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:43:19.409469 env[1189]: time="2024-07-02T07:43:19.409436311Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:43:19.410899 env[1189]: time="2024-07-02T07:43:19.410858500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:43:19.411348 env[1189]: time="2024-07-02T07:43:19.411317748Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:43:19.412286 env[1189]: time="2024-07-02T07:43:19.412264278Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:43:19.413296 env[1189]: time="2024-07-02T07:43:19.413247267Z" level=info msg="CreateContainer within sandbox \"17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:43:19.428146 env[1189]: time="2024-07-02T07:43:19.428099137Z" level=info msg="CreateContainer within sandbox \"17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\"" Jul 2 07:43:19.428598 env[1189]: time="2024-07-02T07:43:19.428562262Z" level=info msg="StartContainer for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\"" Jul 2 07:43:19.445854 systemd[1]: Started cri-containerd-766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2.scope. Jul 2 07:43:19.474055 env[1189]: time="2024-07-02T07:43:19.473987177Z" level=info msg="StartContainer for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" returns successfully" Jul 2 07:43:20.093616 kubelet[2010]: E0702 07:43:20.093585 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:21.095453 kubelet[2010]: E0702 07:43:21.095416 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:23.229960 kubelet[2010]: I0702 07:43:23.229910 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-dqngs" podStartSLOduration=4.479418055 podStartE2EDuration="6.229893503s" podCreationTimestamp="2024-07-02 07:43:17 +0000 UTC" firstStartedPulling="2024-07-02 07:43:17.66158267 +0000 UTC m=+14.674680571" lastFinishedPulling="2024-07-02 07:43:19.412058118 +0000 UTC m=+16.425156019" observedRunningTime="2024-07-02 07:43:20.104896893 +0000 UTC m=+17.117994794" watchObservedRunningTime="2024-07-02 07:43:23.229893503 +0000 UTC m=+20.242991404" Jul 2 07:43:25.861937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231328752.mount: Deactivated successfully. 
Jul 2 07:43:29.677854 env[1189]: time="2024-07-02T07:43:29.677811172Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:43:29.680110 env[1189]: time="2024-07-02T07:43:29.680087569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:43:29.681798 env[1189]: time="2024-07-02T07:43:29.681775369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:43:29.682101 env[1189]: time="2024-07-02T07:43:29.682059032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:43:29.685094 env[1189]: time="2024-07-02T07:43:29.685032603Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:43:29.697159 env[1189]: time="2024-07-02T07:43:29.697118185Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\"" Jul 2 07:43:29.698467 env[1189]: time="2024-07-02T07:43:29.698448180Z" level=info msg="StartContainer for \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\"" Jul 2 07:43:29.727761 systemd[1]: 
run-containerd-runc-k8s.io-53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663-runc.IMmI17.mount: Deactivated successfully. Jul 2 07:43:29.729039 systemd[1]: Started cri-containerd-53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663.scope. Jul 2 07:43:29.750780 env[1189]: time="2024-07-02T07:43:29.750743693Z" level=info msg="StartContainer for \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\" returns successfully" Jul 2 07:43:29.757482 systemd[1]: cri-containerd-53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663.scope: Deactivated successfully. Jul 2 07:43:30.106126 env[1189]: time="2024-07-02T07:43:30.106085762Z" level=info msg="shim disconnected" id=53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663 Jul 2 07:43:30.106362 env[1189]: time="2024-07-02T07:43:30.106328559Z" level=warning msg="cleaning up after shim disconnected" id=53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663 namespace=k8s.io Jul 2 07:43:30.106461 env[1189]: time="2024-07-02T07:43:30.106432415Z" level=info msg="cleaning up dead shim" Jul 2 07:43:30.112944 env[1189]: time="2024-07-02T07:43:30.112571033Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2484 runtime=io.containerd.runc.v2\n" Jul 2 07:43:30.113132 kubelet[2010]: E0702 07:43:30.113109 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:43:30.342251 systemd[1]: Started sshd@7-10.0.0.17:22-10.0.0.1:46784.service. Jul 2 07:43:30.383985 sshd[2497]: Accepted publickey for core from 10.0.0.1 port 46784 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:43:30.385407 sshd[2497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:43:30.388624 systemd-logind[1178]: New session 8 of user core. 
Jul 2 07:43:30.389414 systemd[1]: Started session-8.scope.
Jul 2 07:43:30.491232 sshd[2497]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:30.493222 systemd[1]: sshd@7-10.0.0.17:22-10.0.0.1:46784.service: Deactivated successfully.
Jul 2 07:43:30.493870 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 07:43:30.494305 systemd-logind[1178]: Session 8 logged out. Waiting for processes to exit.
Jul 2 07:43:30.495016 systemd-logind[1178]: Removed session 8.
Jul 2 07:43:30.693719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663-rootfs.mount: Deactivated successfully.
Jul 2 07:43:31.114593 kubelet[2010]: E0702 07:43:31.114541 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:31.116627 env[1189]: time="2024-07-02T07:43:31.116544246Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 07:43:31.132090 env[1189]: time="2024-07-02T07:43:31.132006954Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\""
Jul 2 07:43:31.132645 env[1189]: time="2024-07-02T07:43:31.132611371Z" level=info msg="StartContainer for \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\""
Jul 2 07:43:31.149377 systemd[1]: Started cri-containerd-443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5.scope.
Jul 2 07:43:31.170131 env[1189]: time="2024-07-02T07:43:31.170064614Z" level=info msg="StartContainer for \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\" returns successfully"
Jul 2 07:43:31.179336 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 07:43:31.179526 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 07:43:31.179672 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 07:43:31.180939 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:43:31.182930 systemd[1]: cri-containerd-443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5.scope: Deactivated successfully.
Jul 2 07:43:31.190335 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:43:31.209399 env[1189]: time="2024-07-02T07:43:31.209339135Z" level=info msg="shim disconnected" id=443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5
Jul 2 07:43:31.209399 env[1189]: time="2024-07-02T07:43:31.209395241Z" level=warning msg="cleaning up after shim disconnected" id=443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5 namespace=k8s.io
Jul 2 07:43:31.209399 env[1189]: time="2024-07-02T07:43:31.209404028Z" level=info msg="cleaning up dead shim"
Jul 2 07:43:31.215608 env[1189]: time="2024-07-02T07:43:31.215569774Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:43:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2562 runtime=io.containerd.runc.v2\n"
Jul 2 07:43:31.693409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5-rootfs.mount: Deactivated successfully.
Jul 2 07:43:32.118579 kubelet[2010]: E0702 07:43:32.118518 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:32.121577 env[1189]: time="2024-07-02T07:43:32.121530695Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:43:32.138008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415056155.mount: Deactivated successfully.
Jul 2 07:43:32.141082 env[1189]: time="2024-07-02T07:43:32.141029821Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\""
Jul 2 07:43:32.141517 env[1189]: time="2024-07-02T07:43:32.141487112Z" level=info msg="StartContainer for \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\""
Jul 2 07:43:32.156273 systemd[1]: Started cri-containerd-526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a.scope.
Jul 2 07:43:32.180266 systemd[1]: cri-containerd-526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a.scope: Deactivated successfully.
Jul 2 07:43:32.181206 env[1189]: time="2024-07-02T07:43:32.181156879Z" level=info msg="StartContainer for \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\" returns successfully"
Jul 2 07:43:32.200896 env[1189]: time="2024-07-02T07:43:32.200846205Z" level=info msg="shim disconnected" id=526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a
Jul 2 07:43:32.200896 env[1189]: time="2024-07-02T07:43:32.200888955Z" level=warning msg="cleaning up after shim disconnected" id=526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a namespace=k8s.io
Jul 2 07:43:32.200896 env[1189]: time="2024-07-02T07:43:32.200896679Z" level=info msg="cleaning up dead shim"
Jul 2 07:43:32.207010 env[1189]: time="2024-07-02T07:43:32.206956244Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:43:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2618 runtime=io.containerd.runc.v2\n"
Jul 2 07:43:32.693357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a-rootfs.mount: Deactivated successfully.
Jul 2 07:43:33.121738 kubelet[2010]: E0702 07:43:33.121710 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:33.126628 env[1189]: time="2024-07-02T07:43:33.126567375Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:43:33.140388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979512872.mount: Deactivated successfully.
Jul 2 07:43:33.141576 env[1189]: time="2024-07-02T07:43:33.141518286Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\""
Jul 2 07:43:33.145783 env[1189]: time="2024-07-02T07:43:33.145747223Z" level=info msg="StartContainer for \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\""
Jul 2 07:43:33.163790 systemd[1]: Started cri-containerd-96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c.scope.
Jul 2 07:43:33.184555 systemd[1]: cri-containerd-96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c.scope: Deactivated successfully.
Jul 2 07:43:33.185771 env[1189]: time="2024-07-02T07:43:33.185719113Z" level=info msg="StartContainer for \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\" returns successfully"
Jul 2 07:43:33.204444 env[1189]: time="2024-07-02T07:43:33.204383481Z" level=info msg="shim disconnected" id=96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c
Jul 2 07:43:33.204444 env[1189]: time="2024-07-02T07:43:33.204433835Z" level=warning msg="cleaning up after shim disconnected" id=96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c namespace=k8s.io
Jul 2 07:43:33.204444 env[1189]: time="2024-07-02T07:43:33.204442703Z" level=info msg="cleaning up dead shim"
Jul 2 07:43:33.212485 env[1189]: time="2024-07-02T07:43:33.212415654Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:43:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2674 runtime=io.containerd.runc.v2\n"
Jul 2 07:43:33.693466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c-rootfs.mount: Deactivated successfully.
Jul 2 07:43:34.125156 kubelet[2010]: E0702 07:43:34.125123 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:34.126898 env[1189]: time="2024-07-02T07:43:34.126857211Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:43:34.141932 env[1189]: time="2024-07-02T07:43:34.141879030Z" level=info msg="CreateContainer within sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\""
Jul 2 07:43:34.142492 env[1189]: time="2024-07-02T07:43:34.142425498Z" level=info msg="StartContainer for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\""
Jul 2 07:43:34.158907 systemd[1]: Started cri-containerd-b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e.scope.
Jul 2 07:43:34.184876 env[1189]: time="2024-07-02T07:43:34.184810642Z" level=info msg="StartContainer for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" returns successfully"
Jul 2 07:43:34.256882 kubelet[2010]: I0702 07:43:34.256840 2010 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 07:43:34.443644 kubelet[2010]: I0702 07:43:34.443524 2010 topology_manager.go:215] "Topology Admit Handler" podUID="b8cbc8ce-02d0-4488-97a1-6179a051c2d4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h96mh"
Jul 2 07:43:34.444377 kubelet[2010]: I0702 07:43:34.444335 2010 topology_manager.go:215] "Topology Admit Handler" podUID="d7c1c529-8661-4fe2-963e-475623086688" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m5cts"
Jul 2 07:43:34.449364 systemd[1]: Created slice kubepods-burstable-podb8cbc8ce_02d0_4488_97a1_6179a051c2d4.slice.
Jul 2 07:43:34.453105 systemd[1]: Created slice kubepods-burstable-podd7c1c529_8661_4fe2_963e_475623086688.slice.
Jul 2 07:43:34.465757 kubelet[2010]: I0702 07:43:34.465712 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8cbc8ce-02d0-4488-97a1-6179a051c2d4-config-volume\") pod \"coredns-7db6d8ff4d-h96mh\" (UID: \"b8cbc8ce-02d0-4488-97a1-6179a051c2d4\") " pod="kube-system/coredns-7db6d8ff4d-h96mh"
Jul 2 07:43:34.465757 kubelet[2010]: I0702 07:43:34.465750 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7c1c529-8661-4fe2-963e-475623086688-config-volume\") pod \"coredns-7db6d8ff4d-m5cts\" (UID: \"d7c1c529-8661-4fe2-963e-475623086688\") " pod="kube-system/coredns-7db6d8ff4d-m5cts"
Jul 2 07:43:34.465757 kubelet[2010]: I0702 07:43:34.465773 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9dj7\" (UniqueName: \"kubernetes.io/projected/d7c1c529-8661-4fe2-963e-475623086688-kube-api-access-r9dj7\") pod \"coredns-7db6d8ff4d-m5cts\" (UID: \"d7c1c529-8661-4fe2-963e-475623086688\") " pod="kube-system/coredns-7db6d8ff4d-m5cts"
Jul 2 07:43:34.466027 kubelet[2010]: I0702 07:43:34.465797 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmgpj\" (UniqueName: \"kubernetes.io/projected/b8cbc8ce-02d0-4488-97a1-6179a051c2d4-kube-api-access-rmgpj\") pod \"coredns-7db6d8ff4d-h96mh\" (UID: \"b8cbc8ce-02d0-4488-97a1-6179a051c2d4\") " pod="kube-system/coredns-7db6d8ff4d-h96mh"
Jul 2 07:43:34.752451 kubelet[2010]: E0702 07:43:34.752353 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:34.754976 env[1189]: time="2024-07-02T07:43:34.754891938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h96mh,Uid:b8cbc8ce-02d0-4488-97a1-6179a051c2d4,Namespace:kube-system,Attempt:0,}"
Jul 2 07:43:34.756857 kubelet[2010]: E0702 07:43:34.756840 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:34.757366 env[1189]: time="2024-07-02T07:43:34.757330175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5cts,Uid:d7c1c529-8661-4fe2-963e-475623086688,Namespace:kube-system,Attempt:0,}"
Jul 2 07:43:35.128542 kubelet[2010]: E0702 07:43:35.128507 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:35.138763 kubelet[2010]: I0702 07:43:35.138522 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q4ntb" podStartSLOduration=6.416416525 podStartE2EDuration="18.13850826s" podCreationTimestamp="2024-07-02 07:43:17 +0000 UTC" firstStartedPulling="2024-07-02 07:43:17.961414994 +0000 UTC m=+14.974512885" lastFinishedPulling="2024-07-02 07:43:29.683506719 +0000 UTC m=+26.696604620" observedRunningTime="2024-07-02 07:43:35.138246607 +0000 UTC m=+32.151344508" watchObservedRunningTime="2024-07-02 07:43:35.13850826 +0000 UTC m=+32.151606161"
Jul 2 07:43:35.495085 systemd[1]: Started sshd@8-10.0.0.17:22-10.0.0.1:53354.service.
Jul 2 07:43:35.535192 sshd[2851]: Accepted publickey for core from 10.0.0.1 port 53354 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:35.536253 sshd[2851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:35.539479 systemd-logind[1178]: New session 9 of user core.
Jul 2 07:43:35.540272 systemd[1]: Started session-9.scope.
Jul 2 07:43:35.638039 sshd[2851]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:35.639898 systemd[1]: sshd@8-10.0.0.17:22-10.0.0.1:53354.service: Deactivated successfully.
Jul 2 07:43:35.640522 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 07:43:35.641048 systemd-logind[1178]: Session 9 logged out. Waiting for processes to exit.
Jul 2 07:43:35.641668 systemd-logind[1178]: Removed session 9.
Jul 2 07:43:36.115828 systemd-networkd[1016]: cilium_host: Link UP
Jul 2 07:43:36.115953 systemd-networkd[1016]: cilium_net: Link UP
Jul 2 07:43:36.118425 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 07:43:36.118481 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 07:43:36.118553 systemd-networkd[1016]: cilium_net: Gained carrier
Jul 2 07:43:36.118707 systemd-networkd[1016]: cilium_host: Gained carrier
Jul 2 07:43:36.118805 systemd-networkd[1016]: cilium_net: Gained IPv6LL
Jul 2 07:43:36.118945 systemd-networkd[1016]: cilium_host: Gained IPv6LL
Jul 2 07:43:36.132821 kubelet[2010]: E0702 07:43:36.132797 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:36.196454 systemd-networkd[1016]: cilium_vxlan: Link UP
Jul 2 07:43:36.196463 systemd-networkd[1016]: cilium_vxlan: Gained carrier
Jul 2 07:43:36.376104 kernel: NET: Registered PF_ALG protocol family
Jul 2 07:43:36.911419 systemd-networkd[1016]: lxc_health: Link UP
Jul 2 07:43:36.922097 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:43:36.922144 systemd-networkd[1016]: lxc_health: Gained carrier
Jul 2 07:43:37.133297 kubelet[2010]: E0702 07:43:37.133264 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:37.284284 systemd-networkd[1016]: cilium_vxlan: Gained IPv6LL
Jul 2 07:43:37.294374 systemd-networkd[1016]: lxc7a96f6d501bf: Link UP
Jul 2 07:43:37.301096 kernel: eth0: renamed from tmp4d1e2
Jul 2 07:43:37.314919 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 07:43:37.315046 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7a96f6d501bf: link becomes ready
Jul 2 07:43:37.315103 systemd-networkd[1016]: lxc7a96f6d501bf: Gained carrier
Jul 2 07:43:37.316986 systemd-networkd[1016]: lxc4a8ea548dd41: Link UP
Jul 2 07:43:37.326179 kernel: eth0: renamed from tmp08f36
Jul 2 07:43:37.334341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4a8ea548dd41: link becomes ready
Jul 2 07:43:37.334215 systemd-networkd[1016]: lxc4a8ea548dd41: Gained carrier
Jul 2 07:43:38.134899 kubelet[2010]: E0702 07:43:38.134865 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:38.308442 systemd-networkd[1016]: lxc_health: Gained IPv6LL
Jul 2 07:43:38.362178 systemd-networkd[1016]: lxc7a96f6d501bf: Gained IPv6LL
Jul 2 07:43:38.746245 systemd-networkd[1016]: lxc4a8ea548dd41: Gained IPv6LL
Jul 2 07:43:40.642162 systemd[1]: Started sshd@9-10.0.0.17:22-10.0.0.1:53358.service.
Jul 2 07:43:40.708317 sshd[3249]: Accepted publickey for core from 10.0.0.1 port 53358 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:40.709702 sshd[3249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:40.714062 systemd[1]: Started session-10.scope.
Jul 2 07:43:40.714385 systemd-logind[1178]: New session 10 of user core.
Jul 2 07:43:40.718958 env[1189]: time="2024-07-02T07:43:40.718888216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:43:40.719243 env[1189]: time="2024-07-02T07:43:40.718962635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:43:40.719243 env[1189]: time="2024-07-02T07:43:40.718978936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:43:40.720099 env[1189]: time="2024-07-02T07:43:40.719902863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d1e259e32b5d6074a83d631b88ef82d61d7a821afb0a57422e2541ea81fe691 pid=3259 runtime=io.containerd.runc.v2
Jul 2 07:43:40.733538 systemd[1]: Started cri-containerd-4d1e259e32b5d6074a83d631b88ef82d61d7a821afb0a57422e2541ea81fe691.scope.
Jul 2 07:43:40.743901 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 07:43:40.765318 env[1189]: time="2024-07-02T07:43:40.764946123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h96mh,Uid:b8cbc8ce-02d0-4488-97a1-6179a051c2d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1e259e32b5d6074a83d631b88ef82d61d7a821afb0a57422e2541ea81fe691\""
Jul 2 07:43:40.766448 kubelet[2010]: E0702 07:43:40.765552 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:40.768134 env[1189]: time="2024-07-02T07:43:40.768101904Z" level=info msg="CreateContainer within sandbox \"4d1e259e32b5d6074a83d631b88ef82d61d7a821afb0a57422e2541ea81fe691\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:43:40.774535 env[1189]: time="2024-07-02T07:43:40.774467967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:43:40.774592 env[1189]: time="2024-07-02T07:43:40.774540273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:43:40.774592 env[1189]: time="2024-07-02T07:43:40.774557115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:43:40.774876 env[1189]: time="2024-07-02T07:43:40.774789471Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08f3690660ae989f8d33810d59a75d1a4f65dcc6cc9a4899cc09761a7c937bfe pid=3308 runtime=io.containerd.runc.v2
Jul 2 07:43:40.786378 systemd[1]: Started cri-containerd-08f3690660ae989f8d33810d59a75d1a4f65dcc6cc9a4899cc09761a7c937bfe.scope.
Jul 2 07:43:40.804915 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 07:43:40.826847 env[1189]: time="2024-07-02T07:43:40.826785298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5cts,Uid:d7c1c529-8661-4fe2-963e-475623086688,Namespace:kube-system,Attempt:0,} returns sandbox id \"08f3690660ae989f8d33810d59a75d1a4f65dcc6cc9a4899cc09761a7c937bfe\""
Jul 2 07:43:40.827576 kubelet[2010]: E0702 07:43:40.827545 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:40.829302 env[1189]: time="2024-07-02T07:43:40.829269466Z" level=info msg="CreateContainer within sandbox \"08f3690660ae989f8d33810d59a75d1a4f65dcc6cc9a4899cc09761a7c937bfe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:43:40.875014 sshd[3249]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:40.877816 systemd[1]: sshd@9-10.0.0.17:22-10.0.0.1:53358.service: Deactivated successfully.
Jul 2 07:43:40.878550 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 07:43:40.879157 systemd-logind[1178]: Session 10 logged out. Waiting for processes to exit.
Jul 2 07:43:40.880035 systemd-logind[1178]: Removed session 10.
Jul 2 07:43:41.329913 env[1189]: time="2024-07-02T07:43:41.329854851Z" level=info msg="CreateContainer within sandbox \"4d1e259e32b5d6074a83d631b88ef82d61d7a821afb0a57422e2541ea81fe691\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16b97d6cffcbb4775394f99ed086319884dac541eb770b5773329eadd6870d82\""
Jul 2 07:43:41.330326 env[1189]: time="2024-07-02T07:43:41.330300549Z" level=info msg="StartContainer for \"16b97d6cffcbb4775394f99ed086319884dac541eb770b5773329eadd6870d82\""
Jul 2 07:43:41.341902 systemd[1]: Started cri-containerd-16b97d6cffcbb4775394f99ed086319884dac541eb770b5773329eadd6870d82.scope.
Jul 2 07:43:41.440493 env[1189]: time="2024-07-02T07:43:41.440430139Z" level=info msg="CreateContainer within sandbox \"08f3690660ae989f8d33810d59a75d1a4f65dcc6cc9a4899cc09761a7c937bfe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6178fc6c4652be2c8ba3f50d8c29a8f37da9eee96a0fb7ef72ca4a955348d5be\""
Jul 2 07:43:41.451549 env[1189]: time="2024-07-02T07:43:41.440878571Z" level=info msg="StartContainer for \"6178fc6c4652be2c8ba3f50d8c29a8f37da9eee96a0fb7ef72ca4a955348d5be\""
Jul 2 07:43:41.473451 systemd[1]: Started cri-containerd-6178fc6c4652be2c8ba3f50d8c29a8f37da9eee96a0fb7ef72ca4a955348d5be.scope.
Jul 2 07:43:41.483169 env[1189]: time="2024-07-02T07:43:41.483125486Z" level=info msg="StartContainer for \"16b97d6cffcbb4775394f99ed086319884dac541eb770b5773329eadd6870d82\" returns successfully"
Jul 2 07:43:41.542661 env[1189]: time="2024-07-02T07:43:41.542596536Z" level=info msg="StartContainer for \"6178fc6c4652be2c8ba3f50d8c29a8f37da9eee96a0fb7ef72ca4a955348d5be\" returns successfully"
Jul 2 07:43:42.144159 kubelet[2010]: E0702 07:43:42.144081 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:42.145491 kubelet[2010]: E0702 07:43:42.145431 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:42.151971 kubelet[2010]: I0702 07:43:42.151898 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m5cts" podStartSLOduration=25.151881928 podStartE2EDuration="25.151881928s" podCreationTimestamp="2024-07-02 07:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:43:42.151448985 +0000 UTC m=+39.164546876" watchObservedRunningTime="2024-07-02 07:43:42.151881928 +0000 UTC m=+39.164979839"
Jul 2 07:43:42.158154 kubelet[2010]: I0702 07:43:42.158100 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h96mh" podStartSLOduration=25.158080284 podStartE2EDuration="25.158080284s" podCreationTimestamp="2024-07-02 07:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:43:42.157506716 +0000 UTC m=+39.170604617" watchObservedRunningTime="2024-07-02 07:43:42.158080284 +0000 UTC m=+39.171178195"
Jul 2 07:43:43.147235 kubelet[2010]: E0702 07:43:43.147201 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:43.147657 kubelet[2010]: E0702 07:43:43.147335 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:44.148992 kubelet[2010]: E0702 07:43:44.148964 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:44.149445 kubelet[2010]: E0702 07:43:44.149266 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:45.879224 systemd[1]: Started sshd@10-10.0.0.17:22-10.0.0.1:41154.service.
Jul 2 07:43:45.919433 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 41154 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:45.920652 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:45.924003 systemd-logind[1178]: New session 11 of user core.
Jul 2 07:43:45.924794 systemd[1]: Started session-11.scope.
Jul 2 07:43:46.037913 sshd[3433]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:46.041566 systemd[1]: sshd@10-10.0.0.17:22-10.0.0.1:41154.service: Deactivated successfully.
Jul 2 07:43:46.042322 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 07:43:46.043246 systemd-logind[1178]: Session 11 logged out. Waiting for processes to exit.
Jul 2 07:43:46.044421 systemd[1]: Started sshd@11-10.0.0.17:22-10.0.0.1:41156.service.
Jul 2 07:43:46.045192 systemd-logind[1178]: Removed session 11.
Jul 2 07:43:46.088992 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 41156 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:46.090515 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:46.093740 systemd-logind[1178]: New session 12 of user core.
Jul 2 07:43:46.094445 systemd[1]: Started session-12.scope.
Jul 2 07:43:46.247181 sshd[3448]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:46.249452 systemd[1]: sshd@11-10.0.0.17:22-10.0.0.1:41156.service: Deactivated successfully.
Jul 2 07:43:46.249987 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 07:43:46.251358 systemd-logind[1178]: Session 12 logged out. Waiting for processes to exit.
Jul 2 07:43:46.252294 systemd[1]: Started sshd@12-10.0.0.17:22-10.0.0.1:41170.service.
Jul 2 07:43:46.254026 systemd-logind[1178]: Removed session 12.
Jul 2 07:43:46.295016 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 41170 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:46.296112 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:46.301119 systemd[1]: Started session-13.scope.
Jul 2 07:43:46.302683 systemd-logind[1178]: New session 13 of user core.
Jul 2 07:43:46.407511 sshd[3459]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:46.410019 systemd[1]: sshd@12-10.0.0.17:22-10.0.0.1:41170.service: Deactivated successfully.
Jul 2 07:43:46.410702 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 07:43:46.411552 systemd-logind[1178]: Session 13 logged out. Waiting for processes to exit.
Jul 2 07:43:46.412352 systemd-logind[1178]: Removed session 13.
Jul 2 07:43:47.510854 kubelet[2010]: I0702 07:43:47.510823 2010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 07:43:47.511637 kubelet[2010]: E0702 07:43:47.511621 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:48.156853 kubelet[2010]: E0702 07:43:48.156814 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:43:51.411495 systemd[1]: Started sshd@13-10.0.0.17:22-10.0.0.1:41176.service.
Jul 2 07:43:51.450643 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 41176 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:51.451701 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:51.454932 systemd-logind[1178]: New session 14 of user core.
Jul 2 07:43:51.455667 systemd[1]: Started session-14.scope.
Jul 2 07:43:51.590083 sshd[3476]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:51.591972 systemd[1]: sshd@13-10.0.0.17:22-10.0.0.1:41176.service: Deactivated successfully.
Jul 2 07:43:51.592731 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 07:43:51.593291 systemd-logind[1178]: Session 14 logged out. Waiting for processes to exit.
Jul 2 07:43:51.593934 systemd-logind[1178]: Removed session 14.
Jul 2 07:43:56.594595 systemd[1]: Started sshd@14-10.0.0.17:22-10.0.0.1:41510.service.
Jul 2 07:43:56.634199 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 41510 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:43:56.635213 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:43:56.638473 systemd-logind[1178]: New session 15 of user core.
Jul 2 07:43:56.639411 systemd[1]: Started session-15.scope.
Jul 2 07:43:56.742395 sshd[3490]: pam_unix(sshd:session): session closed for user core
Jul 2 07:43:56.744562 systemd[1]: sshd@14-10.0.0.17:22-10.0.0.1:41510.service: Deactivated successfully.
Jul 2 07:43:56.745324 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 07:43:56.745889 systemd-logind[1178]: Session 15 logged out. Waiting for processes to exit.
Jul 2 07:43:56.746559 systemd-logind[1178]: Removed session 15.
Jul 2 07:44:01.746157 systemd[1]: Started sshd@15-10.0.0.17:22-10.0.0.1:41512.service.
Jul 2 07:44:01.789587 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 41512 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:01.790745 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:01.793949 systemd-logind[1178]: New session 16 of user core.
Jul 2 07:44:01.794907 systemd[1]: Started session-16.scope.
Jul 2 07:44:01.902277 sshd[3504]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:01.904934 systemd[1]: sshd@15-10.0.0.17:22-10.0.0.1:41512.service: Deactivated successfully.
Jul 2 07:44:01.905560 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 07:44:01.907113 systemd[1]: Started sshd@16-10.0.0.17:22-10.0.0.1:41520.service.
Jul 2 07:44:01.908233 systemd-logind[1178]: Session 16 logged out. Waiting for processes to exit.
Jul 2 07:44:01.909221 systemd-logind[1178]: Removed session 16.
Jul 2 07:44:01.946619 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 41520 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:01.947741 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:01.951200 systemd-logind[1178]: New session 17 of user core.
Jul 2 07:44:01.952147 systemd[1]: Started session-17.scope.
Jul 2 07:44:02.114158 sshd[3517]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:02.117009 systemd[1]: sshd@16-10.0.0.17:22-10.0.0.1:41520.service: Deactivated successfully.
Jul 2 07:44:02.117559 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 07:44:02.118108 systemd-logind[1178]: Session 17 logged out. Waiting for processes to exit.
Jul 2 07:44:02.119192 systemd[1]: Started sshd@17-10.0.0.17:22-10.0.0.1:41536.service.
Jul 2 07:44:02.120630 systemd-logind[1178]: Removed session 17.
Jul 2 07:44:02.160590 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 41536 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:02.161667 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:02.165371 systemd-logind[1178]: New session 18 of user core.
Jul 2 07:44:02.166344 systemd[1]: Started session-18.scope.
Jul 2 07:44:03.531626 sshd[3528]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:03.534151 systemd[1]: Started sshd@18-10.0.0.17:22-10.0.0.1:43782.service.
Jul 2 07:44:03.535333 systemd[1]: sshd@17-10.0.0.17:22-10.0.0.1:41536.service: Deactivated successfully.
Jul 2 07:44:03.535843 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 07:44:03.536452 systemd-logind[1178]: Session 18 logged out. Waiting for processes to exit.
Jul 2 07:44:03.537415 systemd-logind[1178]: Removed session 18.
Jul 2 07:44:03.577565 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 43782 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:03.578612 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:03.581593 systemd-logind[1178]: New session 19 of user core.
Jul 2 07:44:03.582418 systemd[1]: Started session-19.scope.
Jul 2 07:44:03.781011 sshd[3550]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:03.785596 systemd[1]: Started sshd@19-10.0.0.17:22-10.0.0.1:43788.service.
Jul 2 07:44:03.786046 systemd[1]: sshd@18-10.0.0.17:22-10.0.0.1:43782.service: Deactivated successfully.
Jul 2 07:44:03.786638 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 07:44:03.787968 systemd-logind[1178]: Session 19 logged out. Waiting for processes to exit.
Jul 2 07:44:03.788808 systemd-logind[1178]: Removed session 19.
Jul 2 07:44:03.824559 sshd[3561]: Accepted publickey for core from 10.0.0.1 port 43788 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:03.825882 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:03.829400 systemd-logind[1178]: New session 20 of user core.
Jul 2 07:44:03.830459 systemd[1]: Started session-20.scope.
Jul 2 07:44:03.929909 sshd[3561]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:03.932180 systemd[1]: sshd@19-10.0.0.17:22-10.0.0.1:43788.service: Deactivated successfully.
Jul 2 07:44:03.933006 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 07:44:03.933547 systemd-logind[1178]: Session 20 logged out. Waiting for processes to exit.
Jul 2 07:44:03.934181 systemd-logind[1178]: Removed session 20.
Jul 2 07:44:08.934480 systemd[1]: Started sshd@20-10.0.0.17:22-10.0.0.1:43794.service.
Jul 2 07:44:08.972890 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 43794 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:08.973826 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:08.977117 systemd-logind[1178]: New session 21 of user core.
Jul 2 07:44:08.977881 systemd[1]: Started session-21.scope.
Jul 2 07:44:09.076308 sshd[3575]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:09.078313 systemd[1]: sshd@20-10.0.0.17:22-10.0.0.1:43794.service: Deactivated successfully.
Jul 2 07:44:09.078998 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 07:44:09.079497 systemd-logind[1178]: Session 21 logged out. Waiting for processes to exit.
Jul 2 07:44:09.080116 systemd-logind[1178]: Removed session 21.
Jul 2 07:44:14.080206 systemd[1]: Started sshd@21-10.0.0.17:22-10.0.0.1:55126.service.
Jul 2 07:44:14.119002 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 55126 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:14.120201 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:14.123488 systemd-logind[1178]: New session 22 of user core.
Jul 2 07:44:14.124217 systemd[1]: Started session-22.scope.
Jul 2 07:44:14.227875 sshd[3592]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:14.230136 systemd[1]: sshd@21-10.0.0.17:22-10.0.0.1:55126.service: Deactivated successfully.
Jul 2 07:44:14.230879 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 07:44:14.231636 systemd-logind[1178]: Session 22 logged out. Waiting for processes to exit.
Jul 2 07:44:14.232369 systemd-logind[1178]: Removed session 22.
Jul 2 07:44:15.058601 kubelet[2010]: E0702 07:44:15.058554 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:19.232841 systemd[1]: Started sshd@22-10.0.0.17:22-10.0.0.1:55138.service.
Jul 2 07:44:19.270951 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 55138 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:19.271750 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:19.274651 systemd-logind[1178]: New session 23 of user core.
Jul 2 07:44:19.275385 systemd[1]: Started session-23.scope.
Jul 2 07:44:19.368940 sshd[3608]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:19.371348 systemd[1]: sshd@22-10.0.0.17:22-10.0.0.1:55138.service: Deactivated successfully.
Jul 2 07:44:19.371973 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 07:44:19.372422 systemd-logind[1178]: Session 23 logged out. Waiting for processes to exit.
Jul 2 07:44:19.373108 systemd-logind[1178]: Removed session 23.
Jul 2 07:44:24.373655 systemd[1]: Started sshd@23-10.0.0.17:22-10.0.0.1:47522.service.
Jul 2 07:44:24.414518 sshd[3621]: Accepted publickey for core from 10.0.0.1 port 47522 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:24.415593 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:24.418964 systemd-logind[1178]: New session 24 of user core.
Jul 2 07:44:24.419738 systemd[1]: Started session-24.scope.
Jul 2 07:44:24.517013 sshd[3621]: pam_unix(sshd:session): session closed for user core
Jul 2 07:44:24.519473 systemd[1]: sshd@23-10.0.0.17:22-10.0.0.1:47522.service: Deactivated successfully.
Jul 2 07:44:24.520045 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 07:44:24.520795 systemd-logind[1178]: Session 24 logged out. Waiting for processes to exit.
Jul 2 07:44:24.521484 systemd[1]: Started sshd@24-10.0.0.17:22-10.0.0.1:47536.service.
Jul 2 07:44:24.522339 systemd-logind[1178]: Removed session 24.
Jul 2 07:44:24.561395 sshd[3634]: Accepted publickey for core from 10.0.0.1 port 47536 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo
Jul 2 07:44:24.562413 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:44:24.565707 systemd-logind[1178]: New session 25 of user core.
Jul 2 07:44:24.566550 systemd[1]: Started session-25.scope.
Jul 2 07:44:25.058904 kubelet[2010]: E0702 07:44:25.058856 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:44:25.891729 env[1189]: time="2024-07-02T07:44:25.891647682Z" level=info msg="StopContainer for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" with timeout 30 (s)"
Jul 2 07:44:25.892143 env[1189]: time="2024-07-02T07:44:25.892108382Z" level=info msg="Stop container \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" with signal terminated"
Jul 2 07:44:25.907264 systemd[1]: cri-containerd-766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2.scope: Deactivated successfully.
Jul 2 07:44:25.922499 env[1189]: time="2024-07-02T07:44:25.922433655Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 07:44:25.928263 env[1189]: time="2024-07-02T07:44:25.928206354Z" level=info msg="StopContainer for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" with timeout 2 (s)"
Jul 2 07:44:25.928536 env[1189]: time="2024-07-02T07:44:25.928504042Z" level=info msg="Stop container \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" with signal terminated"
Jul 2 07:44:25.936217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2-rootfs.mount: Deactivated successfully.
Jul 2 07:44:25.937269 systemd-networkd[1016]: lxc_health: Link DOWN
Jul 2 07:44:25.937278 systemd-networkd[1016]: lxc_health: Lost carrier
Jul 2 07:44:25.972765 systemd[1]: cri-containerd-b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e.scope: Deactivated successfully.
Jul 2 07:44:25.973039 systemd[1]: cri-containerd-b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e.scope: Consumed 6.201s CPU time.
Jul 2 07:44:25.987519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e-rootfs.mount: Deactivated successfully.
Jul 2 07:44:26.156739 env[1189]: time="2024-07-02T07:44:26.156578251Z" level=info msg="shim disconnected" id=766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2
Jul 2 07:44:26.156739 env[1189]: time="2024-07-02T07:44:26.156624079Z" level=warning msg="cleaning up after shim disconnected" id=766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2 namespace=k8s.io
Jul 2 07:44:26.156739 env[1189]: time="2024-07-02T07:44:26.156632835Z" level=info msg="cleaning up dead shim"
Jul 2 07:44:26.163497 env[1189]: time="2024-07-02T07:44:26.163453549Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\n"
Jul 2 07:44:26.163864 env[1189]: time="2024-07-02T07:44:26.163810891Z" level=info msg="shim disconnected" id=b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e
Jul 2 07:44:26.163864 env[1189]: time="2024-07-02T07:44:26.163863011Z" level=warning msg="cleaning up after shim disconnected" id=b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e namespace=k8s.io
Jul 2 07:44:26.163864 env[1189]: time="2024-07-02T07:44:26.163872159Z" level=info msg="cleaning up dead shim"
Jul 2 07:44:26.169844 env[1189]: time="2024-07-02T07:44:26.169789738Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3712 runtime=io.containerd.runc.v2\n"
Jul 2 07:44:26.172520 env[1189]: time="2024-07-02T07:44:26.172474905Z" level=info msg="StopContainer for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" returns successfully"
Jul 2 07:44:26.173112 env[1189]: time="2024-07-02T07:44:26.173087265Z" level=info msg="StopPodSandbox for \"17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7\""
Jul 2 07:44:26.173170 env[1189]: time="2024-07-02T07:44:26.173147089Z" level=info msg="Container to stop \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:44:26.174889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7-shm.mount: Deactivated successfully.
Jul 2 07:44:26.175785 env[1189]: time="2024-07-02T07:44:26.175748066Z" level=info msg="StopContainer for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" returns successfully"
Jul 2 07:44:26.176157 env[1189]: time="2024-07-02T07:44:26.176138130Z" level=info msg="StopPodSandbox for \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\""
Jul 2 07:44:26.176273 env[1189]: time="2024-07-02T07:44:26.176243171Z" level=info msg="Container to stop \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:44:26.176273 env[1189]: time="2024-07-02T07:44:26.176266415Z" level=info msg="Container to stop \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:44:26.176273 env[1189]: time="2024-07-02T07:44:26.176276835Z" level=info msg="Container to stop \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:44:26.176447 env[1189]: time="2024-07-02T07:44:26.176287014Z" level=info msg="Container to stop \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:44:26.176447 env[1189]: time="2024-07-02T07:44:26.176297464Z" level=info msg="Container to stop \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:44:26.177852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a-shm.mount: Deactivated successfully.
Jul 2 07:44:26.180133 systemd[1]: cri-containerd-17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7.scope: Deactivated successfully.
Jul 2 07:44:26.184405 systemd[1]: cri-containerd-81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a.scope: Deactivated successfully.
Jul 2 07:44:26.201616 env[1189]: time="2024-07-02T07:44:26.201552939Z" level=info msg="shim disconnected" id=17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7
Jul 2 07:44:26.201867 env[1189]: time="2024-07-02T07:44:26.201842282Z" level=warning msg="cleaning up after shim disconnected" id=17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7 namespace=k8s.io
Jul 2 07:44:26.201961 env[1189]: time="2024-07-02T07:44:26.201936552Z" level=info msg="cleaning up dead shim"
Jul 2 07:44:26.202110 env[1189]: time="2024-07-02T07:44:26.201935780Z" level=info msg="shim disconnected" id=81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a
Jul 2 07:44:26.202184 env[1189]: time="2024-07-02T07:44:26.202116706Z" level=warning msg="cleaning up after shim disconnected" id=81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a namespace=k8s.io
Jul 2 07:44:26.202184 env[1189]: time="2024-07-02T07:44:26.202130482Z" level=info msg="cleaning up dead shim"
Jul 2 07:44:26.208862 env[1189]: time="2024-07-02T07:44:26.208810357Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3761 runtime=io.containerd.runc.v2\n"
Jul 2 07:44:26.209250 env[1189]: time="2024-07-02T07:44:26.209210441Z" level=info msg="TearDown network for sandbox \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" successfully"
Jul 2 07:44:26.209395 env[1189]: time="2024-07-02T07:44:26.209307536Z" level=info msg="StopPodSandbox for \"81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a\" returns successfully"
Jul 2 07:44:26.209828 env[1189]: time="2024-07-02T07:44:26.209770831Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3760 runtime=io.containerd.runc.v2\n"
Jul 2 07:44:26.210150 env[1189]: time="2024-07-02T07:44:26.210093668Z" level=info msg="TearDown network for sandbox \"17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7\" successfully"
Jul 2 07:44:26.210150 env[1189]: time="2024-07-02T07:44:26.210118245Z" level=info msg="StopPodSandbox for \"17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7\" returns successfully"
Jul 2 07:44:26.217965 kubelet[2010]: I0702 07:44:26.217937 2010 scope.go:117] "RemoveContainer" containerID="b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e"
Jul 2 07:44:26.219029 env[1189]: time="2024-07-02T07:44:26.218998362Z" level=info msg="RemoveContainer for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\""
Jul 2 07:44:26.224670 env[1189]: time="2024-07-02T07:44:26.224572014Z" level=info msg="RemoveContainer for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" returns successfully"
Jul 2 07:44:26.225035 kubelet[2010]: I0702 07:44:26.225013 2010 scope.go:117] "RemoveContainer" containerID="96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c"
Jul 2 07:44:26.227827 env[1189]: time="2024-07-02T07:44:26.227785189Z" level=info msg="RemoveContainer for \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\""
Jul 2 07:44:26.230854 env[1189]: time="2024-07-02T07:44:26.230827960Z" level=info msg="RemoveContainer for \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\" returns successfully"
Jul 2 07:44:26.231039 kubelet[2010]: I0702 07:44:26.231002 2010 scope.go:117] "RemoveContainer" containerID="526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a"
Jul 2 07:44:26.231961 env[1189]: time="2024-07-02T07:44:26.231935745Z" level=info msg="RemoveContainer for \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\""
Jul 2 07:44:26.236192 env[1189]: time="2024-07-02T07:44:26.236156133Z" level=info msg="RemoveContainer for \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\" returns successfully"
Jul 2 07:44:26.236334 kubelet[2010]: I0702 07:44:26.236314 2010 scope.go:117] "RemoveContainer" containerID="443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5"
Jul 2 07:44:26.237271 env[1189]: time="2024-07-02T07:44:26.237248801Z" level=info msg="RemoveContainer for \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\""
Jul 2 07:44:26.239831 env[1189]: time="2024-07-02T07:44:26.239801254Z" level=info msg="RemoveContainer for \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\" returns successfully"
Jul 2 07:44:26.239968 kubelet[2010]: I0702 07:44:26.239937 2010 scope.go:117] "RemoveContainer" containerID="53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663"
Jul 2 07:44:26.240814 env[1189]: time="2024-07-02T07:44:26.240790493Z" level=info msg="RemoveContainer for \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\""
Jul 2 07:44:26.243738 env[1189]: time="2024-07-02T07:44:26.243707604Z" level=info msg="RemoveContainer for \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\" returns successfully"
Jul 2 07:44:26.243865 kubelet[2010]: I0702 07:44:26.243843 2010 scope.go:117] "RemoveContainer" containerID="b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e"
Jul 2 07:44:26.244090 env[1189]: time="2024-07-02T07:44:26.243996264Z" level=error msg="ContainerStatus for \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\": not found"
Jul 2 07:44:26.244186 kubelet[2010]: E0702 07:44:26.244167 2010 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\": not found" containerID="b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e"
Jul 2 07:44:26.244269 kubelet[2010]: I0702 07:44:26.244191 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e"} err="failed to get container status \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5601119b22375b4710bee7086aae3de660c6f5584dde04cdf2b01a7dce8589e\": not found"
Jul 2 07:44:26.244269 kubelet[2010]: I0702 07:44:26.244265 2010 scope.go:117] "RemoveContainer" containerID="96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c"
Jul 2 07:44:26.244478 env[1189]: time="2024-07-02T07:44:26.244425584Z" level=error msg="ContainerStatus for \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\": not found"
Jul 2 07:44:26.244572 kubelet[2010]: E0702 07:44:26.244548 2010 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\": not found" containerID="96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c"
Jul 2 07:44:26.244572 kubelet[2010]: I0702 07:44:26.244566 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c"} err="failed to get container status \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\": rpc error: code = NotFound desc = an error occurred when try to find container \"96984e013173c9c07cb8a4d9dd7c96aa40a71c2252ad3b793cd13156a48c982c\": not found"
Jul 2 07:44:26.244664 kubelet[2010]: I0702 07:44:26.244577 2010 scope.go:117] "RemoveContainer" containerID="526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a"
Jul 2 07:44:26.244795 env[1189]: time="2024-07-02T07:44:26.244749422Z" level=error msg="ContainerStatus for \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\": not found"
Jul 2 07:44:26.244881 kubelet[2010]: E0702 07:44:26.244866 2010 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\": not found" containerID="526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a"
Jul 2 07:44:26.244922 kubelet[2010]: I0702 07:44:26.244882 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a"} err="failed to get container status \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\": rpc error: code = NotFound desc = an error occurred when try to find container \"526e7fdd73af84929276cdff493871d1912b0c3f36d95e407de5481f0840902a\": not found"
Jul 2 07:44:26.244922 kubelet[2010]: I0702 07:44:26.244898 2010 scope.go:117] "RemoveContainer" containerID="443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5"
Jul 2 07:44:26.245121 env[1189]: time="2024-07-02T07:44:26.245050607Z" level=error msg="ContainerStatus for \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\": not found"
Jul 2 07:44:26.245198 kubelet[2010]: E0702 07:44:26.245188 2010 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\": not found" containerID="443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5"
Jul 2 07:44:26.245236 kubelet[2010]: I0702 07:44:26.245201 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5"} err="failed to get container status \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\": rpc error: code = NotFound desc = an error occurred when try to find container \"443466b9b5ea3db9bffed4319ed6598ef19f3d6338be96a42ce78eaa41fe9ec5\": not found"
Jul 2 07:44:26.245236 kubelet[2010]: I0702 07:44:26.245211 2010 scope.go:117] "RemoveContainer" containerID="53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663"
Jul 2 07:44:26.245442 env[1189]: time="2024-07-02T07:44:26.245385046Z" level=error msg="ContainerStatus for \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\": not found"
Jul 2 07:44:26.245529 kubelet[2010]: E0702 07:44:26.245516 2010 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\": not found" containerID="53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663"
Jul 2 07:44:26.245569 kubelet[2010]: I0702 07:44:26.245529 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663"} err="failed to get container status \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\": rpc error: code = NotFound desc = an error occurred when try to find container \"53e747a5f3c096270523215e90b13ef428000a225ca53a77331e363896fa3663\": not found"
Jul 2 07:44:26.245569 kubelet[2010]: I0702 07:44:26.245539 2010 scope.go:117] "RemoveContainer" containerID="766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2"
Jul 2 07:44:26.246294 env[1189]: time="2024-07-02T07:44:26.246270487Z" level=info msg="RemoveContainer for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\""
Jul 2 07:44:26.248763 env[1189]: time="2024-07-02T07:44:26.248742758Z" level=info msg="RemoveContainer for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" returns successfully"
Jul 2 07:44:26.248886 kubelet[2010]: I0702 07:44:26.248870 2010 scope.go:117] "RemoveContainer" containerID="766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2"
Jul 2 07:44:26.249113 env[1189]: time="2024-07-02T07:44:26.249039124Z" level=error msg="ContainerStatus for \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\": not found"
Jul 2 07:44:26.249227 kubelet[2010]: E0702 07:44:26.249205 2010 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\": not found" containerID="766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2"
Jul 2 07:44:26.249284 kubelet[2010]: I0702 07:44:26.249230 2010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2"} err="failed to get container status \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"766c2f04435596aa60738ef1d3ee60fa5a78a8d50dc1342c6937436d6222f2a2\": not found"
Jul 2 07:44:26.263438 kubelet[2010]: I0702 07:44:26.263412 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37233059-7a27-4025-bb3b-f16acabb118b-cilium-config-path\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.263438 kubelet[2010]: I0702 07:44:26.263440 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-run\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.263553 kubelet[2010]: I0702 07:44:26.263455 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-xtables-lock\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.263553 kubelet[2010]: I0702 07:44:26.263497 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-bpf-maps\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.263553 kubelet[2010]: I0702 07:44:26.263514 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/598b6e79-5148-46f5-adbf-de90bc0bbb13-cilium-config-path\") pod \"598b6e79-5148-46f5-adbf-de90bc0bbb13\" (UID: \"598b6e79-5148-46f5-adbf-de90bc0bbb13\") "
Jul 2 07:44:26.263553 kubelet[2010]: I0702 07:44:26.263531 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn6wb\" (UniqueName: \"kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-kube-api-access-pn6wb\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.263553 kubelet[2010]: I0702 07:44:26.263543 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-lib-modules\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.263553 kubelet[2010]: I0702 07:44:26.263539 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:44:26.264193 kubelet[2010]: I0702 07:44:26.263556 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-hubble-tls\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264193 kubelet[2010]: I0702 07:44:26.263571 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhlg9\" (UniqueName: \"kubernetes.io/projected/598b6e79-5148-46f5-adbf-de90bc0bbb13-kube-api-access-lhlg9\") pod \"598b6e79-5148-46f5-adbf-de90bc0bbb13\" (UID: \"598b6e79-5148-46f5-adbf-de90bc0bbb13\") "
Jul 2 07:44:26.264193 kubelet[2010]: I0702 07:44:26.263583 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-kernel\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264193 kubelet[2010]: I0702 07:44:26.263595 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-net\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264193 kubelet[2010]: I0702 07:44:26.263606 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-hostproc\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264193 kubelet[2010]: I0702 07:44:26.263618 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-etc-cni-netd\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264473 kubelet[2010]: I0702 07:44:26.263631 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cni-path\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264473 kubelet[2010]: I0702 07:44:26.263645 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-cgroup\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264473 kubelet[2010]: I0702 07:44:26.263664 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37233059-7a27-4025-bb3b-f16acabb118b-clustermesh-secrets\") pod \"37233059-7a27-4025-bb3b-f16acabb118b\" (UID: \"37233059-7a27-4025-bb3b-f16acabb118b\") "
Jul 2 07:44:26.264473 kubelet[2010]: I0702 07:44:26.263702 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 07:44:26.264473 kubelet[2010]: I0702 07:44:26.263741 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:44:26.264473 kubelet[2010]: I0702 07:44:26.263770 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:44:26.266444 kubelet[2010]: I0702 07:44:26.265933 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37233059-7a27-4025-bb3b-f16acabb118b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 07:44:26.266444 kubelet[2010]: I0702 07:44:26.265984 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 07:44:26.266444 kubelet[2010]: I0702 07:44:26.265999 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:26.266444 kubelet[2010]: I0702 07:44:26.266012 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-hostproc" (OuterVolumeSpecName: "hostproc") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:26.266444 kubelet[2010]: I0702 07:44:26.266026 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cni-path" (OuterVolumeSpecName: "cni-path") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:26.266671 kubelet[2010]: I0702 07:44:26.266045 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:26.266671 kubelet[2010]: I0702 07:44:26.266128 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37233059-7a27-4025-bb3b-f16acabb118b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:44:26.266671 kubelet[2010]: I0702 07:44:26.266312 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:26.266671 kubelet[2010]: I0702 07:44:26.266331 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:26.266671 kubelet[2010]: I0702 07:44:26.266592 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/598b6e79-5148-46f5-adbf-de90bc0bbb13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "598b6e79-5148-46f5-adbf-de90bc0bbb13" (UID: "598b6e79-5148-46f5-adbf-de90bc0bbb13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:44:26.267519 kubelet[2010]: I0702 07:44:26.267477 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-kube-api-access-pn6wb" (OuterVolumeSpecName: "kube-api-access-pn6wb") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "kube-api-access-pn6wb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:44:26.267963 kubelet[2010]: I0702 07:44:26.267934 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598b6e79-5148-46f5-adbf-de90bc0bbb13-kube-api-access-lhlg9" (OuterVolumeSpecName: "kube-api-access-lhlg9") pod "598b6e79-5148-46f5-adbf-de90bc0bbb13" (UID: "598b6e79-5148-46f5-adbf-de90bc0bbb13"). InnerVolumeSpecName "kube-api-access-lhlg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:44:26.268631 kubelet[2010]: I0702 07:44:26.268604 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "37233059-7a27-4025-bb3b-f16acabb118b" (UID: "37233059-7a27-4025-bb3b-f16acabb118b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:44:26.364604 kubelet[2010]: I0702 07:44:26.364573 2010 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364604 kubelet[2010]: I0702 07:44:26.364597 2010 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364604 kubelet[2010]: I0702 07:44:26.364606 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37233059-7a27-4025-bb3b-f16acabb118b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364604 kubelet[2010]: I0702 07:44:26.364616 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/598b6e79-5148-46f5-adbf-de90bc0bbb13-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364623 2010 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364630 2010 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pn6wb\" (UniqueName: \"kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-kube-api-access-pn6wb\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364637 2010 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lhlg9\" (UniqueName: \"kubernetes.io/projected/598b6e79-5148-46f5-adbf-de90bc0bbb13-kube-api-access-lhlg9\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364644 2010 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364654 2010 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364667 2010 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37233059-7a27-4025-bb3b-f16acabb118b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 07:44:26.364682 2010 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.364931 kubelet[2010]: I0702 
07:44:26.364705 2010 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.365252 kubelet[2010]: I0702 07:44:26.364714 2010 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.365252 kubelet[2010]: I0702 07:44:26.364722 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37233059-7a27-4025-bb3b-f16acabb118b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.365252 kubelet[2010]: I0702 07:44:26.364731 2010 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37233059-7a27-4025-bb3b-f16acabb118b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:26.521629 systemd[1]: Removed slice kubepods-burstable-pod37233059_7a27_4025_bb3b_f16acabb118b.slice. Jul 2 07:44:26.521717 systemd[1]: kubepods-burstable-pod37233059_7a27_4025_bb3b_f16acabb118b.slice: Consumed 6.281s CPU time. Jul 2 07:44:26.525242 systemd[1]: Removed slice kubepods-besteffort-pod598b6e79_5148_46f5_adbf_de90bc0bbb13.slice. Jul 2 07:44:26.890049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81233b423d1b0c23b560464f1ace026c71f6f892ffc9122a527dfe1cb8cda58a-rootfs.mount: Deactivated successfully. Jul 2 07:44:26.890142 systemd[1]: var-lib-kubelet-pods-37233059\x2d7a27\x2d4025\x2dbb3b\x2df16acabb118b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpn6wb.mount: Deactivated successfully. Jul 2 07:44:26.890196 systemd[1]: var-lib-kubelet-pods-37233059\x2d7a27\x2d4025\x2dbb3b\x2df16acabb118b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 07:44:26.890252 systemd[1]: var-lib-kubelet-pods-37233059\x2d7a27\x2d4025\x2dbb3b\x2df16acabb118b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:44:26.890299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17eb5c3087fc8b060c8d9c06617d8c3d642f181e22fafe302e6fac339f71f8c7-rootfs.mount: Deactivated successfully. Jul 2 07:44:26.890341 systemd[1]: var-lib-kubelet-pods-598b6e79\x2d5148\x2d46f5\x2dadbf\x2dde90bc0bbb13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhlg9.mount: Deactivated successfully. Jul 2 07:44:27.060235 kubelet[2010]: I0702 07:44:27.060197 2010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37233059-7a27-4025-bb3b-f16acabb118b" path="/var/lib/kubelet/pods/37233059-7a27-4025-bb3b-f16acabb118b/volumes" Jul 2 07:44:27.060791 kubelet[2010]: I0702 07:44:27.060763 2010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598b6e79-5148-46f5-adbf-de90bc0bbb13" path="/var/lib/kubelet/pods/598b6e79-5148-46f5-adbf-de90bc0bbb13/volumes" Jul 2 07:44:27.857612 sshd[3634]: pam_unix(sshd:session): session closed for user core Jul 2 07:44:27.860561 systemd[1]: sshd@24-10.0.0.17:22-10.0.0.1:47536.service: Deactivated successfully. Jul 2 07:44:27.861102 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 07:44:27.861642 systemd-logind[1178]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:44:27.862668 systemd[1]: Started sshd@25-10.0.0.17:22-10.0.0.1:47538.service. Jul 2 07:44:27.863331 systemd-logind[1178]: Removed session 25. Jul 2 07:44:27.903858 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 47538 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:44:27.904945 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:44:27.908281 systemd-logind[1178]: New session 26 of user core. Jul 2 07:44:27.909257 systemd[1]: Started session-26.scope. 
Jul 2 07:44:28.097693 kubelet[2010]: E0702 07:44:28.097649 2010 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:44:28.471595 sshd[3791]: pam_unix(sshd:session): session closed for user core Jul 2 07:44:28.474289 systemd[1]: Started sshd@26-10.0.0.17:22-10.0.0.1:47548.service. Jul 2 07:44:28.478154 kubelet[2010]: I0702 07:44:28.478130 2010 topology_manager.go:215] "Topology Admit Handler" podUID="73c65c62-ab19-4acc-accc-4f47b197e425" podNamespace="kube-system" podName="cilium-lh8w4" Jul 2 07:44:28.478279 kubelet[2010]: E0702 07:44:28.478265 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37233059-7a27-4025-bb3b-f16acabb118b" containerName="apply-sysctl-overwrites" Jul 2 07:44:28.478379 kubelet[2010]: E0702 07:44:28.478366 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="598b6e79-5148-46f5-adbf-de90bc0bbb13" containerName="cilium-operator" Jul 2 07:44:28.478477 kubelet[2010]: E0702 07:44:28.478463 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37233059-7a27-4025-bb3b-f16acabb118b" containerName="mount-cgroup" Jul 2 07:44:28.478571 kubelet[2010]: E0702 07:44:28.478558 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37233059-7a27-4025-bb3b-f16acabb118b" containerName="mount-bpf-fs" Jul 2 07:44:28.478668 kubelet[2010]: E0702 07:44:28.478654 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37233059-7a27-4025-bb3b-f16acabb118b" containerName="clean-cilium-state" Jul 2 07:44:28.478762 kubelet[2010]: E0702 07:44:28.478749 2010 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37233059-7a27-4025-bb3b-f16acabb118b" containerName="cilium-agent" Jul 2 07:44:28.478868 kubelet[2010]: I0702 07:44:28.478854 2010 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="598b6e79-5148-46f5-adbf-de90bc0bbb13" containerName="cilium-operator" Jul 2 07:44:28.478956 kubelet[2010]: I0702 07:44:28.478943 2010 memory_manager.go:354] "RemoveStaleState removing state" podUID="37233059-7a27-4025-bb3b-f16acabb118b" containerName="cilium-agent" Jul 2 07:44:28.483635 systemd[1]: Created slice kubepods-burstable-pod73c65c62_ab19_4acc_accc_4f47b197e425.slice. Jul 2 07:44:28.488666 systemd[1]: sshd@25-10.0.0.17:22-10.0.0.1:47538.service: Deactivated successfully. Jul 2 07:44:28.489273 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 07:44:28.491006 systemd-logind[1178]: Session 26 logged out. Waiting for processes to exit. Jul 2 07:44:28.491863 systemd-logind[1178]: Removed session 26. Jul 2 07:44:28.517276 sshd[3802]: Accepted publickey for core from 10.0.0.1 port 47548 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:44:28.518316 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:44:28.521760 systemd-logind[1178]: New session 27 of user core. Jul 2 07:44:28.522509 systemd[1]: Started session-27.scope. 
Jul 2 07:44:28.577609 kubelet[2010]: I0702 07:44:28.577556 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-hostproc\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.577899 kubelet[2010]: I0702 07:44:28.577659 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-hubble-tls\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.577899 kubelet[2010]: I0702 07:44:28.577709 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-bpf-maps\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.577899 kubelet[2010]: I0702 07:44:28.577727 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-etc-cni-netd\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.577899 kubelet[2010]: I0702 07:44:28.577787 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-ipsec-secrets\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.577899 kubelet[2010]: I0702 07:44:28.577817 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-config-path\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.577899 kubelet[2010]: I0702 07:44:28.577854 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cni-path\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578039 kubelet[2010]: I0702 07:44:28.577874 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-xtables-lock\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578039 kubelet[2010]: I0702 07:44:28.577888 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-kernel\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578039 kubelet[2010]: I0702 07:44:28.577902 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpg7k\" (UniqueName: \"kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-kube-api-access-jpg7k\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578039 kubelet[2010]: I0702 07:44:28.577915 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-run\") 
pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578039 kubelet[2010]: I0702 07:44:28.577930 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-cgroup\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578039 kubelet[2010]: I0702 07:44:28.577945 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-lib-modules\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578183 kubelet[2010]: I0702 07:44:28.577959 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-net\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.578183 kubelet[2010]: I0702 07:44:28.577976 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-clustermesh-secrets\") pod \"cilium-lh8w4\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " pod="kube-system/cilium-lh8w4" Jul 2 07:44:28.633251 sshd[3802]: pam_unix(sshd:session): session closed for user core Jul 2 07:44:28.636544 systemd[1]: Started sshd@27-10.0.0.17:22-10.0.0.1:47550.service. Jul 2 07:44:28.637108 systemd[1]: sshd@26-10.0.0.17:22-10.0.0.1:47548.service: Deactivated successfully. Jul 2 07:44:28.638453 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 2 07:44:28.640915 kubelet[2010]: E0702 07:44:28.640878 2010 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-jpg7k lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-lh8w4" podUID="73c65c62-ab19-4acc-accc-4f47b197e425" Jul 2 07:44:28.643133 systemd-logind[1178]: Session 27 logged out. Waiting for processes to exit. Jul 2 07:44:28.646093 systemd-logind[1178]: Removed session 27. Jul 2 07:44:28.676981 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 47550 ssh2: RSA SHA256:p62DhCk3U7EnSkbc61VMtskngsC7N1IbxGsp88pYwVo Jul 2 07:44:28.677848 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:44:28.685161 systemd[1]: Started session-28.scope. Jul 2 07:44:28.685555 systemd-logind[1178]: New session 28 of user core. 
Jul 2 07:44:29.058680 kubelet[2010]: E0702 07:44:29.058636 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:29.282293 kubelet[2010]: I0702 07:44:29.282246 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpg7k\" (UniqueName: \"kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-kube-api-access-jpg7k\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282293 kubelet[2010]: I0702 07:44:29.282279 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-etc-cni-netd\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282293 kubelet[2010]: I0702 07:44:29.282297 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-clustermesh-secrets\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282675 kubelet[2010]: I0702 07:44:29.282312 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-lib-modules\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282675 kubelet[2010]: I0702 07:44:29.282325 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-net\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: 
\"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282675 kubelet[2010]: I0702 07:44:29.282340 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-hostproc\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282675 kubelet[2010]: I0702 07:44:29.282353 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-xtables-lock\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282675 kubelet[2010]: I0702 07:44:29.282368 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-ipsec-secrets\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282675 kubelet[2010]: I0702 07:44:29.282370 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.282836 kubelet[2010]: I0702 07:44:29.282383 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-run\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282836 kubelet[2010]: I0702 07:44:29.282408 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.282836 kubelet[2010]: I0702 07:44:29.282447 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-cgroup\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282836 kubelet[2010]: I0702 07:44:29.282485 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-bpf-maps\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282836 kubelet[2010]: I0702 07:44:29.282511 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-hubble-tls\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282836 kubelet[2010]: I0702 07:44:29.282532 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-config-path\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282971 kubelet[2010]: I0702 07:44:29.282551 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cni-path\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282971 kubelet[2010]: I0702 07:44:29.282568 2010 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-kernel\") pod \"73c65c62-ab19-4acc-accc-4f47b197e425\" (UID: \"73c65c62-ab19-4acc-accc-4f47b197e425\") " Jul 2 07:44:29.282971 kubelet[2010]: I0702 07:44:29.282604 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.282971 kubelet[2010]: I0702 07:44:29.282627 2010 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.282971 kubelet[2010]: I0702 07:44:29.282649 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.282971 kubelet[2010]: I0702 07:44:29.282671 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.283169 kubelet[2010]: I0702 07:44:29.282688 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.283169 kubelet[2010]: I0702 07:44:29.282721 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-hostproc" (OuterVolumeSpecName: "hostproc") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.283169 kubelet[2010]: I0702 07:44:29.282754 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.283169 kubelet[2010]: I0702 07:44:29.282773 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.284825 kubelet[2010]: I0702 07:44:29.284792 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:44:29.284873 kubelet[2010]: I0702 07:44:29.284834 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cni-path" (OuterVolumeSpecName: "cni-path") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.284873 kubelet[2010]: I0702 07:44:29.284856 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:44:29.285670 kubelet[2010]: I0702 07:44:29.285605 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-kube-api-access-jpg7k" (OuterVolumeSpecName: "kube-api-access-jpg7k") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "kube-api-access-jpg7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:44:29.285942 systemd[1]: var-lib-kubelet-pods-73c65c62\x2dab19\x2d4acc\x2daccc\x2d4f47b197e425-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:44:29.287727 kubelet[2010]: I0702 07:44:29.287672 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:44:29.287798 kubelet[2010]: I0702 07:44:29.287788 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:44:29.287838 kubelet[2010]: I0702 07:44:29.287818 2010 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "73c65c62-ab19-4acc-accc-4f47b197e425" (UID: "73c65c62-ab19-4acc-accc-4f47b197e425"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:44:29.383777 kubelet[2010]: I0702 07:44:29.383688 2010 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.383777 kubelet[2010]: I0702 07:44:29.383719 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.383777 kubelet[2010]: I0702 07:44:29.383736 2010 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.383777 kubelet[2010]: I0702 07:44:29.383752 2010 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.383777 kubelet[2010]: I0702 07:44:29.383762 2010 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jpg7k\" (UniqueName: \"kubernetes.io/projected/73c65c62-ab19-4acc-accc-4f47b197e425-kube-api-access-jpg7k\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.383777 kubelet[2010]: I0702 07:44:29.383772 2010 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: I0702 07:44:29.383782 2010 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: 
I0702 07:44:29.383791 2010 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: I0702 07:44:29.383800 2010 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: I0702 07:44:29.383812 2010 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: I0702 07:44:29.383821 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: I0702 07:44:29.383830 2010 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.384001 kubelet[2010]: I0702 07:44:29.383839 2010 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73c65c62-ab19-4acc-accc-4f47b197e425-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:44:29.683229 systemd[1]: var-lib-kubelet-pods-73c65c62\x2dab19\x2d4acc\x2daccc\x2d4f47b197e425-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:44:29.683315 systemd[1]: var-lib-kubelet-pods-73c65c62\x2dab19\x2d4acc\x2daccc\x2d4f47b197e425-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djpg7k.mount: Deactivated successfully. 
Jul 2 07:44:29.683380 systemd[1]: var-lib-kubelet-pods-73c65c62\x2dab19\x2d4acc\x2daccc\x2d4f47b197e425-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:44:30.230567 systemd[1]: Removed slice kubepods-burstable-pod73c65c62_ab19_4acc_accc_4f47b197e425.slice. Jul 2 07:44:30.251419 kubelet[2010]: I0702 07:44:30.251358 2010 topology_manager.go:215] "Topology Admit Handler" podUID="c7e5535f-f59d-4e2a-8e6e-78b8d4401d88" podNamespace="kube-system" podName="cilium-vxv9q" Jul 2 07:44:30.256259 systemd[1]: Created slice kubepods-burstable-podc7e5535f_f59d_4e2a_8e6e_78b8d4401d88.slice. Jul 2 07:44:30.288338 kubelet[2010]: I0702 07:44:30.288271 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-xtables-lock\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288338 kubelet[2010]: I0702 07:44:30.288323 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-cni-path\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288338 kubelet[2010]: I0702 07:44:30.288343 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-etc-cni-netd\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288797 kubelet[2010]: I0702 07:44:30.288365 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-host-proc-sys-kernel\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288797 kubelet[2010]: I0702 07:44:30.288381 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd955\" (UniqueName: \"kubernetes.io/projected/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-kube-api-access-xd955\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288797 kubelet[2010]: I0702 07:44:30.288400 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-cilium-ipsec-secrets\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288797 kubelet[2010]: I0702 07:44:30.288420 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-hostproc\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288797 kubelet[2010]: I0702 07:44:30.288438 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-cilium-cgroup\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288935 kubelet[2010]: I0702 07:44:30.288461 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-cilium-config-path\") pod \"cilium-vxv9q\" 
(UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288935 kubelet[2010]: I0702 07:44:30.288476 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-clustermesh-secrets\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288935 kubelet[2010]: I0702 07:44:30.288490 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-hubble-tls\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288935 kubelet[2010]: I0702 07:44:30.288505 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-bpf-maps\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288935 kubelet[2010]: I0702 07:44:30.288529 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-lib-modules\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.288935 kubelet[2010]: I0702 07:44:30.288543 2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-host-proc-sys-net\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.289103 kubelet[2010]: I0702 07:44:30.288570 
2010 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7e5535f-f59d-4e2a-8e6e-78b8d4401d88-cilium-run\") pod \"cilium-vxv9q\" (UID: \"c7e5535f-f59d-4e2a-8e6e-78b8d4401d88\") " pod="kube-system/cilium-vxv9q" Jul 2 07:44:30.559320 kubelet[2010]: E0702 07:44:30.559280 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:30.560349 env[1189]: time="2024-07-02T07:44:30.560313824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxv9q,Uid:c7e5535f-f59d-4e2a-8e6e-78b8d4401d88,Namespace:kube-system,Attempt:0,}" Jul 2 07:44:30.572704 env[1189]: time="2024-07-02T07:44:30.572641715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:44:30.572704 env[1189]: time="2024-07-02T07:44:30.572685728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:44:30.572704 env[1189]: time="2024-07-02T07:44:30.572696468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:44:30.573032 env[1189]: time="2024-07-02T07:44:30.572972846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf pid=3847 runtime=io.containerd.runc.v2 Jul 2 07:44:30.582615 systemd[1]: Started cri-containerd-f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf.scope. 
Jul 2 07:44:30.600805 env[1189]: time="2024-07-02T07:44:30.600755215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxv9q,Uid:c7e5535f-f59d-4e2a-8e6e-78b8d4401d88,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\"" Jul 2 07:44:30.601295 kubelet[2010]: E0702 07:44:30.601266 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:30.604298 env[1189]: time="2024-07-02T07:44:30.604252973Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:44:30.621491 env[1189]: time="2024-07-02T07:44:30.621441828Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95\"" Jul 2 07:44:30.621963 env[1189]: time="2024-07-02T07:44:30.621929238Z" level=info msg="StartContainer for \"9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95\"" Jul 2 07:44:30.635266 systemd[1]: Started cri-containerd-9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95.scope. Jul 2 07:44:30.657992 env[1189]: time="2024-07-02T07:44:30.657943199Z" level=info msg="StartContainer for \"9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95\" returns successfully" Jul 2 07:44:30.663343 systemd[1]: cri-containerd-9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95.scope: Deactivated successfully. 
Jul 2 07:44:30.690055 env[1189]: time="2024-07-02T07:44:30.690004636Z" level=info msg="shim disconnected" id=9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95 Jul 2 07:44:30.690055 env[1189]: time="2024-07-02T07:44:30.690054090Z" level=warning msg="cleaning up after shim disconnected" id=9cb0bbce5eb5dcd7bf58a700e2d0354f33b0b1d89bc8b7629c3c6860d7c34b95 namespace=k8s.io Jul 2 07:44:30.690055 env[1189]: time="2024-07-02T07:44:30.690063678Z" level=info msg="cleaning up dead shim" Jul 2 07:44:30.696284 env[1189]: time="2024-07-02T07:44:30.696241654Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3931 runtime=io.containerd.runc.v2\n" Jul 2 07:44:31.059933 kubelet[2010]: I0702 07:44:31.059884 2010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c65c62-ab19-4acc-accc-4f47b197e425" path="/var/lib/kubelet/pods/73c65c62-ab19-4acc-accc-4f47b197e425/volumes" Jul 2 07:44:31.233892 kubelet[2010]: E0702 07:44:31.233852 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:31.235797 env[1189]: time="2024-07-02T07:44:31.235757243Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:44:31.623361 env[1189]: time="2024-07-02T07:44:31.623309688Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4\"" Jul 2 07:44:31.623875 env[1189]: time="2024-07-02T07:44:31.623831973Z" level=info msg="StartContainer for \"dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4\"" Jul 2 
07:44:31.638845 systemd[1]: Started cri-containerd-dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4.scope. Jul 2 07:44:31.658331 env[1189]: time="2024-07-02T07:44:31.658278751Z" level=info msg="StartContainer for \"dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4\" returns successfully" Jul 2 07:44:31.662554 systemd[1]: cri-containerd-dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4.scope: Deactivated successfully. Jul 2 07:44:31.680427 env[1189]: time="2024-07-02T07:44:31.680385405Z" level=info msg="shim disconnected" id=dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4 Jul 2 07:44:31.680553 env[1189]: time="2024-07-02T07:44:31.680431583Z" level=warning msg="cleaning up after shim disconnected" id=dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4 namespace=k8s.io Jul 2 07:44:31.680553 env[1189]: time="2024-07-02T07:44:31.680441541Z" level=info msg="cleaning up dead shim" Jul 2 07:44:31.683519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd1719f8446b726c07a25b974665560104c81c369bd82b5e06a2cdd3ca8e4bf4-rootfs.mount: Deactivated successfully. 
Jul 2 07:44:31.687235 env[1189]: time="2024-07-02T07:44:31.687183729Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3992 runtime=io.containerd.runc.v2\n" Jul 2 07:44:32.236829 kubelet[2010]: E0702 07:44:32.236802 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:32.238676 env[1189]: time="2024-07-02T07:44:32.238642964Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:44:32.251533 env[1189]: time="2024-07-02T07:44:32.251481097Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd\"" Jul 2 07:44:32.253098 env[1189]: time="2024-07-02T07:44:32.252023821Z" level=info msg="StartContainer for \"074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd\"" Jul 2 07:44:32.268575 systemd[1]: Started cri-containerd-074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd.scope. Jul 2 07:44:32.288675 env[1189]: time="2024-07-02T07:44:32.288626140Z" level=info msg="StartContainer for \"074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd\" returns successfully" Jul 2 07:44:32.289524 systemd[1]: cri-containerd-074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd.scope: Deactivated successfully. 
Jul 2 07:44:32.311285 env[1189]: time="2024-07-02T07:44:32.311230486Z" level=info msg="shim disconnected" id=074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd Jul 2 07:44:32.311285 env[1189]: time="2024-07-02T07:44:32.311282214Z" level=warning msg="cleaning up after shim disconnected" id=074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd namespace=k8s.io Jul 2 07:44:32.311500 env[1189]: time="2024-07-02T07:44:32.311294077Z" level=info msg="cleaning up dead shim" Jul 2 07:44:32.317261 env[1189]: time="2024-07-02T07:44:32.317231797Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4046 runtime=io.containerd.runc.v2\n" Jul 2 07:44:32.683484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-074f562bad58d5749e3a5a27a5e5ab87395e231ea427ba52ac7290c86426f3fd-rootfs.mount: Deactivated successfully. Jul 2 07:44:33.098994 kubelet[2010]: E0702 07:44:33.098960 2010 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:44:33.239167 kubelet[2010]: E0702 07:44:33.239144 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:33.240849 env[1189]: time="2024-07-02T07:44:33.240810291Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:44:33.254349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525625180.mount: Deactivated successfully. 
Jul 2 07:44:33.257741 env[1189]: time="2024-07-02T07:44:33.257689971Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448\"" Jul 2 07:44:33.258175 env[1189]: time="2024-07-02T07:44:33.258145298Z" level=info msg="StartContainer for \"5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448\"" Jul 2 07:44:33.270272 systemd[1]: Started cri-containerd-5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448.scope. Jul 2 07:44:33.291857 systemd[1]: cri-containerd-5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448.scope: Deactivated successfully. Jul 2 07:44:33.292771 env[1189]: time="2024-07-02T07:44:33.292741019Z" level=info msg="StartContainer for \"5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448\" returns successfully" Jul 2 07:44:33.312611 env[1189]: time="2024-07-02T07:44:33.312563382Z" level=info msg="shim disconnected" id=5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448 Jul 2 07:44:33.312611 env[1189]: time="2024-07-02T07:44:33.312609942Z" level=warning msg="cleaning up after shim disconnected" id=5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448 namespace=k8s.io Jul 2 07:44:33.312812 env[1189]: time="2024-07-02T07:44:33.312618247Z" level=info msg="cleaning up dead shim" Jul 2 07:44:33.318728 env[1189]: time="2024-07-02T07:44:33.318687333Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4101 runtime=io.containerd.runc.v2\n" Jul 2 07:44:33.683624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a0eac6d99155f1efcd6e95960f8b28bab96e46f8706d2c607fbd9e79fc02448-rootfs.mount: Deactivated successfully. 
Jul 2 07:44:34.242586 kubelet[2010]: E0702 07:44:34.242546 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:34.244611 env[1189]: time="2024-07-02T07:44:34.244564427Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:44:34.459191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562501182.mount: Deactivated successfully. Jul 2 07:44:34.477612 env[1189]: time="2024-07-02T07:44:34.477541659Z" level=info msg="CreateContainer within sandbox \"f3eb179d1372b34fef9578b332d88eed05781315cdb9d01b7e63e1d96e3a5acf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f\"" Jul 2 07:44:34.478157 env[1189]: time="2024-07-02T07:44:34.478116353Z" level=info msg="StartContainer for \"1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f\"" Jul 2 07:44:34.495063 systemd[1]: Started cri-containerd-1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f.scope. 
Jul 2 07:44:34.525094 env[1189]: time="2024-07-02T07:44:34.522498369Z" level=info msg="StartContainer for \"1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f\" returns successfully" Jul 2 07:44:34.539359 kubelet[2010]: I0702 07:44:34.538898 2010 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:44:34Z","lastTransitionTime":"2024-07-02T07:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:44:34.683826 systemd[1]: run-containerd-runc-k8s.io-1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f-runc.qh0mWJ.mount: Deactivated successfully. Jul 2 07:44:34.777106 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:44:35.246194 kubelet[2010]: E0702 07:44:35.246165 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:35.255604 kubelet[2010]: I0702 07:44:35.255549 2010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vxv9q" podStartSLOduration=5.255532615 podStartE2EDuration="5.255532615s" podCreationTimestamp="2024-07-02 07:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:44:35.255365798 +0000 UTC m=+92.268463700" watchObservedRunningTime="2024-07-02 07:44:35.255532615 +0000 UTC m=+92.268630516" Jul 2 07:44:36.560584 kubelet[2010]: E0702 07:44:36.560543 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:37.208783 systemd-networkd[1016]: lxc_health: Link UP Jul 2 
07:44:37.218415 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:44:37.216273 systemd-networkd[1016]: lxc_health: Gained carrier Jul 2 07:44:38.058629 kubelet[2010]: E0702 07:44:38.058600 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:38.561669 kubelet[2010]: E0702 07:44:38.561635 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:39.015841 systemd[1]: run-containerd-runc-k8s.io-1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f-runc.AYrA1l.mount: Deactivated successfully. Jul 2 07:44:39.034214 systemd-networkd[1016]: lxc_health: Gained IPv6LL Jul 2 07:44:39.253093 kubelet[2010]: E0702 07:44:39.253038 2010 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:44:41.097871 systemd[1]: run-containerd-runc-k8s.io-1961882334d5846b0d3fb4a36acc7a6abcb8e23ad7d08c187ef6972413d2aa8f-runc.53UXs7.mount: Deactivated successfully. Jul 2 07:44:43.208986 sshd[3815]: pam_unix(sshd:session): session closed for user core Jul 2 07:44:43.210847 systemd[1]: sshd@27-10.0.0.17:22-10.0.0.1:47550.service: Deactivated successfully. Jul 2 07:44:43.211446 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 07:44:43.211991 systemd-logind[1178]: Session 28 logged out. Waiting for processes to exit. Jul 2 07:44:43.212728 systemd-logind[1178]: Removed session 28.