Jul 2 00:27:36.948603 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:27:36.948625 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:27:36.948635 kernel: BIOS-provided physical RAM map:
Jul 2 00:27:36.948642 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:27:36.948648 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:27:36.948654 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:27:36.948661 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jul 2 00:27:36.948667 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jul 2 00:27:36.948673 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:27:36.948682 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:27:36.948688 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 2 00:27:36.948694 kernel: NX (Execute Disable) protection: active
Jul 2 00:27:36.948700 kernel: APIC: Static calls initialized
Jul 2 00:27:36.948707 kernel: SMBIOS 2.8 present.
Jul 2 00:27:36.948715 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 2 00:27:36.948724 kernel: Hypervisor detected: KVM
Jul 2 00:27:36.948730 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:27:36.948737 kernel: kvm-clock: using sched offset of 2288678192 cycles
Jul 2 00:27:36.948744 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:27:36.948752 kernel: tsc: Detected 2794.746 MHz processor
Jul 2 00:27:36.948759 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:27:36.948766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:27:36.948773 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jul 2 00:27:36.948780 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:27:36.948789 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:27:36.948796 kernel: Using GB pages for direct mapping
Jul 2 00:27:36.948803 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:27:36.948810 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jul 2 00:27:36.948817 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:27:36.948824 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:27:36.948831 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:27:36.948838 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 2 00:27:36.948853 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:27:36.948873 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:27:36.948880 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:27:36.948887 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jul 2 00:27:36.948900 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jul 2 00:27:36.948907 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 2 00:27:36.948921 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jul 2 00:27:36.948939 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jul 2 00:27:36.948957 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jul 2 00:27:36.948973 kernel: No NUMA configuration found
Jul 2 00:27:36.948981 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jul 2 00:27:36.949000 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jul 2 00:27:36.949015 kernel: Zone ranges:
Jul 2 00:27:36.949029 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:27:36.949036 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jul 2 00:27:36.949046 kernel: Normal empty
Jul 2 00:27:36.949066 kernel: Movable zone start for each node
Jul 2 00:27:36.949073 kernel: Early memory node ranges
Jul 2 00:27:36.949080 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:27:36.949088 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jul 2 00:27:36.949095 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jul 2 00:27:36.949102 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:27:36.949109 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:27:36.949116 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jul 2 00:27:36.949126 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:27:36.949133 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:27:36.949141 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:27:36.949148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:27:36.949155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:27:36.949163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:27:36.949170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:27:36.949177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:27:36.949184 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:27:36.949191 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:27:36.949201 kernel: TSC deadline timer available
Jul 2 00:27:36.949208 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 00:27:36.949215 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:27:36.949222 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 00:27:36.949229 kernel: kvm-guest: setup PV sched yield
Jul 2 00:27:36.949248 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jul 2 00:27:36.949256 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:27:36.949264 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:27:36.949274 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 2 00:27:36.949289 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jul 2 00:27:36.949299 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jul 2 00:27:36.949308 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 00:27:36.949316 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:27:36.949323 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:27:36.949332 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:27:36.949339 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:27:36.949347 kernel: random: crng init done
Jul 2 00:27:36.949356 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:27:36.949364 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:27:36.949371 kernel: Fallback order for Node 0: 0
Jul 2 00:27:36.949378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jul 2 00:27:36.949385 kernel: Policy zone: DMA32
Jul 2 00:27:36.949393 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:27:36.949400 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 143044K reserved, 0K cma-reserved)
Jul 2 00:27:36.949408 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:27:36.949415 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:27:36.949424 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:27:36.949431 kernel: Dynamic Preempt: voluntary
Jul 2 00:27:36.949439 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:27:36.949446 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:27:36.949454 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:27:36.949461 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:27:36.949468 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:27:36.949476 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:27:36.949483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:27:36.949492 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:27:36.949499 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 00:27:36.949507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:27:36.949514 kernel: Console: colour VGA+ 80x25
Jul 2 00:27:36.949521 kernel: printk: console [ttyS0] enabled
Jul 2 00:27:36.949528 kernel: ACPI: Core revision 20230628
Jul 2 00:27:36.949535 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:27:36.949543 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:27:36.949550 kernel: x2apic enabled
Jul 2 00:27:36.949557 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:27:36.949566 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 2 00:27:36.949574 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 2 00:27:36.949581 kernel: kvm-guest: setup PV IPIs
Jul 2 00:27:36.949588 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:27:36.949595 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:27:36.949603 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 2 00:27:36.949610 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 00:27:36.949627 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 00:27:36.949634 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 00:27:36.949642 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:27:36.949649 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:27:36.949659 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:27:36.949666 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:27:36.949674 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 00:27:36.949681 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 00:27:36.949689 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:27:36.949699 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:27:36.949706 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 2 00:27:36.949714 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 2 00:27:36.949722 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 2 00:27:36.949730 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:27:36.949737 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:27:36.949745 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:27:36.949752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:27:36.949762 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 00:27:36.949770 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:27:36.949777 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:27:36.949785 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:27:36.949792 kernel: SELinux: Initializing.
Jul 2 00:27:36.949800 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:27:36.949807 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:27:36.949815 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 00:27:36.949823 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:27:36.949833 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:27:36.949847 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:27:36.949855 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 00:27:36.949863 kernel: ... version: 0
Jul 2 00:27:36.949870 kernel: ... bit width: 48
Jul 2 00:27:36.949878 kernel: ... generic registers: 6
Jul 2 00:27:36.949885 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:27:36.949893 kernel: ... max period: 00007fffffffffff
Jul 2 00:27:36.949900 kernel: ... fixed-purpose events: 0
Jul 2 00:27:36.949910 kernel: ... event mask: 000000000000003f
Jul 2 00:27:36.949918 kernel: signal: max sigframe size: 1776
Jul 2 00:27:36.949925 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:27:36.949933 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:27:36.949940 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:27:36.949948 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:27:36.949955 kernel: .... node #0, CPUs: #1 #2 #3
Jul 2 00:27:36.949963 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:27:36.949970 kernel: smpboot: Max logical packages: 1
Jul 2 00:27:36.949980 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 2 00:27:36.949988 kernel: devtmpfs: initialized
Jul 2 00:27:36.949995 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:27:36.950003 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:27:36.950011 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:27:36.950018 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:27:36.950026 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:27:36.950033 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:27:36.950041 kernel: audit: type=2000 audit(1719880056.124:1): state=initialized audit_enabled=0 res=1
Jul 2 00:27:36.950051 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:27:36.950058 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:27:36.950066 kernel: cpuidle: using governor menu
Jul 2 00:27:36.950073 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:27:36.950081 kernel: dca service started, version 1.12.1
Jul 2 00:27:36.950088 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:27:36.950096 kernel: PCI: Using configuration type 1 for extended access
Jul 2 00:27:36.950104 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:27:36.950111 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:27:36.950121 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:27:36.950128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:27:36.950136 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:27:36.950144 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:27:36.950154 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:27:36.950165 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:27:36.950173 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:27:36.950181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:27:36.950188 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:27:36.950196 kernel: ACPI: Interpreter enabled
Jul 2 00:27:36.950206 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:27:36.950213 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:27:36.950221 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:27:36.950229 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:27:36.950247 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:27:36.950254 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:27:36.950435 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:27:36.950452 kernel: acpiphp: Slot [3] registered
Jul 2 00:27:36.950459 kernel: acpiphp: Slot [4] registered
Jul 2 00:27:36.950467 kernel: acpiphp: Slot [5] registered
Jul 2 00:27:36.950474 kernel: acpiphp: Slot [6] registered
Jul 2 00:27:36.950482 kernel: acpiphp: Slot [7] registered
Jul 2 00:27:36.950489 kernel: acpiphp: Slot [8] registered
Jul 2 00:27:36.950497 kernel: acpiphp: Slot [9] registered
Jul 2 00:27:36.950504 kernel: acpiphp: Slot [10] registered
Jul 2 00:27:36.950512 kernel: acpiphp: Slot [11] registered
Jul 2 00:27:36.950519 kernel: acpiphp: Slot [12] registered
Jul 2 00:27:36.950529 kernel: acpiphp: Slot [13] registered
Jul 2 00:27:36.950536 kernel: acpiphp: Slot [14] registered
Jul 2 00:27:36.950544 kernel: acpiphp: Slot [15] registered
Jul 2 00:27:36.950551 kernel: acpiphp: Slot [16] registered
Jul 2 00:27:36.950558 kernel: acpiphp: Slot [17] registered
Jul 2 00:27:36.950566 kernel: acpiphp: Slot [18] registered
Jul 2 00:27:36.950573 kernel: acpiphp: Slot [19] registered
Jul 2 00:27:36.950584 kernel: acpiphp: Slot [20] registered
Jul 2 00:27:36.950605 kernel: acpiphp: Slot [21] registered
Jul 2 00:27:36.950621 kernel: acpiphp: Slot [22] registered
Jul 2 00:27:36.950633 kernel: acpiphp: Slot [23] registered
Jul 2 00:27:36.950648 kernel: acpiphp: Slot [24] registered
Jul 2 00:27:36.950661 kernel: acpiphp: Slot [25] registered
Jul 2 00:27:36.950677 kernel: acpiphp: Slot [26] registered
Jul 2 00:27:36.950689 kernel: acpiphp: Slot [27] registered
Jul 2 00:27:36.950701 kernel: acpiphp: Slot [28] registered
Jul 2 00:27:36.950714 kernel: acpiphp: Slot [29] registered
Jul 2 00:27:36.950729 kernel: acpiphp: Slot [30] registered
Jul 2 00:27:36.950741 kernel: acpiphp: Slot [31] registered
Jul 2 00:27:36.950753 kernel: PCI host bridge to bus 0000:00
Jul 2 00:27:36.950898 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:27:36.951053 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:27:36.951175 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:27:36.951302 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 00:27:36.951413 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:27:36.951523 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:27:36.951664 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:27:36.951802 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:27:36.951950 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:27:36.952072 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 00:27:36.952198 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:27:36.952341 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:27:36.952474 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:27:36.952603 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:27:36.952769 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:27:36.952906 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:27:36.953026 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:27:36.953155 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 00:27:36.953291 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 2 00:27:36.953417 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 2 00:27:36.953551 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 2 00:27:36.953677 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:27:36.953809 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:27:36.953941 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 00:27:36.954122 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 2 00:27:36.954301 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 2 00:27:36.954431 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:27:36.954570 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:27:36.954699 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 2 00:27:36.954822 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 2 00:27:36.954961 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:27:36.955083 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 00:27:36.955208 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 2 00:27:36.955346 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 2 00:27:36.955477 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 2 00:27:36.955489 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:27:36.955497 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:27:36.955504 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:27:36.955512 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:27:36.955520 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:27:36.955527 kernel: iommu: Default domain type: Translated
Jul 2 00:27:36.955540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:27:36.955547 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:27:36.955555 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:27:36.955562 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:27:36.955570 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jul 2 00:27:36.955690 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:27:36.955811 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:27:36.955940 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:27:36.955955 kernel: vgaarb: loaded
Jul 2 00:27:36.955963 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:27:36.955971 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:27:36.955978 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:27:36.955986 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:27:36.955994 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:27:36.956002 kernel: pnp: PnP ACPI init
Jul 2 00:27:36.956130 kernel: pnp 00:02: [dma 2]
Jul 2 00:27:36.956144 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 00:27:36.956152 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:27:36.956160 kernel: NET: Registered PF_INET protocol family
Jul 2 00:27:36.956168 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:27:36.956176 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:27:36.956183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:27:36.956191 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:27:36.956199 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:27:36.956206 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:27:36.956216 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:27:36.956224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:27:36.956298 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:27:36.956307 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:27:36.956423 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:27:36.956531 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:27:36.956640 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:27:36.956747 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 00:27:36.956864 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:27:36.956989 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:27:36.957109 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:27:36.957119 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:27:36.957127 kernel: Initialise system trusted keyrings
Jul 2 00:27:36.957135 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:27:36.957143 kernel: Key type asymmetric registered
Jul 2 00:27:36.957150 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:27:36.957158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:27:36.957168 kernel: io scheduler mq-deadline registered
Jul 2 00:27:36.957176 kernel: io scheduler kyber registered
Jul 2 00:27:36.957184 kernel: io scheduler bfq registered
Jul 2 00:27:36.957191 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:27:36.957199 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:27:36.957207 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 00:27:36.957215 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:27:36.957223 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:27:36.957242 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:27:36.957253 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:27:36.957260 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:27:36.957268 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:27:36.957399 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 00:27:36.957513 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 00:27:36.957524 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:27:36.957634 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:27:36 UTC (1719880056)
Jul 2 00:27:36.957746 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 00:27:36.957759 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:27:36.957767 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:27:36.957775 kernel: Segment Routing with IPv6
Jul 2 00:27:36.957782 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:27:36.957790 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:27:36.957797 kernel: Key type dns_resolver registered
Jul 2 00:27:36.957805 kernel: IPI shorthand broadcast: enabled
Jul 2 00:27:36.957813 kernel: sched_clock: Marking stable (703002625, 106080347)->(862529176, -53446204)
Jul 2 00:27:36.957820 kernel: registered taskstats version 1
Jul 2 00:27:36.957830 kernel: Loading compiled-in X.509 certificates
Jul 2 00:27:36.957838 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:27:36.957854 kernel: Key type .fscrypt registered
Jul 2 00:27:36.957862 kernel: Key type fscrypt-provisioning registered
Jul 2 00:27:36.957869 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:27:36.957877 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:27:36.957885 kernel: ima: No architecture policies found
Jul 2 00:27:36.957892 kernel: clk: Disabling unused clocks
Jul 2 00:27:36.957900 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:27:36.957910 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:27:36.957917 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:27:36.957925 kernel: Run /init as init process
Jul 2 00:27:36.957933 kernel: with arguments:
Jul 2 00:27:36.957940 kernel: /init
Jul 2 00:27:36.957948 kernel: with environment:
Jul 2 00:27:36.957955 kernel: HOME=/
Jul 2 00:27:36.957979 kernel: TERM=linux
Jul 2 00:27:36.957989 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:27:36.958001 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:27:36.958010 systemd[1]: Detected virtualization kvm.
Jul 2 00:27:36.958019 systemd[1]: Detected architecture x86-64.
Jul 2 00:27:36.958027 systemd[1]: Running in initrd.
Jul 2 00:27:36.958035 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:27:36.958043 systemd[1]: Hostname set to .
Jul 2 00:27:36.958054 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:27:36.958062 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:27:36.958071 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:27:36.958079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:27:36.958088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:27:36.958097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:27:36.958105 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:27:36.958114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:27:36.958126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:27:36.958135 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:27:36.958143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:27:36.958152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:27:36.958160 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:27:36.958168 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:27:36.958176 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:27:36.958185 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:27:36.958195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:27:36.958204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:27:36.958212 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:27:36.958220 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:27:36.958229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:27:36.958283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:27:36.958292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:27:36.958300 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:27:36.958312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:27:36.958320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:27:36.958329 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:27:36.958337 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:27:36.958345 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:27:36.958354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:27:36.958364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:27:36.958373 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:27:36.958381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:27:36.958389 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:27:36.958416 systemd-journald[192]: Collecting audit messages is disabled.
Jul 2 00:27:36.958437 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:27:36.958446 systemd-journald[192]: Journal started
Jul 2 00:27:36.958467 systemd-journald[192]: Runtime Journal (/run/log/journal/849d81250496458a9ad06446ce5b93e2) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:27:36.951113 systemd-modules-load[194]: Inserted module 'overlay'
Jul 2 00:27:37.001652 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:27:37.002648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:27:37.008465 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:27:37.005074 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:27:37.012044 kernel: Bridge firewalling registered
Jul 2 00:27:37.012044 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 2 00:27:37.014948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:27:37.031204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:27:37.033293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:27:37.034046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:27:37.036386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:27:37.047810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:27:37.052348 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:27:37.062527 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:27:37.063953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:27:37.066152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:27:37.074992 dracut-cmdline[224]: dracut-dracut-053
Jul 2 00:27:37.083561 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:27:37.082670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:27:37.121590 systemd-resolved[237]: Positive Trust Anchors:
Jul 2 00:27:37.121608 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:27:37.121638 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:27:37.124207 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jul 2 00:27:37.125357 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:27:37.131852 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:27:37.218286 kernel: SCSI subsystem initialized
Jul 2 00:27:37.229268 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:27:37.242263 kernel: iscsi: registered transport (tcp)
Jul 2 00:27:37.290291 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:27:37.290358 kernel: QLogic iSCSI HBA Driver
Jul 2 00:27:37.340782 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:27:37.353409 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:27:37.384566 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:27:37.384630 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:27:37.385604 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:27:37.433302 kernel: raid6: avx2x4 gen() 24701 MB/s
Jul 2 00:27:37.450291 kernel: raid6: avx2x2 gen() 28258 MB/s
Jul 2 00:27:37.471278 kernel: raid6: avx2x1 gen() 19461 MB/s
Jul 2 00:27:37.471343 kernel: raid6: using algorithm avx2x2 gen() 28258 MB/s
Jul 2 00:27:37.488469 kernel: raid6: .... xor() 14445 MB/s, rmw enabled
Jul 2 00:27:37.488553 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 00:27:37.517278 kernel: xor: automatically using best checksumming function avx
Jul 2 00:27:37.722298 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:27:37.737752 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:27:37.749375 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:27:37.762933 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 2 00:27:37.768451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:27:37.786864 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:27:37.810261 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jul 2 00:27:37.847428 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:27:37.859376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:27:37.933217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:27:37.940676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:27:37.958383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:27:37.959662 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:27:37.963695 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:27:37.965182 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:27:37.971275 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 2 00:27:37.999763 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:27:37.999969 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:27:37.999987 kernel: GPT:9289727 != 19775487
Jul 2 00:27:38.000002 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:27:38.000017 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:27:38.000031 kernel: GPT:9289727 != 19775487
Jul 2 00:27:38.000045 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:27:38.000060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:27:37.978449 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:27:37.989614 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:27:38.015660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:27:38.015830 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:27:38.022783 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:27:38.026770 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (473)
Jul 2 00:27:38.023521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:27:38.023680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:27:38.047393 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (471)
Jul 2 00:27:38.027264 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:27:38.051268 kernel: libata version 3.00 loaded.
Jul 2 00:27:38.051515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:27:38.056055 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 00:27:38.056078 kernel: AES CTR mode by8 optimization enabled
Jul 2 00:27:38.056091 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 00:27:38.063377 kernel: scsi host0: ata_piix
Jul 2 00:27:38.063588 kernel: scsi host1: ata_piix
Jul 2 00:27:38.063771 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 00:27:38.063797 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 00:27:38.058047 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 00:27:38.078146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:27:38.117489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:27:38.127589 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 00:27:38.138167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 00:27:38.182497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 00:27:38.199413 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:27:38.239977 kernel: ata2: found unknown device (class 0)
Jul 2 00:27:38.240005 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 00:27:38.241100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:27:38.245843 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 00:27:38.264771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:27:38.332351 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 00:27:38.346439 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:27:38.346456 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 00:27:38.360639 disk-uuid[544]: Primary Header is updated.
Jul 2 00:27:38.360639 disk-uuid[544]: Secondary Entries is updated.
Jul 2 00:27:38.360639 disk-uuid[544]: Secondary Header is updated.
Jul 2 00:27:38.364412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:27:38.369272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:27:39.369775 disk-uuid[567]: The operation has completed successfully.
Jul 2 00:27:39.371180 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:27:39.395383 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:27:39.395504 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:27:39.428565 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:27:39.432032 sh[582]: Success
Jul 2 00:27:39.446261 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 00:27:39.478331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:27:39.498800 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:27:39.503129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:27:39.512698 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1
Jul 2 00:27:39.512735 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:27:39.512750 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:27:39.513721 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:27:39.514468 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:27:39.519305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:27:39.520226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:27:39.525377 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:27:39.526501 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:27:39.542944 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:27:39.543002 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:27:39.543016 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:27:39.546689 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:27:39.556772 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:27:39.558602 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:27:39.570639 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:27:39.578468 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:27:39.639206 ignition[688]: Ignition 2.18.0
Jul 2 00:27:39.639221 ignition[688]: Stage: fetch-offline
Jul 2 00:27:39.639290 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:27:39.639304 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:27:39.639525 ignition[688]: parsed url from cmdline: ""
Jul 2 00:27:39.639530 ignition[688]: no config URL provided
Jul 2 00:27:39.639535 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:27:39.639545 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:27:39.639571 ignition[688]: op(1): [started] loading QEMU firmware config module
Jul 2 00:27:39.639576 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:27:39.650130 ignition[688]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:27:39.651330 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:27:39.660397 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:27:39.686395 systemd-networkd[773]: lo: Link UP
Jul 2 00:27:39.686407 systemd-networkd[773]: lo: Gained carrier
Jul 2 00:27:39.688077 systemd-networkd[773]: Enumeration completed
Jul 2 00:27:39.688499 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:27:39.688503 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:27:39.688525 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:27:39.689502 systemd-networkd[773]: eth0: Link UP
Jul 2 00:27:39.689507 systemd-networkd[773]: eth0: Gained carrier
Jul 2 00:27:39.689515 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:27:39.690488 systemd[1]: Reached target network.target - Network.
Jul 2 00:27:39.706747 ignition[688]: parsing config with SHA512: 5012883d1d0b0a016b80475e469350ed2992e0a8df68f9df31c551f2645837a4cf37e65eff069315e31be5da97acf02e19679be35625b34f2eb6e40824b4443e
Jul 2 00:27:39.710560 unknown[688]: fetched base config from "system"
Jul 2 00:27:39.710572 unknown[688]: fetched user config from "qemu"
Jul 2 00:27:39.710948 ignition[688]: fetch-offline: fetch-offline passed
Jul 2 00:27:39.711330 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:27:39.710999 ignition[688]: Ignition finished successfully
Jul 2 00:27:39.713484 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:27:39.716737 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:27:39.722460 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:27:39.740766 ignition[777]: Ignition 2.18.0
Jul 2 00:27:39.740787 ignition[777]: Stage: kargs
Jul 2 00:27:39.740973 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:27:39.740984 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:27:39.742002 ignition[777]: kargs: kargs passed
Jul 2 00:27:39.745598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:27:39.742052 ignition[777]: Ignition finished successfully
Jul 2 00:27:39.758488 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:27:39.772276 ignition[786]: Ignition 2.18.0
Jul 2 00:27:39.772286 ignition[786]: Stage: disks
Jul 2 00:27:39.772470 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:27:39.772484 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:27:39.773611 ignition[786]: disks: disks passed
Jul 2 00:27:39.775887 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:27:39.773665 ignition[786]: Ignition finished successfully
Jul 2 00:27:39.777275 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:27:39.778925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:27:39.779578 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:27:39.779929 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:27:39.780476 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:27:39.792438 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:27:39.806408 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 00:27:39.813379 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:27:39.827491 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:27:39.927277 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none.
Jul 2 00:27:39.927841 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:27:39.930093 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:27:39.946363 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:27:39.949398 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:27:39.952058 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 00:27:39.952119 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:27:39.961248 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Jul 2 00:27:39.961278 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:27:39.961290 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:27:39.961300 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:27:39.954059 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:27:39.963647 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:27:39.965080 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:27:39.967048 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:27:39.970877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:27:40.012658 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:27:40.018957 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:27:40.024423 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:27:40.030312 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:27:40.122712 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:27:40.134347 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:27:40.137089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:27:40.145277 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:27:40.165586 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:27:40.168600 ignition[922]: INFO : Ignition 2.18.0
Jul 2 00:27:40.168600 ignition[922]: INFO : Stage: mount
Jul 2 00:27:40.170311 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:27:40.170311 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:27:40.173119 ignition[922]: INFO : mount: mount passed
Jul 2 00:27:40.173877 ignition[922]: INFO : Ignition finished successfully
Jul 2 00:27:40.176577 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:27:40.185414 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:27:40.512155 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:27:40.525526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:27:40.533874 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934)
Jul 2 00:27:40.533914 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346
Jul 2 00:27:40.533926 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 00:27:40.535424 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:27:40.538262 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 00:27:40.540026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:27:40.569607 ignition[951]: INFO : Ignition 2.18.0
Jul 2 00:27:40.569607 ignition[951]: INFO : Stage: files
Jul 2 00:27:40.571257 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:27:40.571257 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:27:40.571257 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:27:40.575143 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:27:40.575143 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:27:40.579742 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:27:40.581186 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:27:40.582750 unknown[951]: wrote ssh authorized keys file for user: core
Jul 2 00:27:40.583853 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:27:40.585440 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:27:40.587573 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 00:27:40.613125 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:27:40.666942 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 00:27:40.666942 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:27:40.671425 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 00:27:41.142693 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:27:41.241953 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:27:41.241953 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:27:41.269633 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 00:27:41.514174 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 00:27:41.768409 systemd-networkd[773]: eth0: Gained IPv6LL
Jul 2 00:27:41.832992 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 00:27:41.832992 ignition[951]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 2 00:27:41.837153 ignition[951]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:27:41.859484 ignition[951]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:27:41.865098 ignition[951]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:27:41.866851 ignition[951]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:27:41.866851 ignition[951]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:27:41.866851 ignition[951]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:27:41.866851 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:27:41.866851 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:27:41.866851 ignition[951]: INFO : files: files passed
Jul 2 00:27:41.866851 ignition[951]: INFO : Ignition finished successfully
Jul 2 00:27:41.878919 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:27:41.891429 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:27:41.893446 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:27:41.895406 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:27:41.895540 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:27:41.903911 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:27:41.906398 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:27:41.906398 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:27:41.910862 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:27:41.912948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:27:41.914565 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:27:41.923474 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:27:41.949402 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:27:41.949553 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:27:41.951991 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:27:41.954084 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:27:41.956162 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:27:41.965545 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:27:41.982578 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:27:41.997468 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:27:42.008653 systemd[1]: Stopped target network.target - Network.
Jul 2 00:27:42.009786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:27:42.011810 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:27:42.014208 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:27:42.016300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:27:42.016470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:27:42.018708 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:27:42.020499 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:27:42.022649 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:27:42.025009 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:27:42.027196 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:27:42.029482 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:27:42.031897 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:27:42.034392 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:27:42.036471 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:27:42.038744 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:27:42.040619 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:27:42.040822 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:27:42.043060 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:27:42.044797 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:27:42.046958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:27:42.047095 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:27:42.049281 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:27:42.049442 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:27:42.051652 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:27:42.051810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:27:42.053975 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:27:42.055858 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:27:42.056050 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:27:42.058526 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:27:42.060484 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:27:42.062438 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:27:42.062534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:27:42.064584 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:27:42.064676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:27:42.066858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:27:42.066993 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:27:42.068995 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:27:42.069117 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:27:42.080591 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:27:42.082435 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:27:42.082612 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:27:42.086542 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:27:42.096600 ignition[1006]: INFO : Ignition 2.18.0
Jul 2 00:27:42.096600 ignition[1006]: INFO : Stage: umount
Jul 2 00:27:42.088707 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:27:42.102659 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:27:42.102659 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:27:42.102659 ignition[1006]: INFO : umount: umount passed
Jul 2 00:27:42.102659 ignition[1006]: INFO : Ignition finished successfully
Jul 2 00:27:42.091554 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:27:42.093844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:27:42.094140 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:27:42.094399 systemd-networkd[773]: eth0: DHCPv6 lease lost
Jul 2 00:27:42.097277 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:27:42.097446 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:27:42.104899 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:27:42.105076 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:27:42.109218 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:27:42.109408 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:27:42.135140 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:27:42.135328 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:27:42.138149 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:27:42.140929 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:27:42.141084 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:27:42.145650 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:27:42.145706 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:27:42.148033 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:27:42.148101 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:27:42.150171 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:27:42.150222 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:27:42.152165 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:27:42.152217 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:27:42.154200 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:27:42.154280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:27:42.168433 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:27:42.170460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:27:42.170538 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:27:42.171935 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:27:42.171998 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:27:42.173981 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:27:42.174029 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:27:42.176546 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:27:42.176607 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:27:42.233998 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:27:42.244511 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:27:42.244668 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:27:42.262174 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:27:42.262388 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:27:42.304038 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:27:42.304101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:27:42.306300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:27:42.306343 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:27:42.307727 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:27:42.307784 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:27:42.308426 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:27:42.308472 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:27:42.308902 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:27:42.308948 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:27:42.324536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:27:42.326969 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:27:42.327052 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:27:42.329455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:27:42.329507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:27:42.332410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:27:42.332524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:27:42.613226 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:27:42.613451 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:27:42.618366 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:27:42.619913 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:27:42.619995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:27:42.636537 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:27:42.646873 systemd[1]: Switching root.
Jul 2 00:27:42.677921 systemd-journald[192]: Journal stopped
Jul 2 00:27:44.232281 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:27:44.232348 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:27:44.232367 kernel: SELinux: policy capability open_perms=1
Jul 2 00:27:44.232390 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:27:44.232405 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:27:44.232420 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:27:44.232436 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:27:44.232451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:27:44.232466 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:27:44.232481 kernel: audit: type=1403 audit(1719880063.411:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:27:44.232502 systemd[1]: Successfully loaded SELinux policy in 43.903ms.
Jul 2 00:27:44.232558 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.535ms.
Jul 2 00:27:44.232581 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:27:44.232603 systemd[1]: Detected virtualization kvm.
Jul 2 00:27:44.232629 systemd[1]: Detected architecture x86-64.
Jul 2 00:27:44.232654 systemd[1]: Detected first boot.
Jul 2 00:27:44.232684 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:27:44.232708 zram_generator::config[1052]: No configuration found.
Jul 2 00:27:44.232735 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:27:44.232758 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:27:44.232788 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:27:44.232816 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:27:44.232840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:27:44.232864 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:27:44.232888 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:27:44.232914 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:27:44.232938 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:27:44.232962 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:27:44.232988 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:27:44.233009 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:27:44.233030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:27:44.233046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:27:44.233062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:27:44.233078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:27:44.233094 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:27:44.233110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:27:44.233126 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:27:44.233141 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:27:44.233159 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:27:44.233174 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:27:44.233190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:27:44.233213 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:27:44.233231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:27:44.233289 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:27:44.233305 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:27:44.233325 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:27:44.233343 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:27:44.233359 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:27:44.233376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:27:44.233393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:27:44.233409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:27:44.233425 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:27:44.233442 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:27:44.233458 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:27:44.233475 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:27:44.233495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:44.233511 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:27:44.233528 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:27:44.233544 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:27:44.233561 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:27:44.233578 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:27:44.233594 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:27:44.233610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:27:44.233635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:27:44.233652 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:27:44.233669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:27:44.233695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:27:44.233712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:27:44.233729 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:27:44.233745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:27:44.233762 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:27:44.233782 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:27:44.233799 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:27:44.233815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:27:44.233833 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:27:44.233849 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:27:44.233864 kernel: loop: module loaded
Jul 2 00:27:44.233901 systemd-journald[1114]: Collecting audit messages is disabled.
Jul 2 00:27:44.233929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:27:44.233950 kernel: fuse: init (API version 7.39)
Jul 2 00:27:44.233965 systemd-journald[1114]: Journal started
Jul 2 00:27:44.233993 systemd-journald[1114]: Runtime Journal (/run/log/journal/849d81250496458a9ad06446ce5b93e2) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:27:43.942378 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:27:43.959085 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:27:43.959623 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:27:44.238265 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:27:44.244357 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:27:44.250624 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:27:44.252956 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:27:44.252995 systemd[1]: Stopped verity-setup.service.
Jul 2 00:27:44.260921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:44.263261 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:27:44.265281 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:27:44.266702 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:27:44.291530 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:27:44.293271 kernel: ACPI: bus type drm_connector registered
Jul 2 00:27:44.293668 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:27:44.295258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:27:44.297798 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:27:44.299489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:27:44.301382 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:27:44.301649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:27:44.303487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:27:44.303658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:27:44.305360 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:27:44.305531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:27:44.333942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:27:44.334121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:27:44.337646 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:27:44.337833 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:27:44.339569 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:27:44.339754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:27:44.341359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:27:44.343120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:27:44.344749 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:27:44.355078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:27:44.371681 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:27:44.384412 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:27:44.388820 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:27:44.390009 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:27:44.390044 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:27:44.401912 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:27:44.404366 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:27:44.406881 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:27:44.408099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:27:44.418129 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:27:44.421079 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:27:44.422785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:27:44.424605 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:27:44.426051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:27:44.428120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:27:44.432379 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:27:44.435142 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:27:44.441329 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:27:44.443086 systemd-journald[1114]: Time spent on flushing to /var/log/journal/849d81250496458a9ad06446ce5b93e2 is 25.878ms for 950 entries.
Jul 2 00:27:44.443086 systemd-journald[1114]: System Journal (/var/log/journal/849d81250496458a9ad06446ce5b93e2) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:27:44.551089 systemd-journald[1114]: Received client request to flush runtime journal.
Jul 2 00:27:44.551123 kernel: loop0: detected capacity change from 0 to 80568
Jul 2 00:27:44.551137 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:27:44.551217 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:27:44.551251 kernel: loop1: detected capacity change from 0 to 210664
Jul 2 00:27:44.444286 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:27:44.445836 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:27:44.460839 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:27:44.480406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:27:44.528352 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:27:44.546248 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:27:44.554732 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:27:44.557831 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:27:44.560382 kernel: loop2: detected capacity change from 0 to 139904
Jul 2 00:27:44.560221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:27:44.568441 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:27:44.585746 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:27:44.588576 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:27:44.626748 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Jul 2 00:27:44.626768 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Jul 2 00:27:44.633711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:27:44.635556 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:27:44.654262 kernel: loop4: detected capacity change from 0 to 210664
Jul 2 00:27:44.663289 kernel: loop5: detected capacity change from 0 to 139904
Jul 2 00:27:44.670260 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:27:44.671005 (sd-merge)[1190]: Merged extensions into '/usr'.
Jul 2 00:27:44.722938 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:27:44.722960 systemd[1]: Reloading...
Jul 2 00:27:44.781268 zram_generator::config[1219]: No configuration found.
Jul 2 00:27:44.906621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:27:44.909418 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:27:44.965120 systemd[1]: Reloading finished in 241 ms.
Jul 2 00:27:45.005887 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:27:45.007364 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:27:45.015395 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:27:45.017425 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:27:45.033488 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:27:45.045220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:27:45.053424 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:27:45.053441 systemd[1]: Reloading...
Jul 2 00:27:45.074400 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:27:45.074768 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:27:45.075909 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:27:45.076328 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jul 2 00:27:45.076423 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jul 2 00:27:45.080554 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:27:45.080569 systemd-tmpfiles[1255]: Skipping /boot
Jul 2 00:27:45.093638 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:27:45.093668 systemd-tmpfiles[1255]: Skipping /boot
Jul 2 00:27:45.123371 zram_generator::config[1283]: No configuration found.
Jul 2 00:27:45.231019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:27:45.286915 systemd[1]: Reloading finished in 233 ms.
Jul 2 00:27:45.309153 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:27:45.317139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:27:45.328408 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:27:45.331628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:27:45.334482 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:27:45.339339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:27:45.343784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:27:45.348370 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:27:45.353456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:45.353638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:27:45.363151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:27:45.366310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:27:45.372556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:27:45.374488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:27:45.386664 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:27:45.388021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:45.389904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:27:45.392147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:27:45.392508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:27:45.394584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:27:45.394926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:27:45.397914 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:27:45.398290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:27:45.399840 augenrules[1344]: No rules
Jul 2 00:27:45.400221 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:27:45.409345 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jul 2 00:27:45.412674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:27:45.412969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:27:45.422264 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:27:45.426149 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:27:45.431910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:45.432208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:27:45.445600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:27:45.450614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:27:45.454531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:27:45.455928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:27:45.456066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:45.457928 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:27:45.459527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:27:45.461948 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:27:45.466293 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:27:45.474129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:27:45.475324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:27:45.477392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:27:45.478163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:27:45.489064 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:27:45.489364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:27:45.501277 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374)
Jul 2 00:27:45.502290 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:27:45.509983 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:27:45.510598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:27:45.510812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:27:45.520401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:27:45.522817 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:27:45.525063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:27:45.529394 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:27:45.530544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:27:45.532416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:27:45.537476 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 00:27:45.540318 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:27:45.540344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:27:45.541018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:27:45.541190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:27:45.542958 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:27:45.543137 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:27:45.544732 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:27:45.544918 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:27:45.547202 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:27:45.547433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:27:45.552884 systemd-resolved[1323]: Positive Trust Anchors: Jul 2 00:27:45.553254 systemd-resolved[1323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:27:45.553354 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:27:45.568265 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1379) Jul 2 00:27:45.575381 systemd-resolved[1323]: Defaulting to hostname 'linux'. Jul 2 00:27:45.577093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:27:45.577150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:27:45.580312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:27:45.582228 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:27:45.589293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:27:45.599912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:27:45.615779 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 00:27:45.615193 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 2 00:27:45.620483 kernel: ACPI: button: Power Button [PWRF] Jul 2 00:27:45.634438 systemd-networkd[1398]: lo: Link UP Jul 2 00:27:45.634450 systemd-networkd[1398]: lo: Gained carrier Jul 2 00:27:45.636147 systemd-networkd[1398]: Enumeration completed Jul 2 00:27:45.636287 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:27:45.636597 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:27:45.636602 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:27:45.637438 systemd-networkd[1398]: eth0: Link UP Jul 2 00:27:45.637448 systemd-networkd[1398]: eth0: Gained carrier Jul 2 00:27:45.637460 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:27:45.638015 systemd[1]: Reached target network.target - Network. Jul 2 00:27:45.646302 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 00:27:45.648510 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:27:45.650349 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:27:45.662285 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:27:45.664795 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:27:45.668790 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:27:45.668964 systemd-timesyncd[1399]: Initial clock synchronization to Tue 2024-07-02 00:27:46.024207 UTC. 
Jul 2 00:27:45.673270 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 00:27:45.701266 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:27:45.716531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:27:45.802291 kernel: kvm_amd: TSC scaling supported Jul 2 00:27:45.802513 kernel: kvm_amd: Nested Virtualization enabled Jul 2 00:27:45.802550 kernel: kvm_amd: Nested Paging enabled Jul 2 00:27:45.802581 kernel: kvm_amd: LBR virtualization supported Jul 2 00:27:45.802618 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 2 00:27:45.802648 kernel: kvm_amd: Virtual GIF supported Jul 2 00:27:45.817623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:27:45.827288 kernel: EDAC MC: Ver: 3.0.0 Jul 2 00:27:45.859856 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:27:45.876608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:27:45.887321 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:27:45.915805 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:27:45.918252 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:27:45.919434 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:27:45.920676 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:27:45.921983 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:27:45.923498 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:27:45.924758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 2 00:27:45.926058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:27:45.927349 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:27:45.927384 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:27:45.928297 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:27:45.930073 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:27:45.932966 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:27:45.950793 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:27:45.953845 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:27:45.955834 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:27:45.957105 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:27:45.958133 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:27:45.958614 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:27:45.958654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:27:45.960076 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:27:45.962349 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:27:45.966254 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:27:45.966614 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:27:45.971611 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 2 00:27:45.972923 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:27:45.974663 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:27:45.978151 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:27:45.983435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:27:45.986404 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:27:45.987121 jq[1430]: false Jul 2 00:27:45.999452 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:27:46.000086 extend-filesystems[1431]: Found loop3 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found loop4 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found loop5 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found sr0 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda1 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda2 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda3 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found usr Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda4 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda6 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda7 Jul 2 00:27:46.001948 extend-filesystems[1431]: Found vda9 Jul 2 00:27:46.001948 extend-filesystems[1431]: Checking size of /dev/vda9 Jul 2 00:27:46.001227 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 2 00:27:46.010462 dbus-daemon[1429]: [system] SELinux support is enabled Jul 2 00:27:46.030491 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:27:46.030556 extend-filesystems[1431]: Resized partition /dev/vda9 Jul 2 00:27:46.006352 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:27:46.034321 extend-filesystems[1451]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:27:46.010633 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:27:46.039337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1381) Jul 2 00:27:46.022937 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:27:46.025487 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:27:46.039540 jq[1449]: true Jul 2 00:27:46.034121 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:27:46.053826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:27:46.055764 update_engine[1445]: I0702 00:27:46.054044 1445 main.cc:92] Flatcar Update Engine starting Jul 2 00:27:46.055764 update_engine[1445]: I0702 00:27:46.055491 1445 update_check_scheduler.cc:74] Next update check in 4m38s Jul 2 00:27:46.054134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:27:46.054542 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:27:46.054789 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:27:46.057352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:27:46.058056 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 00:27:46.075213 jq[1456]: true Jul 2 00:27:46.077439 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:27:46.082967 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 00:27:46.082992 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:27:46.084325 systemd-logind[1439]: New seat seat0. Jul 2 00:27:46.088894 tar[1455]: linux-amd64/helm Jul 2 00:27:46.093390 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:27:46.097350 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:27:46.097836 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:27:46.104811 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:27:46.104983 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:27:46.107761 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:27:46.107898 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:27:46.116530 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:27:46.127542 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:27:46.127542 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:27:46.127542 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 2 00:27:46.132040 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Jul 2 00:27:46.133807 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:27:46.134092 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:27:46.155048 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:27:46.156234 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:27:46.157528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:27:46.159954 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:27:46.235181 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:27:46.258890 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:27:46.267817 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:27:46.277711 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:27:46.278101 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:27:46.297927 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:27:46.309216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:27:46.320657 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:27:46.323689 containerd[1459]: time="2024-07-02T00:27:46.323599122Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:27:46.324076 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:27:46.325711 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:27:46.357017 containerd[1459]: time="2024-07-02T00:27:46.356949598Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 2 00:27:46.357017 containerd[1459]: time="2024-07-02T00:27:46.357020305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:27:46.359000 containerd[1459]: time="2024-07-02T00:27:46.358919226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:27:46.359045 containerd[1459]: time="2024-07-02T00:27:46.359005858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:27:46.359460 containerd[1459]: time="2024-07-02T00:27:46.359414298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:27:46.359494 containerd[1459]: time="2024-07-02T00:27:46.359458478Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:27:46.359651 containerd[1459]: time="2024-07-02T00:27:46.359625675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:27:46.359737 containerd[1459]: time="2024-07-02T00:27:46.359709117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:27:46.359764 containerd[1459]: time="2024-07-02T00:27:46.359735832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:27:46.360533 containerd[1459]: time="2024-07-02T00:27:46.359834217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:27:46.360533 containerd[1459]: time="2024-07-02T00:27:46.360328997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:27:46.360533 containerd[1459]: time="2024-07-02T00:27:46.360352416Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:27:46.360533 containerd[1459]: time="2024-07-02T00:27:46.360363529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:27:46.360718 containerd[1459]: time="2024-07-02T00:27:46.360544036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:27:46.360718 containerd[1459]: time="2024-07-02T00:27:46.360566712Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:27:46.360718 containerd[1459]: time="2024-07-02T00:27:46.360645036Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:27:46.360718 containerd[1459]: time="2024-07-02T00:27:46.360662156Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367130447Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367163336Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367176531Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367210247Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367225023Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367236743Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:27:46.367302 containerd[1459]: time="2024-07-02T00:27:46.367258519Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:27:46.367598 containerd[1459]: time="2024-07-02T00:27:46.367582198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:27:46.367655 containerd[1459]: time="2024-07-02T00:27:46.367643194Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:27:46.367713 containerd[1459]: time="2024-07-02T00:27:46.367700401Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367749228Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367772375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367793177Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367810841Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367828954Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367843866Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367858338Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367870979Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.367882991Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:27:46.368044 containerd[1459]: time="2024-07-02T00:27:46.368001373Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:27:46.368522 containerd[1459]: time="2024-07-02T00:27:46.368504210Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:27:46.368605 containerd[1459]: time="2024-07-02T00:27:46.368591126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.368664 containerd[1459]: time="2024-07-02T00:27:46.368651704Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 2 00:27:46.368728 containerd[1459]: time="2024-07-02T00:27:46.368716090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:27:46.368823 containerd[1459]: time="2024-07-02T00:27:46.368810310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.368908 containerd[1459]: time="2024-07-02T00:27:46.368889263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.368961 containerd[1459]: time="2024-07-02T00:27:46.368949422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.369006 containerd[1459]: time="2024-07-02T00:27:46.368995998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.369053 containerd[1459]: time="2024-07-02T00:27:46.369042407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369088721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369103235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369114955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369129197Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369309202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369326270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369346517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369359305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369372688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369386345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369399854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:27:46.370185 containerd[1459]: time="2024-07-02T00:27:46.369413008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:27:46.370465 containerd[1459]: time="2024-07-02T00:27:46.369691398Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:27:46.370465 containerd[1459]: time="2024-07-02T00:27:46.369741657Z" level=info msg="Connect containerd service" Jul 2 00:27:46.370465 containerd[1459]: time="2024-07-02T00:27:46.369765118Z" level=info msg="using legacy CRI server" Jul 2 00:27:46.370465 containerd[1459]: time="2024-07-02T00:27:46.369772244Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:27:46.370465 containerd[1459]: time="2024-07-02T00:27:46.369861671Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:27:46.370971 containerd[1459]: time="2024-07-02T00:27:46.370949249Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:27:46.371089 containerd[1459]: time="2024-07-02T00:27:46.371069902Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:27:46.371215 containerd[1459]: time="2024-07-02T00:27:46.371198685Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:27:46.371283 containerd[1459]: time="2024-07-02T00:27:46.371270124Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:27:46.371351 containerd[1459]: time="2024-07-02T00:27:46.371337492Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:27:46.371473 containerd[1459]: time="2024-07-02T00:27:46.371165325Z" level=info msg="Start subscribing containerd event" Jul 2 00:27:46.371521 containerd[1459]: time="2024-07-02T00:27:46.371494194Z" level=info msg="Start recovering state" Jul 2 00:27:46.371599 containerd[1459]: time="2024-07-02T00:27:46.371576840Z" level=info msg="Start event monitor" Jul 2 00:27:46.371599 containerd[1459]: time="2024-07-02T00:27:46.371595822Z" level=info msg="Start snapshots syncer" Jul 2 00:27:46.371666 containerd[1459]: time="2024-07-02T00:27:46.371608620Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:27:46.371666 containerd[1459]: time="2024-07-02T00:27:46.371616845Z" level=info msg="Start streaming server" Jul 2 00:27:46.371947 containerd[1459]: time="2024-07-02T00:27:46.371930447Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:27:46.372112 containerd[1459]: time="2024-07-02T00:27:46.372039956Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:27:46.372345 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:27:46.373696 containerd[1459]: time="2024-07-02T00:27:46.372789340Z" level=info msg="containerd successfully booted in 0.052455s" Jul 2 00:27:46.474917 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 2 00:27:46.477678 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:49730.service - OpenSSH per-connection server daemon (10.0.0.1:49730). Jul 2 00:27:46.521724 tar[1455]: linux-amd64/LICENSE Jul 2 00:27:46.521809 tar[1455]: linux-amd64/README.md Jul 2 00:27:46.523978 sshd[1518]: Accepted publickey for core from 10.0.0.1 port 49730 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:46.526516 sshd[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:46.540568 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:27:46.545096 systemd-logind[1439]: New session 1 of user core. Jul 2 00:27:46.546609 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:27:46.557674 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:27:46.571949 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:27:46.576366 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:27:46.585383 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:46.705198 systemd[1525]: Queued start job for default target default.target. Jul 2 00:27:46.720664 systemd[1525]: Created slice app.slice - User Application Slice. Jul 2 00:27:46.720692 systemd[1525]: Reached target paths.target - Paths. Jul 2 00:27:46.720706 systemd[1525]: Reached target timers.target - Timers. Jul 2 00:27:46.722453 systemd[1525]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:27:46.737453 systemd[1525]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:27:46.737625 systemd[1525]: Reached target sockets.target - Sockets. Jul 2 00:27:46.737648 systemd[1525]: Reached target basic.target - Basic System. Jul 2 00:27:46.737704 systemd[1525]: Reached target default.target - Main User Target. 
Jul 2 00:27:46.737745 systemd[1525]: Startup finished in 145ms. Jul 2 00:27:46.737991 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:27:46.740674 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:27:46.804609 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:49740.service - OpenSSH per-connection server daemon (10.0.0.1:49740). Jul 2 00:27:46.824396 systemd-networkd[1398]: eth0: Gained IPv6LL Jul 2 00:27:46.828132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:27:46.830266 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:27:46.842521 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:27:46.844903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:27:46.847338 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:27:46.864232 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 49740 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:46.865880 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:46.872542 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:27:46.879492 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:27:46.879755 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:27:46.882995 systemd-logind[1439]: New session 2 of user core. Jul 2 00:27:46.884268 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:27:46.885669 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:27:46.942660 sshd[1536]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:46.949979 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:49740.service: Deactivated successfully. 
Jul 2 00:27:46.951732 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:27:46.953292 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:27:46.961692 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:49744.service - OpenSSH per-connection server daemon (10.0.0.1:49744). Jul 2 00:27:46.964070 systemd-logind[1439]: Removed session 2. Jul 2 00:27:46.991218 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 49744 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:46.992792 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:46.997302 systemd-logind[1439]: New session 3 of user core. Jul 2 00:27:47.003400 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:27:47.061171 sshd[1560]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:47.065766 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:49744.service: Deactivated successfully. Jul 2 00:27:47.067811 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:27:47.068582 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:27:47.069481 systemd-logind[1439]: Removed session 3. Jul 2 00:27:47.562962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:27:47.564719 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:27:47.565942 systemd[1]: Startup finished in 872ms (kernel) + 6.672s (initrd) + 4.196s (userspace) = 11.741s. 
Jul 2 00:27:47.591877 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:27:48.061571 kubelet[1571]: E0702 00:27:48.061350 1571 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:27:48.066160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:27:48.066391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:27:48.066772 systemd[1]: kubelet.service: Consumed 1.004s CPU time. Jul 2 00:27:57.297099 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:44284.service - OpenSSH per-connection server daemon (10.0.0.1:44284). Jul 2 00:27:57.331904 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 44284 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:57.333562 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:57.337738 systemd-logind[1439]: New session 4 of user core. Jul 2 00:27:57.348378 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:27:57.401925 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:57.413061 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:44284.service: Deactivated successfully. Jul 2 00:27:57.414718 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:27:57.416377 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:27:57.417693 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:44298.service - OpenSSH per-connection server daemon (10.0.0.1:44298). Jul 2 00:27:57.418400 systemd-logind[1439]: Removed session 4. 
Jul 2 00:27:57.453106 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 44298 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:57.454684 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:57.458723 systemd-logind[1439]: New session 5 of user core. Jul 2 00:27:57.468382 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:27:57.518592 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:57.525950 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:44298.service: Deactivated successfully. Jul 2 00:27:57.527622 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:27:57.529055 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:27:57.536504 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:44304.service - OpenSSH per-connection server daemon (10.0.0.1:44304). Jul 2 00:27:57.537496 systemd-logind[1439]: Removed session 5. Jul 2 00:27:57.566547 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 44304 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:57.568012 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:57.572367 systemd-logind[1439]: New session 6 of user core. Jul 2 00:27:57.582366 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:27:57.638172 sshd[1599]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:57.661162 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:44304.service: Deactivated successfully. Jul 2 00:27:57.663061 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:27:57.664778 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:27:57.681512 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:44312.service - OpenSSH per-connection server daemon (10.0.0.1:44312). Jul 2 00:27:57.682414 systemd-logind[1439]: Removed session 6. 
Jul 2 00:27:57.711342 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 44312 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:57.712859 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:57.716744 systemd-logind[1439]: New session 7 of user core. Jul 2 00:27:57.728418 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:27:57.786788 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:27:57.787114 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:27:57.802961 sudo[1609]: pam_unix(sudo:session): session closed for user root Jul 2 00:27:57.804984 sshd[1606]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:57.817156 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:44312.service: Deactivated successfully. Jul 2 00:27:57.819258 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:27:57.820879 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:27:57.822216 systemd[1]: Started sshd@7-10.0.0.160:22-10.0.0.1:44318.service - OpenSSH per-connection server daemon (10.0.0.1:44318). Jul 2 00:27:57.823022 systemd-logind[1439]: Removed session 7. Jul 2 00:27:57.856308 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 44318 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:57.857808 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:57.861736 systemd-logind[1439]: New session 8 of user core. Jul 2 00:27:57.871365 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 00:27:57.926884 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:27:57.927177 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:27:57.931152 sudo[1618]: pam_unix(sudo:session): session closed for user root Jul 2 00:27:57.937598 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:27:57.937906 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:27:57.955476 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:27:57.957296 auditctl[1621]: No rules Jul 2 00:27:57.958655 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:27:57.958912 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:27:57.960650 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:27:57.991023 augenrules[1639]: No rules Jul 2 00:27:57.992815 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:27:57.994340 sudo[1617]: pam_unix(sudo:session): session closed for user root Jul 2 00:27:57.996019 sshd[1614]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:58.006333 systemd[1]: sshd@7-10.0.0.160:22-10.0.0.1:44318.service: Deactivated successfully. Jul 2 00:27:58.008164 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:27:58.009897 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:27:58.020643 systemd[1]: Started sshd@8-10.0.0.160:22-10.0.0.1:43366.service - OpenSSH per-connection server daemon (10.0.0.1:43366). Jul 2 00:27:58.021674 systemd-logind[1439]: Removed session 8. 
Jul 2 00:27:58.050331 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 43366 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:27:58.052184 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:58.056209 systemd-logind[1439]: New session 9 of user core. Jul 2 00:27:58.065365 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:27:58.066869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:27:58.068670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:27:58.120982 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:27:58.121301 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:27:58.239506 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:27:58.239702 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:27:58.261603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:27:58.266781 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:27:58.330022 kubelet[1670]: E0702 00:27:58.329877 1670 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:27:58.337047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:27:58.337325 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:27:58.496485 dockerd[1664]: time="2024-07-02T00:27:58.496416872Z" level=info msg="Starting up" Jul 2 00:28:00.577012 dockerd[1664]: time="2024-07-02T00:28:00.576908885Z" level=info msg="Loading containers: start." Jul 2 00:28:01.290282 kernel: Initializing XFRM netlink socket Jul 2 00:28:01.370973 systemd-networkd[1398]: docker0: Link UP Jul 2 00:28:01.777956 dockerd[1664]: time="2024-07-02T00:28:01.777826630Z" level=info msg="Loading containers: done." Jul 2 00:28:01.825865 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2042764065-merged.mount: Deactivated successfully. Jul 2 00:28:02.100352 dockerd[1664]: time="2024-07-02T00:28:02.100176743Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:28:02.100543 dockerd[1664]: time="2024-07-02T00:28:02.100428537Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:28:02.100581 dockerd[1664]: time="2024-07-02T00:28:02.100553573Z" level=info msg="Daemon has completed initialization" Jul 2 00:28:02.787179 dockerd[1664]: time="2024-07-02T00:28:02.787094867Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:28:02.787346 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:28:03.313161 containerd[1459]: time="2024-07-02T00:28:03.313118040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 00:28:03.985738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690324208.mount: Deactivated successfully. 
Jul 2 00:28:05.645464 containerd[1459]: time="2024-07-02T00:28:05.645395405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:05.646904 containerd[1459]: time="2024-07-02T00:28:05.646852747Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 00:28:05.648223 containerd[1459]: time="2024-07-02T00:28:05.648181195Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:05.652866 containerd[1459]: time="2024-07-02T00:28:05.652803413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:05.655866 containerd[1459]: time="2024-07-02T00:28:05.654271512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 2.341107524s" Jul 2 00:28:05.655866 containerd[1459]: time="2024-07-02T00:28:05.654309778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 00:28:05.677766 containerd[1459]: time="2024-07-02T00:28:05.677725637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 00:28:08.403425 containerd[1459]: time="2024-07-02T00:28:08.403355805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:08.413838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:28:08.421538 containerd[1459]: time="2024-07-02T00:28:08.421429019Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 00:28:08.424039 containerd[1459]: time="2024-07-02T00:28:08.423972638Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:08.424487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:08.427925 containerd[1459]: time="2024-07-02T00:28:08.427838399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:08.429148 containerd[1459]: time="2024-07-02T00:28:08.429101090Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.75133147s" Jul 2 00:28:08.429148 containerd[1459]: time="2024-07-02T00:28:08.429139502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 00:28:08.459517 containerd[1459]: time="2024-07-02T00:28:08.459470558Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 00:28:08.575650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:28:08.580977 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:28:08.621845 kubelet[1898]: E0702 00:28:08.621742 1898 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:28:08.625729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:28:08.625971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:28:10.583426 containerd[1459]: time="2024-07-02T00:28:10.583365275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:10.584747 containerd[1459]: time="2024-07-02T00:28:10.584701062Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 00:28:10.586359 containerd[1459]: time="2024-07-02T00:28:10.586322467Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:10.590885 containerd[1459]: time="2024-07-02T00:28:10.590842680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:10.592142 containerd[1459]: time="2024-07-02T00:28:10.592113618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 2.132595606s" Jul 2 00:28:10.592142 containerd[1459]: time="2024-07-02T00:28:10.592144512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 00:28:10.614961 containerd[1459]: time="2024-07-02T00:28:10.614915632Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 00:28:11.812949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259093620.mount: Deactivated successfully. Jul 2 00:28:12.068975 containerd[1459]: time="2024-07-02T00:28:12.068855096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:12.069826 containerd[1459]: time="2024-07-02T00:28:12.069787178Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 00:28:12.070866 containerd[1459]: time="2024-07-02T00:28:12.070820208Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:12.072993 containerd[1459]: time="2024-07-02T00:28:12.072962014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:12.073602 containerd[1459]: time="2024-07-02T00:28:12.073566430Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 1.458602606s" Jul 2 00:28:12.073636 containerd[1459]: time="2024-07-02T00:28:12.073601745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 00:28:12.094035 containerd[1459]: time="2024-07-02T00:28:12.093968257Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:28:12.944641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984461469.mount: Deactivated successfully. Jul 2 00:28:14.253721 containerd[1459]: time="2024-07-02T00:28:14.253649797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:14.254614 containerd[1459]: time="2024-07-02T00:28:14.254569332Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 00:28:14.256015 containerd[1459]: time="2024-07-02T00:28:14.255953197Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:14.259063 containerd[1459]: time="2024-07-02T00:28:14.259008279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:14.260249 containerd[1459]: time="2024-07-02T00:28:14.260208847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.166205449s" Jul 2 00:28:14.260297 containerd[1459]: time="2024-07-02T00:28:14.260250669Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 00:28:14.282750 containerd[1459]: time="2024-07-02T00:28:14.282703331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:28:14.763904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166101349.mount: Deactivated successfully. Jul 2 00:28:14.769927 containerd[1459]: time="2024-07-02T00:28:14.769879775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:14.770769 containerd[1459]: time="2024-07-02T00:28:14.770720799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:28:14.771768 containerd[1459]: time="2024-07-02T00:28:14.771721612Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:14.773871 containerd[1459]: time="2024-07-02T00:28:14.773824022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:14.774570 containerd[1459]: time="2024-07-02T00:28:14.774522148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 491.777797ms" Jul 2 
00:28:14.774570 containerd[1459]: time="2024-07-02T00:28:14.774552336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:28:14.797146 containerd[1459]: time="2024-07-02T00:28:14.797105152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 00:28:15.371723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492303989.mount: Deactivated successfully. Jul 2 00:28:17.952869 containerd[1459]: time="2024-07-02T00:28:17.952807594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:17.953522 containerd[1459]: time="2024-07-02T00:28:17.953453046Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 00:28:17.954862 containerd[1459]: time="2024-07-02T00:28:17.954818390Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:17.958189 containerd[1459]: time="2024-07-02T00:28:17.958149130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:28:17.959399 containerd[1459]: time="2024-07-02T00:28:17.959360100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.162209488s" Jul 2 00:28:17.959399 containerd[1459]: time="2024-07-02T00:28:17.959393810Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference 
\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 00:28:18.663892 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 00:28:18.674654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:18.836135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:18.858941 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:28:18.911171 kubelet[2119]: E0702 00:28:18.911100 2119 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:28:18.915519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:28:18.915728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:28:20.558257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:20.573509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:20.592444 systemd[1]: Reloading requested from client PID 2135 ('systemctl') (unit session-9.scope)... Jul 2 00:28:20.592464 systemd[1]: Reloading... Jul 2 00:28:20.673278 zram_generator::config[2172]: No configuration found. Jul 2 00:28:21.243538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:28:21.346068 systemd[1]: Reloading finished in 753 ms. Jul 2 00:28:21.407016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:28:21.411294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:28:21.412122 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:28:21.412540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:28:21.425664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:28:21.575818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:28:21.581093 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:28:21.616673 kubelet[2222]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:28:21.616673 kubelet[2222]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:28:21.616673 kubelet[2222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:28:21.617963 kubelet[2222]: I0702 00:28:21.617915 2222 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:28:21.932108 kubelet[2222]: I0702 00:28:21.932063 2222 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 00:28:21.932108 kubelet[2222]: I0702 00:28:21.932093 2222 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:28:21.932336 kubelet[2222]: I0702 00:28:21.932315 2222 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 00:28:21.947659 kubelet[2222]: I0702 00:28:21.947610 2222 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:28:21.948512 kubelet[2222]: E0702 00:28:21.948495 2222 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.959988 kubelet[2222]: I0702 00:28:21.959664 2222 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:28:21.961026 kubelet[2222]: I0702 00:28:21.960840 2222 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:28:21.961251 kubelet[2222]: I0702 00:28:21.961002 2222 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:28:21.961366 kubelet[2222]: I0702 00:28:21.961270 2222 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:28:21.961366 kubelet[2222]: I0702 00:28:21.961285 2222 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:28:21.961442 kubelet[2222]: I0702 00:28:21.961434 2222 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:28:21.962039 kubelet[2222]: I0702 00:28:21.962014 2222 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 00:28:21.962039 kubelet[2222]: I0702 00:28:21.962031 2222 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:28:21.962109 kubelet[2222]: I0702 00:28:21.962056 2222 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:28:21.962109 kubelet[2222]: I0702 00:28:21.962079 2222 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:28:21.964516 kubelet[2222]: W0702 00:28:21.964463 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.964584 kubelet[2222]: E0702 00:28:21.964526 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.964757 kubelet[2222]: W0702 00:28:21.964728 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.964806 kubelet[2222]: E0702 00:28:21.964761 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.967492 kubelet[2222]: I0702 00:28:21.967461 2222 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:28:21.968686 kubelet[2222]: I0702 00:28:21.968666 2222 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:28:21.968737 kubelet[2222]: W0702 00:28:21.968724 2222 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:28:21.969394 kubelet[2222]: I0702 00:28:21.969376 2222 server.go:1264] "Started kubelet"
Jul 2 00:28:21.969893 kubelet[2222]: I0702 00:28:21.969583 2222 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:28:21.969981 kubelet[2222]: I0702 00:28:21.969957 2222 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:28:21.970019 kubelet[2222]: I0702 00:28:21.970002 2222 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:28:21.970977 kubelet[2222]: I0702 00:28:21.970756 2222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:28:21.971059 kubelet[2222]: I0702 00:28:21.971039 2222 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 00:28:21.973524 kubelet[2222]: E0702 00:28:21.973418 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3dd726256791 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:28:21.969356689 +0000 UTC m=+0.384468827,LastTimestamp:2024-07-02 00:28:21.969356689 +0000 UTC m=+0.384468827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 00:28:21.973652 kubelet[2222]: E0702 00:28:21.973545 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 00:28:21.973652 kubelet[2222]: I0702 00:28:21.973589 2222 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:28:21.973712 kubelet[2222]: I0702 00:28:21.973664 2222 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 00:28:21.973712 kubelet[2222]: I0702 00:28:21.973711 2222 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 00:28:21.974352 kubelet[2222]: W0702 00:28:21.973974 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.974352 kubelet[2222]: E0702 00:28:21.974036 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.974352 kubelet[2222]: E0702 00:28:21.974161 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="200ms"
Jul 2 00:28:21.974539 kubelet[2222]: E0702 00:28:21.974518 2222 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:28:21.974680 kubelet[2222]: I0702 00:28:21.974658 2222 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:28:21.975523 kubelet[2222]: I0702 00:28:21.975505 2222 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:28:21.975523 kubelet[2222]: I0702 00:28:21.975521 2222 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:28:21.989336 kubelet[2222]: I0702 00:28:21.989101 2222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:28:21.991354 kubelet[2222]: I0702 00:28:21.990532 2222 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:28:21.991354 kubelet[2222]: I0702 00:28:21.990548 2222 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:28:21.991354 kubelet[2222]: I0702 00:28:21.990567 2222 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:28:21.991354 kubelet[2222]: I0702 00:28:21.991116 2222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:28:21.991354 kubelet[2222]: I0702 00:28:21.991146 2222 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:28:21.991354 kubelet[2222]: I0702 00:28:21.991168 2222 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 00:28:21.991354 kubelet[2222]: E0702 00:28:21.991214 2222 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:28:21.992281 kubelet[2222]: W0702 00:28:21.992191 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:21.992281 kubelet[2222]: E0702 00:28:21.992270 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:22.075622 kubelet[2222]: I0702 00:28:22.075589 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:28:22.075936 kubelet[2222]: E0702 00:28:22.075913 2222 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Jul 2 00:28:22.091615 kubelet[2222]: E0702 00:28:22.091594 2222 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:28:22.175427 kubelet[2222]: E0702 00:28:22.175374 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="400ms"
Jul 2 00:28:22.278222 kubelet[2222]: I0702 00:28:22.278098 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:28:22.278557 kubelet[2222]: E0702 00:28:22.278409 2222 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Jul 2 00:28:22.292715 kubelet[2222]: E0702 00:28:22.292659 2222 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:28:22.576154 kubelet[2222]: E0702 00:28:22.576009 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="800ms"
Jul 2 00:28:22.627909 kubelet[2222]: I0702 00:28:22.627834 2222 policy_none.go:49] "None policy: Start"
Jul 2 00:28:22.628609 kubelet[2222]: I0702 00:28:22.628591 2222 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:28:22.628609 kubelet[2222]: I0702 00:28:22.628618 2222 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:28:22.667598 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:28:22.679645 kubelet[2222]: I0702 00:28:22.679623 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:28:22.679993 kubelet[2222]: E0702 00:28:22.679949 2222 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Jul 2 00:28:22.686708 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:28:22.690255 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:28:22.693077 kubelet[2222]: E0702 00:28:22.693034 2222 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:28:22.705267 kubelet[2222]: I0702 00:28:22.705223 2222 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:28:22.705697 kubelet[2222]: I0702 00:28:22.705493 2222 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 00:28:22.705697 kubelet[2222]: I0702 00:28:22.705643 2222 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:28:22.706705 kubelet[2222]: E0702 00:28:22.706681 2222 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 00:28:22.906505 kubelet[2222]: W0702 00:28:22.906343 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:22.906505 kubelet[2222]: E0702 00:28:22.906413 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:22.941165 kubelet[2222]: W0702 00:28:22.941068 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:22.941165 kubelet[2222]: E0702 00:28:22.941162 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:23.375899 kubelet[2222]: W0702 00:28:23.375807 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:23.375899 kubelet[2222]: E0702 00:28:23.375894 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:23.376989 kubelet[2222]: E0702 00:28:23.376954 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="1.6s"
Jul 2 00:28:23.481934 kubelet[2222]: I0702 00:28:23.481878 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:28:23.482225 kubelet[2222]: E0702 00:28:23.482198 2222 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Jul 2 00:28:23.493527 kubelet[2222]: I0702 00:28:23.493456 2222 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 00:28:23.494496 kubelet[2222]: I0702 00:28:23.494471 2222 topology_manager.go:215] "Topology Admit Handler" podUID="bf9a3f5932cf3518b8e16be1366b5e36" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 00:28:23.495427 kubelet[2222]: I0702 00:28:23.495390 2222 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 00:28:23.500444 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice.
Jul 2 00:28:23.525659 systemd[1]: Created slice kubepods-burstable-podbf9a3f5932cf3518b8e16be1366b5e36.slice - libcontainer container kubepods-burstable-podbf9a3f5932cf3518b8e16be1366b5e36.slice.
Jul 2 00:28:23.538229 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice.
Jul 2 00:28:23.541404 kubelet[2222]: W0702 00:28:23.541366 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:23.541404 kubelet[2222]: E0702 00:28:23.541401 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:23.582919 kubelet[2222]: I0702 00:28:23.582863 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf9a3f5932cf3518b8e16be1366b5e36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf9a3f5932cf3518b8e16be1366b5e36\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:28:23.582998 kubelet[2222]: I0702 00:28:23.582921 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 00:28:23.582998 kubelet[2222]: I0702 00:28:23.582944 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:28:23.582998 kubelet[2222]: I0702 00:28:23.582967 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:28:23.583075 kubelet[2222]: I0702 00:28:23.583043 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf9a3f5932cf3518b8e16be1366b5e36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf9a3f5932cf3518b8e16be1366b5e36\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:28:23.583115 kubelet[2222]: I0702 00:28:23.583089 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf9a3f5932cf3518b8e16be1366b5e36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf9a3f5932cf3518b8e16be1366b5e36\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:28:23.583146 kubelet[2222]: I0702 00:28:23.583116 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:28:23.583146 kubelet[2222]: I0702 00:28:23.583137 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:28:23.583190 kubelet[2222]: I0702 00:28:23.583153 2222 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:28:23.822539 kubelet[2222]: E0702 00:28:23.822462 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:28:23.823273 containerd[1459]: time="2024-07-02T00:28:23.823201698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}"
Jul 2 00:28:23.837712 kubelet[2222]: E0702 00:28:23.837650 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:28:23.838336 containerd[1459]: time="2024-07-02T00:28:23.838272366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf9a3f5932cf3518b8e16be1366b5e36,Namespace:kube-system,Attempt:0,}"
Jul 2 00:28:23.840709 kubelet[2222]: E0702 00:28:23.840684 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:28:23.841200 containerd[1459]: time="2024-07-02T00:28:23.841153687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}"
Jul 2 00:28:24.052699 kubelet[2222]: E0702 00:28:24.052639 2222 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:24.656603 kubelet[2222]: W0702 00:28:24.656528 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:24.656603 kubelet[2222]: E0702 00:28:24.656567 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:24.978129 kubelet[2222]: E0702 00:28:24.978069 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="3.2s"
Jul 2 00:28:25.083704 kubelet[2222]: I0702 00:28:25.083660 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:28:25.084031 kubelet[2222]: E0702 00:28:25.084000 2222 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Jul 2 00:28:25.326540 kubelet[2222]: W0702 00:28:25.326044 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:25.326540 kubelet[2222]: E0702 00:28:25.326462 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:25.450383 kubelet[2222]: E0702 00:28:25.450218 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.160:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3dd726256791 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:28:21.969356689 +0000 UTC m=+0.384468827,LastTimestamp:2024-07-02 00:28:21.969356689 +0000 UTC m=+0.384468827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 00:28:25.726638 kubelet[2222]: W0702 00:28:25.726579 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:25.726638 kubelet[2222]: E0702 00:28:25.726628 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:26.464040 kubelet[2222]: W0702 00:28:26.463982 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:26.464040 kubelet[2222]: E0702 00:28:26.464034 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:28.067775 kubelet[2222]: W0702 00:28:28.067707 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:28.067775 kubelet[2222]: E0702 00:28:28.067757 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:28.178912 kubelet[2222]: E0702 00:28:28.178858 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.160:6443: connect: connection refused" interval="6.4s"
Jul 2 00:28:28.180377 kubelet[2222]: E0702 00:28:28.180349 2222 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.160:6443: connect: connection refused
Jul 2 00:28:28.285520 kubelet[2222]: I0702 00:28:28.285467 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 00:28:28.285938 kubelet[2222]: E0702 00:28:28.285901 2222 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.160:6443/api/v1/nodes\": dial tcp 10.0.0.160:6443: connect: connection refused" node="localhost"
Jul 2 00:28:28.951797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597746990.mount: Deactivated successfully.
Jul 2 00:28:29.275188 kubelet[2222]: W0702 00:28:29.275008 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Jul 2 00:28:29.275188 kubelet[2222]: E0702 00:28:29.275062 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Jul 2 00:28:29.524414 containerd[1459]: time="2024-07-02T00:28:29.524340143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:29.614543 containerd[1459]: time="2024-07-02T00:28:29.614369160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:29.723570 containerd[1459]: time="2024-07-02T00:28:29.723471353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:28:29.794412 containerd[1459]: time="2024-07-02T00:28:29.794310425Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:29.867756 kubelet[2222]: W0702 00:28:29.867567 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Jul 2 00:28:29.867756 kubelet[2222]: E0702 00:28:29.867625 2222 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Jul 2 00:28:29.909313 containerd[1459]: time="2024-07-02T00:28:29.909194238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:28:30.019483 containerd[1459]: time="2024-07-02T00:28:30.019408696Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:30.074400 containerd[1459]: time="2024-07-02T00:28:30.074325197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:28:30.145067 containerd[1459]: time="2024-07-02T00:28:30.144909564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:28:30.146031 containerd[1459]: time="2024-07-02T00:28:30.145987207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.322639829s" Jul 2 00:28:30.214892 containerd[1459]: time="2024-07-02T00:28:30.214834748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.373593559s" Jul 2 00:28:30.215638 containerd[1459]: time="2024-07-02T00:28:30.215586065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.377191128s" Jul 2 00:28:30.833391 update_engine[1445]: I0702 00:28:30.833328 1445 update_attempter.cc:509] Updating boot flags... Jul 2 00:28:30.849426 kubelet[2222]: W0702 00:28:30.849382 2222 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Jul 2 00:28:30.849426 kubelet[2222]: E0702 00:28:30.849427 2222 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.160:6443: connect: connection refused Jul 2 00:28:30.890693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2268) Jul 2 00:28:31.029307 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2272) Jul 2 00:28:31.098275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2272) Jul 2 00:28:31.334062 containerd[1459]: time="2024-07-02T00:28:31.333909060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:31.334062 containerd[1459]: time="2024-07-02T00:28:31.333993969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:31.334062 containerd[1459]: time="2024-07-02T00:28:31.334023537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:31.334631 containerd[1459]: time="2024-07-02T00:28:31.334040256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:31.338187 containerd[1459]: time="2024-07-02T00:28:31.335189502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:31.338187 containerd[1459]: time="2024-07-02T00:28:31.335276243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:31.338187 containerd[1459]: time="2024-07-02T00:28:31.335296782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:31.338187 containerd[1459]: time="2024-07-02T00:28:31.335311375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:31.360802 containerd[1459]: time="2024-07-02T00:28:31.360609595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:31.362068 containerd[1459]: time="2024-07-02T00:28:31.360920860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:31.364597 containerd[1459]: time="2024-07-02T00:28:31.364506218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:31.364597 containerd[1459]: time="2024-07-02T00:28:31.364560033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:31.366589 systemd[1]: Started cri-containerd-55d1085ade86ee77c252dc8487d67c97204909244e3260b00ab937d20e33f7d4.scope - libcontainer container 55d1085ade86ee77c252dc8487d67c97204909244e3260b00ab937d20e33f7d4. Jul 2 00:28:31.368660 systemd[1]: Started cri-containerd-ce118a9d0ebf3ffa605edaf114fb2315f3cc3ac736f38423f45a37fbc7790c15.scope - libcontainer container ce118a9d0ebf3ffa605edaf114fb2315f3cc3ac736f38423f45a37fbc7790c15. Jul 2 00:28:31.388466 systemd[1]: Started cri-containerd-f316cd40cf192ea71d1e192ea3f642855dd98576d1decdd6b6da5b92852cabce.scope - libcontainer container f316cd40cf192ea71d1e192ea3f642855dd98576d1decdd6b6da5b92852cabce. 
Jul 2 00:28:31.432266 containerd[1459]: time="2024-07-02T00:28:31.432210458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"55d1085ade86ee77c252dc8487d67c97204909244e3260b00ab937d20e33f7d4\"" Jul 2 00:28:31.433894 kubelet[2222]: E0702 00:28:31.433794 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:31.436196 containerd[1459]: time="2024-07-02T00:28:31.435523843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce118a9d0ebf3ffa605edaf114fb2315f3cc3ac736f38423f45a37fbc7790c15\"" Jul 2 00:28:31.436310 kubelet[2222]: E0702 00:28:31.436122 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:31.436714 containerd[1459]: time="2024-07-02T00:28:31.436685497Z" level=info msg="CreateContainer within sandbox \"55d1085ade86ee77c252dc8487d67c97204909244e3260b00ab937d20e33f7d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:28:31.438342 containerd[1459]: time="2024-07-02T00:28:31.438316515Z" level=info msg="CreateContainer within sandbox \"ce118a9d0ebf3ffa605edaf114fb2315f3cc3ac736f38423f45a37fbc7790c15\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:28:31.438821 containerd[1459]: time="2024-07-02T00:28:31.438800922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf9a3f5932cf3518b8e16be1366b5e36,Namespace:kube-system,Attempt:0,} returns sandbox id \"f316cd40cf192ea71d1e192ea3f642855dd98576d1decdd6b6da5b92852cabce\"" Jul 2 00:28:31.439740 
kubelet[2222]: E0702 00:28:31.439719 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:31.441413 containerd[1459]: time="2024-07-02T00:28:31.441387144Z" level=info msg="CreateContainer within sandbox \"f316cd40cf192ea71d1e192ea3f642855dd98576d1decdd6b6da5b92852cabce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:28:31.751666 containerd[1459]: time="2024-07-02T00:28:31.751607517Z" level=info msg="CreateContainer within sandbox \"55d1085ade86ee77c252dc8487d67c97204909244e3260b00ab937d20e33f7d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"baee156f3dddbc48f9efed189bd2d36b34c6c783c7eb7d138c85f701b2a4b9b7\"" Jul 2 00:28:31.752413 containerd[1459]: time="2024-07-02T00:28:31.752365381Z" level=info msg="StartContainer for \"baee156f3dddbc48f9efed189bd2d36b34c6c783c7eb7d138c85f701b2a4b9b7\"" Jul 2 00:28:31.753797 containerd[1459]: time="2024-07-02T00:28:31.753758284Z" level=info msg="CreateContainer within sandbox \"ce118a9d0ebf3ffa605edaf114fb2315f3cc3ac736f38423f45a37fbc7790c15\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2f8a71e490deafa5e95c5ff56a6e9118c90b715f704b8083d1aa6f73d67b247b\"" Jul 2 00:28:31.754120 containerd[1459]: time="2024-07-02T00:28:31.754073588Z" level=info msg="StartContainer for \"2f8a71e490deafa5e95c5ff56a6e9118c90b715f704b8083d1aa6f73d67b247b\"" Jul 2 00:28:31.755898 containerd[1459]: time="2024-07-02T00:28:31.755856760Z" level=info msg="CreateContainer within sandbox \"f316cd40cf192ea71d1e192ea3f642855dd98576d1decdd6b6da5b92852cabce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b913ae87a0a2e9074e6b83cf01329c635506ec95e407423881cc0d77c7f91aa\"" Jul 2 00:28:31.756190 containerd[1459]: time="2024-07-02T00:28:31.756167393Z" level=info msg="StartContainer for 
\"0b913ae87a0a2e9074e6b83cf01329c635506ec95e407423881cc0d77c7f91aa\"" Jul 2 00:28:31.780474 systemd[1]: Started cri-containerd-2f8a71e490deafa5e95c5ff56a6e9118c90b715f704b8083d1aa6f73d67b247b.scope - libcontainer container 2f8a71e490deafa5e95c5ff56a6e9118c90b715f704b8083d1aa6f73d67b247b. Jul 2 00:28:31.783847 systemd[1]: Started cri-containerd-baee156f3dddbc48f9efed189bd2d36b34c6c783c7eb7d138c85f701b2a4b9b7.scope - libcontainer container baee156f3dddbc48f9efed189bd2d36b34c6c783c7eb7d138c85f701b2a4b9b7. Jul 2 00:28:31.787809 systemd[1]: Started cri-containerd-0b913ae87a0a2e9074e6b83cf01329c635506ec95e407423881cc0d77c7f91aa.scope - libcontainer container 0b913ae87a0a2e9074e6b83cf01329c635506ec95e407423881cc0d77c7f91aa. Jul 2 00:28:31.838525 containerd[1459]: time="2024-07-02T00:28:31.838462335Z" level=info msg="StartContainer for \"2f8a71e490deafa5e95c5ff56a6e9118c90b715f704b8083d1aa6f73d67b247b\" returns successfully" Jul 2 00:28:31.838694 containerd[1459]: time="2024-07-02T00:28:31.838601789Z" level=info msg="StartContainer for \"baee156f3dddbc48f9efed189bd2d36b34c6c783c7eb7d138c85f701b2a4b9b7\" returns successfully" Jul 2 00:28:31.846887 containerd[1459]: time="2024-07-02T00:28:31.846840160Z" level=info msg="StartContainer for \"0b913ae87a0a2e9074e6b83cf01329c635506ec95e407423881cc0d77c7f91aa\" returns successfully" Jul 2 00:28:32.018501 kubelet[2222]: E0702 00:28:32.018384 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:32.020269 kubelet[2222]: E0702 00:28:32.020217 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:32.030464 kubelet[2222]: E0702 00:28:32.030344 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:32.707300 kubelet[2222]: E0702 00:28:32.706744 2222 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:28:33.031965 kubelet[2222]: E0702 00:28:33.031787 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:33.031965 kubelet[2222]: E0702 00:28:33.031904 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:33.927941 kubelet[2222]: E0702 00:28:33.927892 2222 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 00:28:34.032469 kubelet[2222]: E0702 00:28:34.032439 2222 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:34.282986 kubelet[2222]: E0702 00:28:34.282859 2222 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 00:28:34.605559 kubelet[2222]: E0702 00:28:34.605433 2222 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 00:28:34.688106 kubelet[2222]: I0702 00:28:34.688046 2222 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:28:34.741103 kubelet[2222]: I0702 00:28:34.741057 2222 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:28:34.773102 kubelet[2222]: E0702 00:28:34.773059 2222 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jul 2 00:28:34.873845 kubelet[2222]: E0702 00:28:34.873665 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:34.974589 kubelet[2222]: E0702 00:28:34.974545 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.075442 kubelet[2222]: E0702 00:28:35.075395 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.175875 kubelet[2222]: E0702 00:28:35.175830 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.276393 kubelet[2222]: E0702 00:28:35.276349 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.376907 kubelet[2222]: E0702 00:28:35.376857 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.477465 kubelet[2222]: E0702 00:28:35.477333 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.578342 kubelet[2222]: E0702 00:28:35.578294 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.679323 kubelet[2222]: E0702 00:28:35.679271 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.780324 kubelet[2222]: E0702 00:28:35.780164 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.881211 kubelet[2222]: E0702 00:28:35.881159 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:35.981503 kubelet[2222]: E0702 
00:28:35.981435 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:36.015178 systemd[1]: Reloading requested from client PID 2514 ('systemctl') (unit session-9.scope)... Jul 2 00:28:36.015192 systemd[1]: Reloading... Jul 2 00:28:36.082116 kubelet[2222]: E0702 00:28:36.081994 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:36.091273 zram_generator::config[2554]: No configuration found. Jul 2 00:28:36.182822 kubelet[2222]: E0702 00:28:36.182743 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:36.233468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:28:36.283339 kubelet[2222]: E0702 00:28:36.283292 2222 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:28:36.325549 systemd[1]: Reloading finished in 309 ms. Jul 2 00:28:36.372082 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:36.387871 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:28:36.388177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:36.397603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:28:36.539220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:28:36.544140 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:28:36.592937 kubelet[2596]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:28:36.594255 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:28:36.594255 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:28:36.594255 kubelet[2596]: I0702 00:28:36.593432 2596 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:28:36.598046 kubelet[2596]: I0702 00:28:36.598018 2596 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:28:36.598046 kubelet[2596]: I0702 00:28:36.598039 2596 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:28:36.598228 kubelet[2596]: I0702 00:28:36.598206 2596 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:28:36.599290 kubelet[2596]: I0702 00:28:36.599267 2596 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:28:36.600259 kubelet[2596]: I0702 00:28:36.600221 2596 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:28:36.612231 kubelet[2596]: I0702 00:28:36.612198 2596 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:28:36.612502 kubelet[2596]: I0702 00:28:36.612460 2596 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:28:36.612648 kubelet[2596]: I0702 00:28:36.612497 2596 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:28:36.612724 kubelet[2596]: I0702 00:28:36.612660 2596 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:28:36.612724 
kubelet[2596]: I0702 00:28:36.612669 2596 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:28:36.612724 kubelet[2596]: I0702 00:28:36.612709 2596 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:28:36.612840 kubelet[2596]: I0702 00:28:36.612828 2596 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:28:36.612880 kubelet[2596]: I0702 00:28:36.612843 2596 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:28:36.612880 kubelet[2596]: I0702 00:28:36.612871 2596 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:28:36.612945 kubelet[2596]: I0702 00:28:36.612890 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:28:36.617198 kubelet[2596]: I0702 00:28:36.617173 2596 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:28:36.617529 kubelet[2596]: I0702 00:28:36.617491 2596 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:28:36.620266 kubelet[2596]: I0702 00:28:36.618132 2596 server.go:1264] "Started kubelet" Jul 2 00:28:36.620266 kubelet[2596]: I0702 00:28:36.618298 2596 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:28:36.620266 kubelet[2596]: I0702 00:28:36.618561 2596 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:28:36.620266 kubelet[2596]: I0702 00:28:36.618873 2596 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:28:36.620266 kubelet[2596]: I0702 00:28:36.619286 2596 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:28:36.620679 kubelet[2596]: I0702 00:28:36.620561 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:28:36.620803 kubelet[2596]: I0702 00:28:36.620783 2596 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Jul 2 00:28:36.621365 kubelet[2596]: I0702 00:28:36.621342 2596 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:28:36.621528 kubelet[2596]: I0702 00:28:36.621508 2596 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:28:36.623990 kubelet[2596]: I0702 00:28:36.623851 2596 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:28:36.623990 kubelet[2596]: I0702 00:28:36.623975 2596 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:28:36.625826 kubelet[2596]: I0702 00:28:36.625398 2596 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:28:36.627619 kubelet[2596]: E0702 00:28:36.626426 2596 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:28:36.638545 kubelet[2596]: I0702 00:28:36.638498 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:28:36.640158 kubelet[2596]: I0702 00:28:36.639740 2596 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:28:36.640158 kubelet[2596]: I0702 00:28:36.639761 2596 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:28:36.640158 kubelet[2596]: I0702 00:28:36.639775 2596 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:28:36.640158 kubelet[2596]: E0702 00:28:36.639814 2596 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:28:36.659901 kubelet[2596]: I0702 00:28:36.659864 2596 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:28:36.659901 kubelet[2596]: I0702 00:28:36.659891 2596 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:28:36.659901 kubelet[2596]: I0702 00:28:36.659908 2596 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:28:36.660067 kubelet[2596]: I0702 00:28:36.660034 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:28:36.660067 kubelet[2596]: I0702 00:28:36.660043 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:28:36.660067 kubelet[2596]: I0702 00:28:36.660058 2596 policy_none.go:49] "None policy: Start" Jul 2 00:28:36.660471 kubelet[2596]: I0702 00:28:36.660445 2596 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:28:36.660471 kubelet[2596]: I0702 00:28:36.660461 2596 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:28:36.660608 kubelet[2596]: I0702 00:28:36.660594 2596 state_mem.go:75] "Updated machine memory state" Jul 2 00:28:36.664130 kubelet[2596]: I0702 00:28:36.664054 2596 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:28:36.664338 kubelet[2596]: I0702 00:28:36.664292 2596 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:28:36.664564 kubelet[2596]: I0702 00:28:36.664404 2596 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:28:36.724692 kubelet[2596]: I0702 00:28:36.724664 2596 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:28:36.740253 kubelet[2596]: I0702 00:28:36.740198 2596 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:28:36.740377 kubelet[2596]: I0702 00:28:36.740309 2596 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:28:36.740377 kubelet[2596]: I0702 00:28:36.740359 2596 topology_manager.go:215] "Topology Admit Handler" podUID="bf9a3f5932cf3518b8e16be1366b5e36" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:28:36.742298 kubelet[2596]: I0702 00:28:36.742278 2596 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 00:28:36.742355 kubelet[2596]: I0702 00:28:36.742340 2596 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:28:36.822845 kubelet[2596]: I0702 00:28:36.822799 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:36.822845 kubelet[2596]: I0702 00:28:36.822832 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:36.822845 kubelet[2596]: I0702 
00:28:36.822854 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:36.823058 kubelet[2596]: I0702 00:28:36.822870 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:28:36.823058 kubelet[2596]: I0702 00:28:36.822884 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf9a3f5932cf3518b8e16be1366b5e36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf9a3f5932cf3518b8e16be1366b5e36\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:36.823058 kubelet[2596]: I0702 00:28:36.822899 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf9a3f5932cf3518b8e16be1366b5e36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf9a3f5932cf3518b8e16be1366b5e36\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:36.823058 kubelet[2596]: I0702 00:28:36.822912 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:36.823058 kubelet[2596]: I0702 00:28:36.822927 2596 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:28:36.823172 kubelet[2596]: I0702 00:28:36.822951 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf9a3f5932cf3518b8e16be1366b5e36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf9a3f5932cf3518b8e16be1366b5e36\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:37.053430 kubelet[2596]: E0702 00:28:37.053395 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:37.053430 kubelet[2596]: E0702 00:28:37.053413 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:37.053430 kubelet[2596]: E0702 00:28:37.053437 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:37.423423 sudo[2633]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:28:37.423748 sudo[2633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:28:37.614174 kubelet[2596]: I0702 00:28:37.614129 2596 apiserver.go:52] "Watching apiserver" Jul 2 00:28:37.621492 kubelet[2596]: I0702 00:28:37.621450 2596 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:28:37.648547 kubelet[2596]: E0702 
00:28:37.647836 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:37.648864 kubelet[2596]: E0702 00:28:37.648836 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:37.668660 kubelet[2596]: I0702 00:28:37.668588 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.668573291 podStartE2EDuration="1.668573291s" podCreationTimestamp="2024-07-02 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:37.66838783 +0000 UTC m=+1.119710736" watchObservedRunningTime="2024-07-02 00:28:37.668573291 +0000 UTC m=+1.119896197" Jul 2 00:28:37.697178 kubelet[2596]: E0702 00:28:37.697034 2596 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:28:37.697499 kubelet[2596]: E0702 00:28:37.697470 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:37.952029 kubelet[2596]: I0702 00:28:37.951052 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9510309110000001 podStartE2EDuration="1.951030911s" podCreationTimestamp="2024-07-02 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:37.730733406 +0000 UTC m=+1.182056312" watchObservedRunningTime="2024-07-02 00:28:37.951030911 +0000 UTC 
m=+1.402353817" Jul 2 00:28:37.968531 sudo[2633]: pam_unix(sudo:session): session closed for user root Jul 2 00:28:38.005650 kubelet[2596]: I0702 00:28:38.005538 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.005514426 podStartE2EDuration="2.005514426s" podCreationTimestamp="2024-07-02 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:37.951367397 +0000 UTC m=+1.402690303" watchObservedRunningTime="2024-07-02 00:28:38.005514426 +0000 UTC m=+1.456837332" Jul 2 00:28:38.648600 kubelet[2596]: E0702 00:28:38.648561 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:39.934687 sudo[1653]: pam_unix(sudo:session): session closed for user root Jul 2 00:28:39.936833 sshd[1647]: pam_unix(sshd:session): session closed for user core Jul 2 00:28:39.939936 systemd[1]: sshd@8-10.0.0.160:22-10.0.0.1:43366.service: Deactivated successfully. Jul 2 00:28:39.942121 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:28:39.942362 systemd[1]: session-9.scope: Consumed 5.049s CPU time, 140.6M memory peak, 0B memory swap peak. Jul 2 00:28:39.944012 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:28:39.945083 systemd-logind[1439]: Removed session 9. 
Jul 2 00:28:41.551662 kubelet[2596]: E0702 00:28:41.551602 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:41.654555 kubelet[2596]: E0702 00:28:41.654513 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:42.012617 kubelet[2596]: E0702 00:28:42.012552 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:42.656271 kubelet[2596]: E0702 00:28:42.656221 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:46.667266 kubelet[2596]: E0702 00:28:46.667181 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:47.662381 kubelet[2596]: E0702 00:28:47.662351 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:51.666793 kubelet[2596]: I0702 00:28:51.666750 2596 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:28:51.667556 containerd[1459]: time="2024-07-02T00:28:51.667495234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:28:51.667939 kubelet[2596]: I0702 00:28:51.667679 2596 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:28:52.520364 kubelet[2596]: I0702 00:28:52.519456 2596 topology_manager.go:215] "Topology Admit Handler" podUID="4e3e385b-a424-44f7-abb5-53c759ffb025" podNamespace="kube-system" podName="kube-proxy-d7m28" Jul 2 00:28:52.522045 kubelet[2596]: I0702 00:28:52.522002 2596 topology_manager.go:215] "Topology Admit Handler" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" podNamespace="kube-system" podName="cilium-vrg9k" Jul 2 00:28:52.530328 systemd[1]: Created slice kubepods-besteffort-pod4e3e385b_a424_44f7_abb5_53c759ffb025.slice - libcontainer container kubepods-besteffort-pod4e3e385b_a424_44f7_abb5_53c759ffb025.slice. Jul 2 00:28:52.541028 systemd[1]: Created slice kubepods-burstable-pod3149db9c_7900_459d_892a_d7bf357fc1d6.slice - libcontainer container kubepods-burstable-pod3149db9c_7900_459d_892a_d7bf357fc1d6.slice. Jul 2 00:28:52.546916 kubelet[2596]: I0702 00:28:52.546891 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e3e385b-a424-44f7-abb5-53c759ffb025-kube-proxy\") pod \"kube-proxy-d7m28\" (UID: \"4e3e385b-a424-44f7-abb5-53c759ffb025\") " pod="kube-system/kube-proxy-d7m28" Jul 2 00:28:52.546994 kubelet[2596]: I0702 00:28:52.546920 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e3e385b-a424-44f7-abb5-53c759ffb025-xtables-lock\") pod \"kube-proxy-d7m28\" (UID: \"4e3e385b-a424-44f7-abb5-53c759ffb025\") " pod="kube-system/kube-proxy-d7m28" Jul 2 00:28:52.546994 kubelet[2596]: I0702 00:28:52.546938 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4e3e385b-a424-44f7-abb5-53c759ffb025-lib-modules\") pod \"kube-proxy-d7m28\" (UID: \"4e3e385b-a424-44f7-abb5-53c759ffb025\") " pod="kube-system/kube-proxy-d7m28" Jul 2 00:28:52.546994 kubelet[2596]: I0702 00:28:52.546952 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-bpf-maps\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.546994 kubelet[2596]: I0702 00:28:52.546969 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-cgroup\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.546994 kubelet[2596]: I0702 00:28:52.546983 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-run\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.547131 kubelet[2596]: I0702 00:28:52.546998 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hwc\" (UniqueName: \"kubernetes.io/projected/4e3e385b-a424-44f7-abb5-53c759ffb025-kube-api-access-c6hwc\") pod \"kube-proxy-d7m28\" (UID: \"4e3e385b-a424-44f7-abb5-53c759ffb025\") " pod="kube-system/kube-proxy-d7m28" Jul 2 00:28:52.547131 kubelet[2596]: I0702 00:28:52.547014 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-hostproc\") pod \"cilium-vrg9k\" (UID: 
\"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.547131 kubelet[2596]: I0702 00:28:52.547029 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cni-path\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648023 kubelet[2596]: I0702 00:28:52.647986 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-config-path\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648191 kubelet[2596]: I0702 00:28:52.648049 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-net\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648262 kubelet[2596]: I0702 00:28:52.648211 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-hubble-tls\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648302 kubelet[2596]: I0702 00:28:52.648282 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-etc-cni-netd\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648391 kubelet[2596]: I0702 00:28:52.648342 2596 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3149db9c-7900-459d-892a-d7bf357fc1d6-clustermesh-secrets\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648391 kubelet[2596]: I0702 00:28:52.648375 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-kernel\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648557 kubelet[2596]: I0702 00:28:52.648412 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svghq\" (UniqueName: \"kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-kube-api-access-svghq\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648557 kubelet[2596]: I0702 00:28:52.648488 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-lib-modules\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.648557 kubelet[2596]: I0702 00:28:52.648511 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-xtables-lock\") pod \"cilium-vrg9k\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") " pod="kube-system/cilium-vrg9k" Jul 2 00:28:52.664925 kubelet[2596]: I0702 00:28:52.664875 2596 topology_manager.go:215] "Topology Admit Handler" podUID="e3bb11d0-6216-43fc-b37b-11caa1099265" 
podNamespace="kube-system" podName="cilium-operator-599987898-dgdwf" Jul 2 00:28:52.677532 systemd[1]: Created slice kubepods-besteffort-pode3bb11d0_6216_43fc_b37b_11caa1099265.slice - libcontainer container kubepods-besteffort-pode3bb11d0_6216_43fc_b37b_11caa1099265.slice. Jul 2 00:28:52.838661 kubelet[2596]: E0702 00:28:52.838534 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:52.839293 containerd[1459]: time="2024-07-02T00:28:52.839215301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7m28,Uid:4e3e385b-a424-44f7-abb5-53c759ffb025,Namespace:kube-system,Attempt:0,}" Jul 2 00:28:52.845051 kubelet[2596]: E0702 00:28:52.845013 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:52.845642 containerd[1459]: time="2024-07-02T00:28:52.845575925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrg9k,Uid:3149db9c-7900-459d-892a-d7bf357fc1d6,Namespace:kube-system,Attempt:0,}" Jul 2 00:28:52.849606 kubelet[2596]: I0702 00:28:52.849573 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3bb11d0-6216-43fc-b37b-11caa1099265-cilium-config-path\") pod \"cilium-operator-599987898-dgdwf\" (UID: \"e3bb11d0-6216-43fc-b37b-11caa1099265\") " pod="kube-system/cilium-operator-599987898-dgdwf" Jul 2 00:28:52.849699 kubelet[2596]: I0702 00:28:52.849612 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gx8\" (UniqueName: \"kubernetes.io/projected/e3bb11d0-6216-43fc-b37b-11caa1099265-kube-api-access-k6gx8\") pod \"cilium-operator-599987898-dgdwf\" (UID: \"e3bb11d0-6216-43fc-b37b-11caa1099265\") " 
pod="kube-system/cilium-operator-599987898-dgdwf" Jul 2 00:28:52.874184 containerd[1459]: time="2024-07-02T00:28:52.871892969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:52.874184 containerd[1459]: time="2024-07-02T00:28:52.872047267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:52.874184 containerd[1459]: time="2024-07-02T00:28:52.872070285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:52.874184 containerd[1459]: time="2024-07-02T00:28:52.872083482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:52.877115 containerd[1459]: time="2024-07-02T00:28:52.875804979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:52.877115 containerd[1459]: time="2024-07-02T00:28:52.875897741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:52.877115 containerd[1459]: time="2024-07-02T00:28:52.875918714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:52.877115 containerd[1459]: time="2024-07-02T00:28:52.875931610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:52.890373 systemd[1]: Started cri-containerd-c3805615557d09b0b95f7beb0069a83e2c1a68286e923c69aef6c35ce5528f33.scope - libcontainer container c3805615557d09b0b95f7beb0069a83e2c1a68286e923c69aef6c35ce5528f33. 
Jul 2 00:28:52.894167 systemd[1]: Started cri-containerd-aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29.scope - libcontainer container aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29. Jul 2 00:28:52.916017 containerd[1459]: time="2024-07-02T00:28:52.915969509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7m28,Uid:4e3e385b-a424-44f7-abb5-53c759ffb025,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3805615557d09b0b95f7beb0069a83e2c1a68286e923c69aef6c35ce5528f33\"" Jul 2 00:28:52.916868 kubelet[2596]: E0702 00:28:52.916835 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:52.920864 containerd[1459]: time="2024-07-02T00:28:52.920825772Z" level=info msg="CreateContainer within sandbox \"c3805615557d09b0b95f7beb0069a83e2c1a68286e923c69aef6c35ce5528f33\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:28:52.924331 containerd[1459]: time="2024-07-02T00:28:52.924276157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrg9k,Uid:3149db9c-7900-459d-892a-d7bf357fc1d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\"" Jul 2 00:28:52.927064 kubelet[2596]: E0702 00:28:52.926051 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:52.928959 containerd[1459]: time="2024-07-02T00:28:52.928928538Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:28:52.947652 containerd[1459]: time="2024-07-02T00:28:52.947607947Z" level=info msg="CreateContainer within sandbox \"c3805615557d09b0b95f7beb0069a83e2c1a68286e923c69aef6c35ce5528f33\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b383cb8542d63b5ebd61743170ac8119ec875d8114f23b7d87c47acaa3465037\"" Jul 2 00:28:52.948186 containerd[1459]: time="2024-07-02T00:28:52.948143104Z" level=info msg="StartContainer for \"b383cb8542d63b5ebd61743170ac8119ec875d8114f23b7d87c47acaa3465037\"" Jul 2 00:28:52.976421 systemd[1]: Started cri-containerd-b383cb8542d63b5ebd61743170ac8119ec875d8114f23b7d87c47acaa3465037.scope - libcontainer container b383cb8542d63b5ebd61743170ac8119ec875d8114f23b7d87c47acaa3465037. Jul 2 00:28:52.981370 kubelet[2596]: E0702 00:28:52.981336 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:52.982283 containerd[1459]: time="2024-07-02T00:28:52.981815609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dgdwf,Uid:e3bb11d0-6216-43fc-b37b-11caa1099265,Namespace:kube-system,Attempt:0,}" Jul 2 00:28:53.010392 containerd[1459]: time="2024-07-02T00:28:53.010195928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:28:53.010795 containerd[1459]: time="2024-07-02T00:28:53.010419189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:53.010795 containerd[1459]: time="2024-07-02T00:28:53.010465204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:28:53.010795 containerd[1459]: time="2024-07-02T00:28:53.010482850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:28:53.012954 containerd[1459]: time="2024-07-02T00:28:53.012906378Z" level=info msg="StartContainer for \"b383cb8542d63b5ebd61743170ac8119ec875d8114f23b7d87c47acaa3465037\" returns successfully" Jul 2 00:28:53.035491 systemd[1]: Started cri-containerd-4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf.scope - libcontainer container 4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf. Jul 2 00:28:53.080782 containerd[1459]: time="2024-07-02T00:28:53.080664079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dgdwf,Uid:e3bb11d0-6216-43fc-b37b-11caa1099265,Namespace:kube-system,Attempt:0,} returns sandbox id \"4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf\"" Jul 2 00:28:53.081695 kubelet[2596]: E0702 00:28:53.081669 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:28:53.672869 kubelet[2596]: E0702 00:28:53.672835 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:02.620039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156228583.mount: Deactivated successfully. Jul 2 00:29:03.564618 systemd[1]: Started sshd@9-10.0.0.160:22-10.0.0.1:48192.service - OpenSSH per-connection server daemon (10.0.0.1:48192). Jul 2 00:29:03.633841 sshd[2984]: Accepted publickey for core from 10.0.0.1 port 48192 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:03.635314 sshd[2984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:03.641193 systemd-logind[1439]: New session 10 of user core. Jul 2 00:29:03.646361 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 00:29:03.851840 sshd[2984]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:03.855778 systemd[1]: sshd@9-10.0.0.160:22-10.0.0.1:48192.service: Deactivated successfully. Jul 2 00:29:03.857750 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:29:03.858504 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:29:03.859354 systemd-logind[1439]: Removed session 10. Jul 2 00:29:05.266957 containerd[1459]: time="2024-07-02T00:29:05.266893241Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:05.267773 containerd[1459]: time="2024-07-02T00:29:05.267707241Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735267" Jul 2 00:29:05.268896 containerd[1459]: time="2024-07-02T00:29:05.268856149Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:05.270525 containerd[1459]: time="2024-07-02T00:29:05.270485093Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.34150508s" Jul 2 00:29:05.270525 containerd[1459]: time="2024-07-02T00:29:05.270520394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 
00:29:05.276062 containerd[1459]: time="2024-07-02T00:29:05.276035396Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:29:05.295080 containerd[1459]: time="2024-07-02T00:29:05.295019351Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:29:05.312388 containerd[1459]: time="2024-07-02T00:29:05.312347705Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\"" Jul 2 00:29:05.315630 containerd[1459]: time="2024-07-02T00:29:05.315603281Z" level=info msg="StartContainer for \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\"" Jul 2 00:29:05.357390 systemd[1]: Started cri-containerd-d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9.scope - libcontainer container d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9. Jul 2 00:29:05.407363 systemd[1]: cri-containerd-d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9.scope: Deactivated successfully. 
Jul 2 00:29:05.710313 containerd[1459]: time="2024-07-02T00:29:05.710264972Z" level=info msg="StartContainer for \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\" returns successfully" Jul 2 00:29:05.714045 kubelet[2596]: E0702 00:29:05.714014 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:05.730884 kubelet[2596]: I0702 00:29:05.730830 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d7m28" podStartSLOduration=13.730814183 podStartE2EDuration="13.730814183s" podCreationTimestamp="2024-07-02 00:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:28:53.679598371 +0000 UTC m=+17.130921277" watchObservedRunningTime="2024-07-02 00:29:05.730814183 +0000 UTC m=+29.182137089" Jul 2 00:29:05.923169 containerd[1459]: time="2024-07-02T00:29:05.923097830Z" level=info msg="shim disconnected" id=d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9 namespace=k8s.io Jul 2 00:29:05.923169 containerd[1459]: time="2024-07-02T00:29:05.923164604Z" level=warning msg="cleaning up after shim disconnected" id=d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9 namespace=k8s.io Jul 2 00:29:05.923169 containerd[1459]: time="2024-07-02T00:29:05.923174555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:06.307261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9-rootfs.mount: Deactivated successfully. 
Jul 2 00:29:06.717481 kubelet[2596]: E0702 00:29:06.717452 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:06.720192 containerd[1459]: time="2024-07-02T00:29:06.720139126Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:29:06.742158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839133083.mount: Deactivated successfully. Jul 2 00:29:06.831112 containerd[1459]: time="2024-07-02T00:29:06.831044951Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\"" Jul 2 00:29:06.832206 containerd[1459]: time="2024-07-02T00:29:06.832123964Z" level=info msg="StartContainer for \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\"" Jul 2 00:29:06.838482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365746693.mount: Deactivated successfully. Jul 2 00:29:06.863377 systemd[1]: Started cri-containerd-52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6.scope - libcontainer container 52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6. Jul 2 00:29:06.891642 containerd[1459]: time="2024-07-02T00:29:06.891513126Z" level=info msg="StartContainer for \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\" returns successfully" Jul 2 00:29:06.901298 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:29:06.901558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:29:06.901642 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 00:29:06.911583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:29:06.911800 systemd[1]: cri-containerd-52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6.scope: Deactivated successfully. Jul 2 00:29:06.955170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:29:06.957470 containerd[1459]: time="2024-07-02T00:29:06.957381458Z" level=info msg="shim disconnected" id=52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6 namespace=k8s.io Jul 2 00:29:06.957470 containerd[1459]: time="2024-07-02T00:29:06.957440588Z" level=warning msg="cleaning up after shim disconnected" id=52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6 namespace=k8s.io Jul 2 00:29:06.957470 containerd[1459]: time="2024-07-02T00:29:06.957452251Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:07.161880 containerd[1459]: time="2024-07-02T00:29:07.161768758Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:07.162830 containerd[1459]: time="2024-07-02T00:29:07.162793058Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Jul 2 00:29:07.164131 containerd[1459]: time="2024-07-02T00:29:07.164105489Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:29:07.165409 containerd[1459]: time="2024-07-02T00:29:07.165364922Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo 
tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.889297903s" Jul 2 00:29:07.165459 containerd[1459]: time="2024-07-02T00:29:07.165409823Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 00:29:07.167307 containerd[1459]: time="2024-07-02T00:29:07.167263101Z" level=info msg="CreateContainer within sandbox \"4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:29:07.180044 containerd[1459]: time="2024-07-02T00:29:07.179982281Z" level=info msg="CreateContainer within sandbox \"4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\"" Jul 2 00:29:07.180524 containerd[1459]: time="2024-07-02T00:29:07.180492618Z" level=info msg="StartContainer for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\"" Jul 2 00:29:07.212492 systemd[1]: Started cri-containerd-409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5.scope - libcontainer container 409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5. 
Jul 2 00:29:07.240600 containerd[1459]: time="2024-07-02T00:29:07.240561291Z" level=info msg="StartContainer for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" returns successfully" Jul 2 00:29:07.721452 kubelet[2596]: E0702 00:29:07.721411 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.727201 kubelet[2596]: E0702 00:29:07.726885 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:07.727364 containerd[1459]: time="2024-07-02T00:29:07.727008317Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:29:07.759499 containerd[1459]: time="2024-07-02T00:29:07.757874077Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\"" Jul 2 00:29:07.761711 containerd[1459]: time="2024-07-02T00:29:07.759824761Z" level=info msg="StartContainer for \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\"" Jul 2 00:29:07.824453 systemd[1]: Started cri-containerd-cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758.scope - libcontainer container cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758. Jul 2 00:29:07.858736 systemd[1]: cri-containerd-cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758.scope: Deactivated successfully. 
Jul 2 00:29:07.933209 containerd[1459]: time="2024-07-02T00:29:07.933046507Z" level=info msg="StartContainer for \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\" returns successfully" Jul 2 00:29:08.306364 systemd[1]: run-containerd-runc-k8s.io-cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758-runc.DhW5kT.mount: Deactivated successfully. Jul 2 00:29:08.306477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758-rootfs.mount: Deactivated successfully. Jul 2 00:29:08.378729 containerd[1459]: time="2024-07-02T00:29:08.378661233Z" level=info msg="shim disconnected" id=cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758 namespace=k8s.io Jul 2 00:29:08.378729 containerd[1459]: time="2024-07-02T00:29:08.378722376Z" level=warning msg="cleaning up after shim disconnected" id=cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758 namespace=k8s.io Jul 2 00:29:08.378729 containerd[1459]: time="2024-07-02T00:29:08.378735422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:08.730605 kubelet[2596]: E0702 00:29:08.730356 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:08.730605 kubelet[2596]: E0702 00:29:08.730287 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:08.735264 containerd[1459]: time="2024-07-02T00:29:08.732550286Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:29:08.745908 kubelet[2596]: I0702 00:29:08.745830 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-599987898-dgdwf" podStartSLOduration=2.662222839 podStartE2EDuration="16.74581283s" podCreationTimestamp="2024-07-02 00:28:52 +0000 UTC" firstStartedPulling="2024-07-02 00:28:53.082404168 +0000 UTC m=+16.533727074" lastFinishedPulling="2024-07-02 00:29:07.165994159 +0000 UTC m=+30.617317065" observedRunningTime="2024-07-02 00:29:07.784429874 +0000 UTC m=+31.235752780" watchObservedRunningTime="2024-07-02 00:29:08.74581283 +0000 UTC m=+32.197135736" Jul 2 00:29:08.763129 containerd[1459]: time="2024-07-02T00:29:08.763078721Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\"" Jul 2 00:29:08.763747 containerd[1459]: time="2024-07-02T00:29:08.763594627Z" level=info msg="StartContainer for \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\"" Jul 2 00:29:08.792374 systemd[1]: Started cri-containerd-105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d.scope - libcontainer container 105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d. Jul 2 00:29:08.815978 systemd[1]: cri-containerd-105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d.scope: Deactivated successfully. 
Jul 2 00:29:08.818771 containerd[1459]: time="2024-07-02T00:29:08.818718383Z" level=info msg="StartContainer for \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\" returns successfully" Jul 2 00:29:08.843407 containerd[1459]: time="2024-07-02T00:29:08.843342232Z" level=info msg="shim disconnected" id=105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d namespace=k8s.io Jul 2 00:29:08.843407 containerd[1459]: time="2024-07-02T00:29:08.843404637Z" level=warning msg="cleaning up after shim disconnected" id=105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d namespace=k8s.io Jul 2 00:29:08.843648 containerd[1459]: time="2024-07-02T00:29:08.843415288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:08.864071 systemd[1]: Started sshd@10-10.0.0.160:22-10.0.0.1:43614.service - OpenSSH per-connection server daemon (10.0.0.1:43614). Jul 2 00:29:08.902398 sshd[3307]: Accepted publickey for core from 10.0.0.1 port 43614 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:08.903922 sshd[3307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:08.908372 systemd-logind[1439]: New session 11 of user core. Jul 2 00:29:08.915361 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:29:09.029964 sshd[3307]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:09.033904 systemd[1]: sshd@10-10.0.0.160:22-10.0.0.1:43614.service: Deactivated successfully. Jul 2 00:29:09.035881 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:29:09.036498 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:29:09.037424 systemd-logind[1439]: Removed session 11. Jul 2 00:29:09.307031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d-rootfs.mount: Deactivated successfully. 
Jul 2 00:29:09.734456 kubelet[2596]: E0702 00:29:09.734423 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:09.737136 containerd[1459]: time="2024-07-02T00:29:09.737093514Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:29:09.785371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102260996.mount: Deactivated successfully. Jul 2 00:29:09.790576 containerd[1459]: time="2024-07-02T00:29:09.790510662Z" level=info msg="CreateContainer within sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\"" Jul 2 00:29:09.792392 containerd[1459]: time="2024-07-02T00:29:09.791353614Z" level=info msg="StartContainer for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\"" Jul 2 00:29:09.826498 systemd[1]: Started cri-containerd-fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e.scope - libcontainer container fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e. 
Jul 2 00:29:09.858669 containerd[1459]: time="2024-07-02T00:29:09.858612995Z" level=info msg="StartContainer for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" returns successfully" Jul 2 00:29:10.029013 kubelet[2596]: I0702 00:29:10.027489 2596 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:29:10.046267 kubelet[2596]: I0702 00:29:10.043741 2596 topology_manager.go:215] "Topology Admit Handler" podUID="4172b203-42e7-4bb7-be41-a42a5f7dfb9d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nnqjr" Jul 2 00:29:10.046267 kubelet[2596]: I0702 00:29:10.045500 2596 topology_manager.go:215] "Topology Admit Handler" podUID="42d71f08-e1e2-4672-8f79-6ac7fc959285" podNamespace="kube-system" podName="coredns-7db6d8ff4d-675df" Jul 2 00:29:10.055394 systemd[1]: Created slice kubepods-burstable-pod4172b203_42e7_4bb7_be41_a42a5f7dfb9d.slice - libcontainer container kubepods-burstable-pod4172b203_42e7_4bb7_be41_a42a5f7dfb9d.slice. Jul 2 00:29:10.066008 systemd[1]: Created slice kubepods-burstable-pod42d71f08_e1e2_4672_8f79_6ac7fc959285.slice - libcontainer container kubepods-burstable-pod42d71f08_e1e2_4672_8f79_6ac7fc959285.slice. 
Jul 2 00:29:10.169967 kubelet[2596]: I0702 00:29:10.169901 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42d71f08-e1e2-4672-8f79-6ac7fc959285-config-volume\") pod \"coredns-7db6d8ff4d-675df\" (UID: \"42d71f08-e1e2-4672-8f79-6ac7fc959285\") " pod="kube-system/coredns-7db6d8ff4d-675df" Jul 2 00:29:10.169967 kubelet[2596]: I0702 00:29:10.169976 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52b66\" (UniqueName: \"kubernetes.io/projected/4172b203-42e7-4bb7-be41-a42a5f7dfb9d-kube-api-access-52b66\") pod \"coredns-7db6d8ff4d-nnqjr\" (UID: \"4172b203-42e7-4bb7-be41-a42a5f7dfb9d\") " pod="kube-system/coredns-7db6d8ff4d-nnqjr" Jul 2 00:29:10.170179 kubelet[2596]: I0702 00:29:10.170003 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrll\" (UniqueName: \"kubernetes.io/projected/42d71f08-e1e2-4672-8f79-6ac7fc959285-kube-api-access-skrll\") pod \"coredns-7db6d8ff4d-675df\" (UID: \"42d71f08-e1e2-4672-8f79-6ac7fc959285\") " pod="kube-system/coredns-7db6d8ff4d-675df" Jul 2 00:29:10.170179 kubelet[2596]: I0702 00:29:10.170026 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4172b203-42e7-4bb7-be41-a42a5f7dfb9d-config-volume\") pod \"coredns-7db6d8ff4d-nnqjr\" (UID: \"4172b203-42e7-4bb7-be41-a42a5f7dfb9d\") " pod="kube-system/coredns-7db6d8ff4d-nnqjr" Jul 2 00:29:10.361718 kubelet[2596]: E0702 00:29:10.361587 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:10.368946 kubelet[2596]: E0702 00:29:10.368908 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:10.369480 containerd[1459]: time="2024-07-02T00:29:10.369435223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-675df,Uid:42d71f08-e1e2-4672-8f79-6ac7fc959285,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:10.371176 containerd[1459]: time="2024-07-02T00:29:10.371121715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnqjr,Uid:4172b203-42e7-4bb7-be41-a42a5f7dfb9d,Namespace:kube-system,Attempt:0,}" Jul 2 00:29:10.738747 kubelet[2596]: E0702 00:29:10.738714 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:10.749443 kubelet[2596]: I0702 00:29:10.749117 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vrg9k" podStartSLOduration=6.403044821 podStartE2EDuration="18.749099135s" podCreationTimestamp="2024-07-02 00:28:52 +0000 UTC" firstStartedPulling="2024-07-02 00:28:52.928190732 +0000 UTC m=+16.379513638" lastFinishedPulling="2024-07-02 00:29:05.274245036 +0000 UTC m=+28.725567952" observedRunningTime="2024-07-02 00:29:10.748988754 +0000 UTC m=+34.200311670" watchObservedRunningTime="2024-07-02 00:29:10.749099135 +0000 UTC m=+34.200422061" Jul 2 00:29:11.745037 kubelet[2596]: E0702 00:29:11.744997 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:12.063554 systemd-networkd[1398]: cilium_host: Link UP Jul 2 00:29:12.063765 systemd-networkd[1398]: cilium_net: Link UP Jul 2 00:29:12.063770 systemd-networkd[1398]: cilium_net: Gained carrier Jul 2 00:29:12.064019 systemd-networkd[1398]: cilium_host: Gained carrier Jul 2 00:29:12.064307 systemd-networkd[1398]: cilium_host: Gained IPv6LL Jul 2 
00:29:12.159330 systemd-networkd[1398]: cilium_vxlan: Link UP Jul 2 00:29:12.159339 systemd-networkd[1398]: cilium_vxlan: Gained carrier Jul 2 00:29:12.363275 kernel: NET: Registered PF_ALG protocol family Jul 2 00:29:12.746321 kubelet[2596]: E0702 00:29:12.746291 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:12.904421 systemd-networkd[1398]: cilium_net: Gained IPv6LL Jul 2 00:29:13.003976 systemd-networkd[1398]: lxc_health: Link UP Jul 2 00:29:13.014348 systemd-networkd[1398]: lxc_health: Gained carrier Jul 2 00:29:13.188888 systemd-networkd[1398]: lxc1db16fa8b29b: Link UP Jul 2 00:29:13.195612 systemd-networkd[1398]: lxce0857ed696c5: Link UP Jul 2 00:29:13.205268 kernel: eth0: renamed from tmpf9a33 Jul 2 00:29:13.213274 kernel: eth0: renamed from tmpb180d Jul 2 00:29:13.212027 systemd-networkd[1398]: lxc1db16fa8b29b: Gained carrier Jul 2 00:29:13.217841 systemd-networkd[1398]: lxce0857ed696c5: Gained carrier Jul 2 00:29:13.608871 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Jul 2 00:29:14.043682 systemd[1]: Started sshd@11-10.0.0.160:22-10.0.0.1:43618.service - OpenSSH per-connection server daemon (10.0.0.1:43618). Jul 2 00:29:14.086431 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 43618 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:14.088093 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:14.092516 systemd-logind[1439]: New session 12 of user core. Jul 2 00:29:14.108385 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:29:14.227613 sshd[3835]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:14.232299 systemd[1]: sshd@11-10.0.0.160:22-10.0.0.1:43618.service: Deactivated successfully. Jul 2 00:29:14.235021 systemd[1]: session-12.scope: Deactivated successfully. 
Jul 2 00:29:14.236053 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:29:14.237337 systemd-logind[1439]: Removed session 12. Jul 2 00:29:14.312412 systemd-networkd[1398]: lxc1db16fa8b29b: Gained IPv6LL Jul 2 00:29:14.632367 systemd-networkd[1398]: lxce0857ed696c5: Gained IPv6LL Jul 2 00:29:14.632779 systemd-networkd[1398]: lxc_health: Gained IPv6LL Jul 2 00:29:14.848626 kubelet[2596]: E0702 00:29:14.848593 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:16.658436 containerd[1459]: time="2024-07-02T00:29:16.658323591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:16.658436 containerd[1459]: time="2024-07-02T00:29:16.658405014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:16.658840 containerd[1459]: time="2024-07-02T00:29:16.658427760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:16.659072 containerd[1459]: time="2024-07-02T00:29:16.659030522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:16.677572 systemd[1]: run-containerd-runc-k8s.io-b180d1ec92dddfbc4f140f1631fcf643f13d84f697746a3a56be3cbb3d1951aa-runc.TCBW9S.mount: Deactivated successfully. Jul 2 00:29:16.688050 systemd[1]: Started cri-containerd-b180d1ec92dddfbc4f140f1631fcf643f13d84f697746a3a56be3cbb3d1951aa.scope - libcontainer container b180d1ec92dddfbc4f140f1631fcf643f13d84f697746a3a56be3cbb3d1951aa. Jul 2 00:29:16.699003 containerd[1459]: time="2024-07-02T00:29:16.698846626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:29:16.699003 containerd[1459]: time="2024-07-02T00:29:16.698921466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:16.699003 containerd[1459]: time="2024-07-02T00:29:16.698956166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:29:16.699003 containerd[1459]: time="2024-07-02T00:29:16.698970093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:29:16.700089 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:29:16.718438 systemd[1]: Started cri-containerd-f9a338d321c9133f87faa2e66a82c6eb26fc6b2af96fe388e594b7edd62ee2b5.scope - libcontainer container f9a338d321c9133f87faa2e66a82c6eb26fc6b2af96fe388e594b7edd62ee2b5. 
Jul 2 00:29:16.731261 containerd[1459]: time="2024-07-02T00:29:16.731205760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-675df,Uid:42d71f08-e1e2-4672-8f79-6ac7fc959285,Namespace:kube-system,Attempt:0,} returns sandbox id \"b180d1ec92dddfbc4f140f1631fcf643f13d84f697746a3a56be3cbb3d1951aa\"" Jul 2 00:29:16.731924 kubelet[2596]: E0702 00:29:16.731730 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:16.732144 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:29:16.733766 containerd[1459]: time="2024-07-02T00:29:16.733691063Z" level=info msg="CreateContainer within sandbox \"b180d1ec92dddfbc4f140f1631fcf643f13d84f697746a3a56be3cbb3d1951aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:29:16.759549 containerd[1459]: time="2024-07-02T00:29:16.759513485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnqjr,Uid:4172b203-42e7-4bb7-be41-a42a5f7dfb9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a338d321c9133f87faa2e66a82c6eb26fc6b2af96fe388e594b7edd62ee2b5\"" Jul 2 00:29:16.760811 kubelet[2596]: E0702 00:29:16.760784 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:16.761622 kubelet[2596]: I0702 00:29:16.761589 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:29:16.763458 containerd[1459]: time="2024-07-02T00:29:16.762993513Z" level=info msg="CreateContainer within sandbox \"f9a338d321c9133f87faa2e66a82c6eb26fc6b2af96fe388e594b7edd62ee2b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:29:16.763535 kubelet[2596]: E0702 00:29:16.763419 2596 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:16.823019 containerd[1459]: time="2024-07-02T00:29:16.822967127Z" level=info msg="CreateContainer within sandbox \"f9a338d321c9133f87faa2e66a82c6eb26fc6b2af96fe388e594b7edd62ee2b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40334291e84651aec7f236ea438cdf3a4e22d765546676badb0a51285f999bb9\"" Jul 2 00:29:16.823554 containerd[1459]: time="2024-07-02T00:29:16.823514710Z" level=info msg="StartContainer for \"40334291e84651aec7f236ea438cdf3a4e22d765546676badb0a51285f999bb9\"" Jul 2 00:29:16.824690 containerd[1459]: time="2024-07-02T00:29:16.824534997Z" level=info msg="CreateContainer within sandbox \"b180d1ec92dddfbc4f140f1631fcf643f13d84f697746a3a56be3cbb3d1951aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f39e94e696f17f8b991153bf8dd1205e0f02743dcb0e11056edfc1ee0831f76\"" Jul 2 00:29:16.825214 containerd[1459]: time="2024-07-02T00:29:16.825190427Z" level=info msg="StartContainer for \"5f39e94e696f17f8b991153bf8dd1205e0f02743dcb0e11056edfc1ee0831f76\"" Jul 2 00:29:16.853370 systemd[1]: Started cri-containerd-40334291e84651aec7f236ea438cdf3a4e22d765546676badb0a51285f999bb9.scope - libcontainer container 40334291e84651aec7f236ea438cdf3a4e22d765546676badb0a51285f999bb9. Jul 2 00:29:16.855874 systemd[1]: Started cri-containerd-5f39e94e696f17f8b991153bf8dd1205e0f02743dcb0e11056edfc1ee0831f76.scope - libcontainer container 5f39e94e696f17f8b991153bf8dd1205e0f02743dcb0e11056edfc1ee0831f76. 
Jul 2 00:29:16.887171 containerd[1459]: time="2024-07-02T00:29:16.887127411Z" level=info msg="StartContainer for \"5f39e94e696f17f8b991153bf8dd1205e0f02743dcb0e11056edfc1ee0831f76\" returns successfully" Jul 2 00:29:16.887171 containerd[1459]: time="2024-07-02T00:29:16.887156338Z" level=info msg="StartContainer for \"40334291e84651aec7f236ea438cdf3a4e22d765546676badb0a51285f999bb9\" returns successfully" Jul 2 00:29:17.760012 kubelet[2596]: E0702 00:29:17.759911 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:17.764082 kubelet[2596]: E0702 00:29:17.764054 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:17.764206 kubelet[2596]: E0702 00:29:17.764185 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:17.768515 kubelet[2596]: I0702 00:29:17.768073 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-675df" podStartSLOduration=25.768055702 podStartE2EDuration="25.768055702s" podCreationTimestamp="2024-07-02 00:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:29:17.767836264 +0000 UTC m=+41.219159170" watchObservedRunningTime="2024-07-02 00:29:17.768055702 +0000 UTC m=+41.219378608" Jul 2 00:29:17.777259 kubelet[2596]: I0702 00:29:17.777190 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nnqjr" podStartSLOduration=25.777170671 podStartE2EDuration="25.777170671s" podCreationTimestamp="2024-07-02 00:28:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:29:17.777141544 +0000 UTC m=+41.228464460" watchObservedRunningTime="2024-07-02 00:29:17.777170671 +0000 UTC m=+41.228493577" Jul 2 00:29:18.765549 kubelet[2596]: E0702 00:29:18.765517 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:19.242114 systemd[1]: Started sshd@12-10.0.0.160:22-10.0.0.1:41202.service - OpenSSH per-connection server daemon (10.0.0.1:41202). Jul 2 00:29:19.280072 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 41202 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:19.281603 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:19.285466 systemd-logind[1439]: New session 13 of user core. Jul 2 00:29:19.299397 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:29:19.418611 sshd[4021]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:19.433390 systemd[1]: sshd@12-10.0.0.160:22-10.0.0.1:41202.service: Deactivated successfully. Jul 2 00:29:19.435317 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:29:19.437094 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:29:19.443524 systemd[1]: Started sshd@13-10.0.0.160:22-10.0.0.1:41228.service - OpenSSH per-connection server daemon (10.0.0.1:41228). Jul 2 00:29:19.444496 systemd-logind[1439]: Removed session 13. Jul 2 00:29:19.472382 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 41228 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:19.473785 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:19.477477 systemd-logind[1439]: New session 14 of user core. 
Jul 2 00:29:19.492367 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:29:19.704112 sshd[4036]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:19.715817 systemd[1]: sshd@13-10.0.0.160:22-10.0.0.1:41228.service: Deactivated successfully. Jul 2 00:29:19.719360 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:29:19.722122 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:29:19.730580 systemd[1]: Started sshd@14-10.0.0.160:22-10.0.0.1:41244.service - OpenSSH per-connection server daemon (10.0.0.1:41244). Jul 2 00:29:19.731565 systemd-logind[1439]: Removed session 14. Jul 2 00:29:19.764254 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 41244 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:19.766030 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:19.766790 kubelet[2596]: E0702 00:29:19.766760 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:19.770735 systemd-logind[1439]: New session 15 of user core. Jul 2 00:29:19.781400 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:29:19.897669 sshd[4050]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:19.901411 systemd[1]: sshd@14-10.0.0.160:22-10.0.0.1:41244.service: Deactivated successfully. Jul 2 00:29:19.903224 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:29:19.903892 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:29:19.904801 systemd-logind[1439]: Removed session 15. 
Jul 2 00:29:20.362804 kubelet[2596]: E0702 00:29:20.362718 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:20.768590 kubelet[2596]: E0702 00:29:20.768552 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:29:24.909522 systemd[1]: Started sshd@15-10.0.0.160:22-10.0.0.1:41258.service - OpenSSH per-connection server daemon (10.0.0.1:41258). Jul 2 00:29:24.943824 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 41258 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:24.945326 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:24.949423 systemd-logind[1439]: New session 16 of user core. Jul 2 00:29:24.956389 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:29:25.060854 sshd[4072]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:25.065073 systemd[1]: sshd@15-10.0.0.160:22-10.0.0.1:41258.service: Deactivated successfully. Jul 2 00:29:25.066897 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:29:25.067695 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:29:25.068739 systemd-logind[1439]: Removed session 16. Jul 2 00:29:30.074066 systemd[1]: Started sshd@16-10.0.0.160:22-10.0.0.1:59366.service - OpenSSH per-connection server daemon (10.0.0.1:59366). Jul 2 00:29:30.107880 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 59366 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:30.109402 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:30.113275 systemd-logind[1439]: New session 17 of user core. 
Jul 2 00:29:30.123362 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:29:30.226212 sshd[4090]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:30.238212 systemd[1]: sshd@16-10.0.0.160:22-10.0.0.1:59366.service: Deactivated successfully. Jul 2 00:29:30.240119 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:29:30.241840 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:29:30.248490 systemd[1]: Started sshd@17-10.0.0.160:22-10.0.0.1:59370.service - OpenSSH per-connection server daemon (10.0.0.1:59370). Jul 2 00:29:30.249380 systemd-logind[1439]: Removed session 17. Jul 2 00:29:30.278139 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 59370 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:30.279642 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:30.283587 systemd-logind[1439]: New session 18 of user core. Jul 2 00:29:30.290397 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:29:30.527745 sshd[4104]: pam_unix(sshd:session): session closed for user core Jul 2 00:29:30.541796 systemd[1]: sshd@17-10.0.0.160:22-10.0.0.1:59370.service: Deactivated successfully. Jul 2 00:29:30.544146 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:29:30.546274 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:29:30.554516 systemd[1]: Started sshd@18-10.0.0.160:22-10.0.0.1:59374.service - OpenSSH per-connection server daemon (10.0.0.1:59374). Jul 2 00:29:30.555472 systemd-logind[1439]: Removed session 18. Jul 2 00:29:30.591037 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 59374 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:29:30.592652 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:29:30.596703 systemd-logind[1439]: New session 19 of user core. 
Jul 2 00:29:30.610350 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:29:32.187411 sshd[4116]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:32.199600 systemd[1]: sshd@18-10.0.0.160:22-10.0.0.1:59374.service: Deactivated successfully.
Jul 2 00:29:32.201699 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:29:32.203350 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:29:32.210008 systemd[1]: Started sshd@19-10.0.0.160:22-10.0.0.1:59388.service - OpenSSH per-connection server daemon (10.0.0.1:59388).
Jul 2 00:29:32.212435 systemd-logind[1439]: Removed session 19.
Jul 2 00:29:32.243096 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 59388 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:32.244923 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:32.249342 systemd-logind[1439]: New session 20 of user core.
Jul 2 00:29:32.266548 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:29:32.500593 sshd[4139]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:32.513327 systemd[1]: sshd@19-10.0.0.160:22-10.0.0.1:59388.service: Deactivated successfully.
Jul 2 00:29:32.515399 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:29:32.517154 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:29:32.526593 systemd[1]: Started sshd@20-10.0.0.160:22-10.0.0.1:59396.service - OpenSSH per-connection server daemon (10.0.0.1:59396).
Jul 2 00:29:32.527514 systemd-logind[1439]: Removed session 20.
Jul 2 00:29:32.555177 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:32.556625 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:32.560223 systemd-logind[1439]: New session 21 of user core.
Jul 2 00:29:32.570375 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:29:32.693636 sshd[4153]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:32.697459 systemd[1]: sshd@20-10.0.0.160:22-10.0.0.1:59396.service: Deactivated successfully.
Jul 2 00:29:32.699718 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:29:32.700470 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:29:32.701364 systemd-logind[1439]: Removed session 21.
Jul 2 00:29:37.703930 systemd[1]: Started sshd@21-10.0.0.160:22-10.0.0.1:59404.service - OpenSSH per-connection server daemon (10.0.0.1:59404).
Jul 2 00:29:37.738076 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 59404 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:37.739586 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:37.743366 systemd-logind[1439]: New session 22 of user core.
Jul 2 00:29:37.755359 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:29:37.853807 sshd[4169]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:37.857291 systemd[1]: sshd@21-10.0.0.160:22-10.0.0.1:59404.service: Deactivated successfully.
Jul 2 00:29:37.859009 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:29:37.859538 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:29:37.860266 systemd-logind[1439]: Removed session 22.
Jul 2 00:29:42.865032 systemd[1]: Started sshd@22-10.0.0.160:22-10.0.0.1:50622.service - OpenSSH per-connection server daemon (10.0.0.1:50622).
Jul 2 00:29:42.897807 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 50622 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:42.899271 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:42.902719 systemd-logind[1439]: New session 23 of user core.
Jul 2 00:29:42.913388 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:29:43.048434 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:43.052348 systemd[1]: sshd@22-10.0.0.160:22-10.0.0.1:50622.service: Deactivated successfully.
Jul 2 00:29:43.054111 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:29:43.054688 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:29:43.055622 systemd-logind[1439]: Removed session 23.
Jul 2 00:29:43.640965 kubelet[2596]: E0702 00:29:43.640925 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:29:48.062291 systemd[1]: Started sshd@23-10.0.0.160:22-10.0.0.1:39050.service - OpenSSH per-connection server daemon (10.0.0.1:39050).
Jul 2 00:29:48.097933 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 39050 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:48.099512 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:48.103460 systemd-logind[1439]: New session 24 of user core.
Jul 2 00:29:48.116371 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:29:48.222641 sshd[4201]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:48.226868 systemd[1]: sshd@23-10.0.0.160:22-10.0.0.1:39050.service: Deactivated successfully.
Jul 2 00:29:48.228902 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:29:48.229782 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:29:48.230977 systemd-logind[1439]: Removed session 24.
Jul 2 00:29:50.641695 kubelet[2596]: E0702 00:29:50.641648 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:29:53.234058 systemd[1]: Started sshd@24-10.0.0.160:22-10.0.0.1:39066.service - OpenSSH per-connection server daemon (10.0.0.1:39066).
Jul 2 00:29:53.267216 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 39066 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:53.268568 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:53.272373 systemd-logind[1439]: New session 25 of user core.
Jul 2 00:29:53.286383 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:29:53.393140 sshd[4217]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:53.397133 systemd[1]: sshd@24-10.0.0.160:22-10.0.0.1:39066.service: Deactivated successfully.
Jul 2 00:29:53.399293 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:29:53.399838 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:29:53.400760 systemd-logind[1439]: Removed session 25.
Jul 2 00:29:58.404206 systemd[1]: Started sshd@25-10.0.0.160:22-10.0.0.1:45834.service - OpenSSH per-connection server daemon (10.0.0.1:45834).
Jul 2 00:29:58.437875 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 45834 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:58.439674 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:58.443778 systemd-logind[1439]: New session 26 of user core.
Jul 2 00:29:58.453381 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:29:58.558694 sshd[4231]: pam_unix(sshd:session): session closed for user core
Jul 2 00:29:58.575228 systemd[1]: sshd@25-10.0.0.160:22-10.0.0.1:45834.service: Deactivated successfully.
Jul 2 00:29:58.577142 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:29:58.578916 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:29:58.584456 systemd[1]: Started sshd@26-10.0.0.160:22-10.0.0.1:45836.service - OpenSSH per-connection server daemon (10.0.0.1:45836).
Jul 2 00:29:58.585591 systemd-logind[1439]: Removed session 26.
Jul 2 00:29:58.614189 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 45836 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:29:58.615756 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:29:58.619757 systemd-logind[1439]: New session 27 of user core.
Jul 2 00:29:58.626386 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:30:00.031543 containerd[1459]: time="2024-07-02T00:30:00.031484728Z" level=info msg="StopContainer for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" with timeout 30 (s)"
Jul 2 00:30:00.037219 containerd[1459]: time="2024-07-02T00:30:00.037191484Z" level=info msg="Stop container \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" with signal terminated"
Jul 2 00:30:00.050142 systemd[1]: cri-containerd-409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5.scope: Deactivated successfully.
Jul 2 00:30:00.059061 containerd[1459]: time="2024-07-02T00:30:00.058943602Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:30:00.066981 containerd[1459]: time="2024-07-02T00:30:00.066921488Z" level=info msg="StopContainer for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" with timeout 2 (s)"
Jul 2 00:30:00.067341 containerd[1459]: time="2024-07-02T00:30:00.067275786Z" level=info msg="Stop container \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" with signal terminated"
Jul 2 00:30:00.073975 systemd-networkd[1398]: lxc_health: Link DOWN
Jul 2 00:30:00.073983 systemd-networkd[1398]: lxc_health: Lost carrier
Jul 2 00:30:00.075566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5-rootfs.mount: Deactivated successfully.
Jul 2 00:30:00.086151 containerd[1459]: time="2024-07-02T00:30:00.086092033Z" level=info msg="shim disconnected" id=409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5 namespace=k8s.io
Jul 2 00:30:00.086151 containerd[1459]: time="2024-07-02T00:30:00.086142019Z" level=warning msg="cleaning up after shim disconnected" id=409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5 namespace=k8s.io
Jul 2 00:30:00.086151 containerd[1459]: time="2024-07-02T00:30:00.086150174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:00.108200 systemd[1]: cri-containerd-fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e.scope: Deactivated successfully.
Jul 2 00:30:00.108560 systemd[1]: cri-containerd-fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e.scope: Consumed 6.818s CPU time.
Jul 2 00:30:00.111401 containerd[1459]: time="2024-07-02T00:30:00.111370220Z" level=info msg="StopContainer for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" returns successfully"
Jul 2 00:30:00.116518 containerd[1459]: time="2024-07-02T00:30:00.116496777Z" level=info msg="StopPodSandbox for \"4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf\""
Jul 2 00:30:00.116589 containerd[1459]: time="2024-07-02T00:30:00.116533918Z" level=info msg="Container to stop \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:00.120136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf-shm.mount: Deactivated successfully.
Jul 2 00:30:00.122317 systemd[1]: cri-containerd-4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf.scope: Deactivated successfully.
Jul 2 00:30:00.127625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e-rootfs.mount: Deactivated successfully.
Jul 2 00:30:00.140680 containerd[1459]: time="2024-07-02T00:30:00.140625527Z" level=info msg="shim disconnected" id=fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e namespace=k8s.io
Jul 2 00:30:00.140906 containerd[1459]: time="2024-07-02T00:30:00.140879292Z" level=warning msg="cleaning up after shim disconnected" id=fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e namespace=k8s.io
Jul 2 00:30:00.140906 containerd[1459]: time="2024-07-02T00:30:00.140896606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:00.147308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf-rootfs.mount: Deactivated successfully.
Jul 2 00:30:00.149636 containerd[1459]: time="2024-07-02T00:30:00.149514163Z" level=info msg="shim disconnected" id=4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf namespace=k8s.io
Jul 2 00:30:00.149636 containerd[1459]: time="2024-07-02T00:30:00.149553339Z" level=warning msg="cleaning up after shim disconnected" id=4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf namespace=k8s.io
Jul 2 00:30:00.149636 containerd[1459]: time="2024-07-02T00:30:00.149562376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:00.155366 containerd[1459]: time="2024-07-02T00:30:00.155325650Z" level=info msg="StopContainer for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" returns successfully"
Jul 2 00:30:00.155804 containerd[1459]: time="2024-07-02T00:30:00.155772793Z" level=info msg="StopPodSandbox for \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\""
Jul 2 00:30:00.155862 containerd[1459]: time="2024-07-02T00:30:00.155822328Z" level=info msg="Container to stop \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:00.155862 containerd[1459]: time="2024-07-02T00:30:00.155858557Z" level=info msg="Container to stop \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:00.155910 containerd[1459]: time="2024-07-02T00:30:00.155867375Z" level=info msg="Container to stop \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:00.155910 containerd[1459]: time="2024-07-02T00:30:00.155876853Z" level=info msg="Container to stop \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:00.155910 containerd[1459]: time="2024-07-02T00:30:00.155885158Z" level=info msg="Container to stop \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:30:00.161558 systemd[1]: cri-containerd-aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29.scope: Deactivated successfully.
Jul 2 00:30:00.163253 containerd[1459]: time="2024-07-02T00:30:00.163208784Z" level=info msg="TearDown network for sandbox \"4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf\" successfully"
Jul 2 00:30:00.163318 containerd[1459]: time="2024-07-02T00:30:00.163253720Z" level=info msg="StopPodSandbox for \"4056e71be779b85ea79103024fd8573e9117016acc954d4a91878654e6fa9baf\" returns successfully"
Jul 2 00:30:00.187600 containerd[1459]: time="2024-07-02T00:30:00.187529962Z" level=info msg="shim disconnected" id=aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29 namespace=k8s.io
Jul 2 00:30:00.187600 containerd[1459]: time="2024-07-02T00:30:00.187588003Z" level=warning msg="cleaning up after shim disconnected" id=aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29 namespace=k8s.io
Jul 2 00:30:00.187600 containerd[1459]: time="2024-07-02T00:30:00.187598312Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:00.201934 containerd[1459]: time="2024-07-02T00:30:00.201871930Z" level=info msg="TearDown network for sandbox \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" successfully"
Jul 2 00:30:00.201934 containerd[1459]: time="2024-07-02T00:30:00.201909782Z" level=info msg="StopPodSandbox for \"aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29\" returns successfully"
Jul 2 00:30:00.321706 kubelet[2596]: I0702 00:30:00.321590 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-lib-modules\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.321706 kubelet[2596]: I0702 00:30:00.321628 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3bb11d0-6216-43fc-b37b-11caa1099265-cilium-config-path\") pod \"e3bb11d0-6216-43fc-b37b-11caa1099265\" (UID: \"e3bb11d0-6216-43fc-b37b-11caa1099265\") "
Jul 2 00:30:00.321706 kubelet[2596]: I0702 00:30:00.321644 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-net\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.321706 kubelet[2596]: I0702 00:30:00.321659 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svghq\" (UniqueName: \"kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-kube-api-access-svghq\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.321706 kubelet[2596]: I0702 00:30:00.321672 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-bpf-maps\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.321706 kubelet[2596]: I0702 00:30:00.321686 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-hubble-tls\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322206 kubelet[2596]: I0702 00:30:00.321698 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cni-path\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322206 kubelet[2596]: I0702 00:30:00.321697 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.322206 kubelet[2596]: I0702 00:30:00.321711 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-etc-cni-netd\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322206 kubelet[2596]: I0702 00:30:00.321724 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-hostproc\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322206 kubelet[2596]: I0702 00:30:00.321739 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-cgroup\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322206 kubelet[2596]: I0702 00:30:00.321752 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-run\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322387 kubelet[2596]: I0702 00:30:00.321734 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.322387 kubelet[2596]: I0702 00:30:00.321765 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-xtables-lock\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322387 kubelet[2596]: I0702 00:30:00.321779 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-kernel\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322387 kubelet[2596]: I0702 00:30:00.321793 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3149db9c-7900-459d-892a-d7bf357fc1d6-clustermesh-secrets\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322387 kubelet[2596]: I0702 00:30:00.321807 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6gx8\" (UniqueName: \"kubernetes.io/projected/e3bb11d0-6216-43fc-b37b-11caa1099265-kube-api-access-k6gx8\") pod \"e3bb11d0-6216-43fc-b37b-11caa1099265\" (UID: \"e3bb11d0-6216-43fc-b37b-11caa1099265\") "
Jul 2 00:30:00.322387 kubelet[2596]: I0702 00:30:00.321821 2596 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-config-path\") pod \"3149db9c-7900-459d-892a-d7bf357fc1d6\" (UID: \"3149db9c-7900-459d-892a-d7bf357fc1d6\") "
Jul 2 00:30:00.322526 kubelet[2596]: I0702 00:30:00.321844 2596 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.322526 kubelet[2596]: I0702 00:30:00.321853 2596 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.324641 kubelet[2596]: I0702 00:30:00.324622 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.324785 kubelet[2596]: I0702 00:30:00.324695 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cni-path" (OuterVolumeSpecName: "cni-path") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.324785 kubelet[2596]: I0702 00:30:00.324712 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.324785 kubelet[2596]: I0702 00:30:00.324727 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-hostproc" (OuterVolumeSpecName: "hostproc") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.325176 kubelet[2596]: I0702 00:30:00.324878 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.325176 kubelet[2596]: I0702 00:30:00.324902 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.325176 kubelet[2596]: I0702 00:30:00.324918 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.325176 kubelet[2596]: I0702 00:30:00.324932 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:30:00.325674 kubelet[2596]: I0702 00:30:00.325650 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:30:00.326095 kubelet[2596]: I0702 00:30:00.326065 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3bb11d0-6216-43fc-b37b-11caa1099265-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3bb11d0-6216-43fc-b37b-11caa1099265" (UID: "e3bb11d0-6216-43fc-b37b-11caa1099265"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:30:00.327298 kubelet[2596]: I0702 00:30:00.327267 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-kube-api-access-svghq" (OuterVolumeSpecName: "kube-api-access-svghq") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "kube-api-access-svghq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:30:00.327370 kubelet[2596]: I0702 00:30:00.327306 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:30:00.327868 kubelet[2596]: I0702 00:30:00.327847 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3149db9c-7900-459d-892a-d7bf357fc1d6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3149db9c-7900-459d-892a-d7bf357fc1d6" (UID: "3149db9c-7900-459d-892a-d7bf357fc1d6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:30:00.328389 kubelet[2596]: I0702 00:30:00.328364 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bb11d0-6216-43fc-b37b-11caa1099265-kube-api-access-k6gx8" (OuterVolumeSpecName: "kube-api-access-k6gx8") pod "e3bb11d0-6216-43fc-b37b-11caa1099265" (UID: "e3bb11d0-6216-43fc-b37b-11caa1099265"). InnerVolumeSpecName "kube-api-access-k6gx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:30:00.422912 kubelet[2596]: I0702 00:30:00.422876 2596 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.422912 kubelet[2596]: I0702 00:30:00.422899 2596 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.422912 kubelet[2596]: I0702 00:30:00.422908 2596 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.422912 kubelet[2596]: I0702 00:30:00.422918 2596 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422926 2596 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422935 2596 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3149db9c-7900-459d-892a-d7bf357fc1d6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422943 2596 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422952 2596 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422960 2596 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k6gx8\" (UniqueName: \"kubernetes.io/projected/e3bb11d0-6216-43fc-b37b-11caa1099265-kube-api-access-k6gx8\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422969 2596 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3149db9c-7900-459d-892a-d7bf357fc1d6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422977 2596 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3bb11d0-6216-43fc-b37b-11caa1099265-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423086 kubelet[2596]: I0702 00:30:00.422985 2596 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-svghq\" (UniqueName: \"kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-kube-api-access-svghq\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423305 kubelet[2596]: I0702 00:30:00.422992 2596 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3149db9c-7900-459d-892a-d7bf357fc1d6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.423305 kubelet[2596]: I0702 00:30:00.423009 2596 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3149db9c-7900-459d-892a-d7bf357fc1d6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:30:00.651391 systemd[1]: Removed slice kubepods-besteffort-pode3bb11d0_6216_43fc_b37b_11caa1099265.slice - libcontainer container kubepods-besteffort-pode3bb11d0_6216_43fc_b37b_11caa1099265.slice.
Jul 2 00:30:00.652959 systemd[1]: Removed slice kubepods-burstable-pod3149db9c_7900_459d_892a_d7bf357fc1d6.slice - libcontainer container kubepods-burstable-pod3149db9c_7900_459d_892a_d7bf357fc1d6.slice.
Jul 2 00:30:00.653059 systemd[1]: kubepods-burstable-pod3149db9c_7900_459d_892a_d7bf357fc1d6.slice: Consumed 6.913s CPU time.
Jul 2 00:30:00.841591 kubelet[2596]: I0702 00:30:00.841567 2596 scope.go:117] "RemoveContainer" containerID="409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5"
Jul 2 00:30:00.842932 containerd[1459]: time="2024-07-02T00:30:00.842836468Z" level=info msg="RemoveContainer for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\""
Jul 2 00:30:00.847168 containerd[1459]: time="2024-07-02T00:30:00.847134131Z" level=info msg="RemoveContainer for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" returns successfully"
Jul 2 00:30:00.847365 kubelet[2596]: I0702 00:30:00.847346 2596 scope.go:117] "RemoveContainer" containerID="409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5"
Jul 2 00:30:00.847554 containerd[1459]: time="2024-07-02T00:30:00.847516922Z" level=error msg="ContainerStatus for \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\": not found"
Jul 2 00:30:00.856435 kubelet[2596]: E0702 00:30:00.856404 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\": not found" containerID="409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5"
Jul 2 00:30:00.856881 kubelet[2596]: I0702 00:30:00.856662 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5"} err="failed to get container status \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"409cda35a16235dcafed224f894e21f57a88fd8893006fdb544e66f53144c2d5\": not found"
Jul 2 00:30:00.857266 kubelet[2596]: I0702 00:30:00.857044 2596 scope.go:117] "RemoveContainer" containerID="fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e"
Jul 2 00:30:00.858357 containerd[1459]: time="2024-07-02T00:30:00.858317572Z" level=info msg="RemoveContainer for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\""
Jul 2 00:30:00.861660 containerd[1459]: time="2024-07-02T00:30:00.861640232Z" level=info msg="RemoveContainer for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" returns successfully"
Jul 2 00:30:00.861785 kubelet[2596]: I0702 00:30:00.861769 2596 scope.go:117] "RemoveContainer" containerID="105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d"
Jul 2 00:30:00.862600 containerd[1459]: time="2024-07-02T00:30:00.862569448Z" level=info msg="RemoveContainer for \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\""
Jul 2 00:30:00.865747 containerd[1459]: time="2024-07-02T00:30:00.865723976Z" level=info msg="RemoveContainer for \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\" returns successfully"
Jul 2 00:30:00.865905 kubelet[2596]: I0702 00:30:00.865883 2596 scope.go:117] "RemoveContainer" containerID="cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758"
Jul 2 00:30:00.866688 containerd[1459]: time="2024-07-02T00:30:00.866658753Z" level=info msg="RemoveContainer for \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\""
Jul 2 00:30:00.869651 containerd[1459]: time="2024-07-02T00:30:00.869629580Z" level=info msg="RemoveContainer for
\"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\" returns successfully" Jul 2 00:30:00.869783 kubelet[2596]: I0702 00:30:00.869764 2596 scope.go:117] "RemoveContainer" containerID="52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6" Jul 2 00:30:00.870644 containerd[1459]: time="2024-07-02T00:30:00.870618811Z" level=info msg="RemoveContainer for \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\"" Jul 2 00:30:00.873652 containerd[1459]: time="2024-07-02T00:30:00.873621639Z" level=info msg="RemoveContainer for \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\" returns successfully" Jul 2 00:30:00.873753 kubelet[2596]: I0702 00:30:00.873733 2596 scope.go:117] "RemoveContainer" containerID="d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9" Jul 2 00:30:00.874502 containerd[1459]: time="2024-07-02T00:30:00.874480600Z" level=info msg="RemoveContainer for \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\"" Jul 2 00:30:00.879248 containerd[1459]: time="2024-07-02T00:30:00.877826615Z" level=info msg="RemoveContainer for \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\" returns successfully" Jul 2 00:30:00.879468 kubelet[2596]: I0702 00:30:00.879387 2596 scope.go:117] "RemoveContainer" containerID="fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e" Jul 2 00:30:00.879628 containerd[1459]: time="2024-07-02T00:30:00.879576839Z" level=error msg="ContainerStatus for \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\": not found" Jul 2 00:30:00.879924 kubelet[2596]: E0702 00:30:00.879835 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\": not found" containerID="fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e" Jul 2 00:30:00.879924 kubelet[2596]: I0702 00:30:00.879865 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e"} err="failed to get container status \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb2aa4438c7f3d9d2f5979d4348fc27d232766f8568986f7339b25654b12e77e\": not found" Jul 2 00:30:00.879924 kubelet[2596]: I0702 00:30:00.879887 2596 scope.go:117] "RemoveContainer" containerID="105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d" Jul 2 00:30:00.880055 containerd[1459]: time="2024-07-02T00:30:00.880034634Z" level=error msg="ContainerStatus for \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\": not found" Jul 2 00:30:00.880176 kubelet[2596]: E0702 00:30:00.880151 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\": not found" containerID="105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d" Jul 2 00:30:00.880216 kubelet[2596]: I0702 00:30:00.880183 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d"} err="failed to get container status \"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"105c2a059c8e602b5f0b709c670fed3a29a87620b171babcabc8b9dd9503883d\": not found" Jul 2 00:30:00.880270 kubelet[2596]: I0702 00:30:00.880216 2596 scope.go:117] "RemoveContainer" containerID="cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758" Jul 2 00:30:00.880404 containerd[1459]: time="2024-07-02T00:30:00.880367500Z" level=error msg="ContainerStatus for \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\": not found" Jul 2 00:30:00.880474 kubelet[2596]: E0702 00:30:00.880453 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\": not found" containerID="cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758" Jul 2 00:30:00.880507 kubelet[2596]: I0702 00:30:00.880477 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758"} err="failed to get container status \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\": rpc error: code = NotFound desc = an error occurred when try to find container \"cab72d64c67ec7cfe0b7e7ed38f6d509d3b6aadccec62914637d4a8b46d53758\": not found" Jul 2 00:30:00.880507 kubelet[2596]: I0702 00:30:00.880491 2596 scope.go:117] "RemoveContainer" containerID="52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6" Jul 2 00:30:00.880669 containerd[1459]: time="2024-07-02T00:30:00.880629701Z" level=error msg="ContainerStatus for \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\": not found" Jul 2 00:30:00.880759 kubelet[2596]: E0702 00:30:00.880736 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\": not found" containerID="52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6" Jul 2 00:30:00.880788 kubelet[2596]: I0702 00:30:00.880761 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6"} err="failed to get container status \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"52b36e967e09269754a01ffd19ef0db88b54b26d36d34b0ba84a7a799eef65b6\": not found" Jul 2 00:30:00.880788 kubelet[2596]: I0702 00:30:00.880777 2596 scope.go:117] "RemoveContainer" containerID="d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9" Jul 2 00:30:00.880940 containerd[1459]: time="2024-07-02T00:30:00.880909967Z" level=error msg="ContainerStatus for \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\": not found" Jul 2 00:30:00.881046 kubelet[2596]: E0702 00:30:00.881025 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\": not found" containerID="d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9" Jul 2 00:30:00.881101 kubelet[2596]: I0702 00:30:00.881049 2596 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9"} err="failed to get container status \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d74bfe42b8ede0451bf0bdf34678e30590f2d3cc434d86750b8b90ca928f9af9\": not found" Jul 2 00:30:01.043113 systemd[1]: var-lib-kubelet-pods-e3bb11d0\x2d6216\x2d43fc\x2db37b\x2d11caa1099265-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6gx8.mount: Deactivated successfully. Jul 2 00:30:01.043250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29-rootfs.mount: Deactivated successfully. Jul 2 00:30:01.043354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aeabb137c69c546dfa8786067c47dc6030235c9e05e00548f2ec5e364f03be29-shm.mount: Deactivated successfully. Jul 2 00:30:01.043458 systemd[1]: var-lib-kubelet-pods-3149db9c\x2d7900\x2d459d\x2d892a\x2dd7bf357fc1d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsvghq.mount: Deactivated successfully. Jul 2 00:30:01.043570 systemd[1]: var-lib-kubelet-pods-3149db9c\x2d7900\x2d459d\x2d892a\x2dd7bf357fc1d6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:30:01.043660 systemd[1]: var-lib-kubelet-pods-3149db9c\x2d7900\x2d459d\x2d892a\x2dd7bf357fc1d6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:30:01.678399 kubelet[2596]: E0702 00:30:01.678355 2596 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:30:02.007495 sshd[4246]: pam_unix(sshd:session): session closed for user core Jul 2 00:30:02.015906 systemd[1]: sshd@26-10.0.0.160:22-10.0.0.1:45836.service: Deactivated successfully. 
Jul 2 00:30:02.017589 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:30:02.018943 systemd-logind[1439]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:30:02.026517 systemd[1]: Started sshd@27-10.0.0.160:22-10.0.0.1:45840.service - OpenSSH per-connection server daemon (10.0.0.1:45840).
Jul 2 00:30:02.027340 systemd-logind[1439]: Removed session 27.
Jul 2 00:30:02.055568 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 45840 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:30:02.056907 sshd[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:02.060407 systemd-logind[1439]: New session 28 of user core.
Jul 2 00:30:02.074348 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:30:02.640591 kubelet[2596]: E0702 00:30:02.640556 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:02.643977 kubelet[2596]: I0702 00:30:02.643896 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" path="/var/lib/kubelet/pods/3149db9c-7900-459d-892a-d7bf357fc1d6/volumes"
Jul 2 00:30:02.644901 kubelet[2596]: I0702 00:30:02.644871 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3bb11d0-6216-43fc-b37b-11caa1099265" path="/var/lib/kubelet/pods/e3bb11d0-6216-43fc-b37b-11caa1099265/volumes"
Jul 2 00:30:02.695746 sshd[4407]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:02.697822 kubelet[2596]: I0702 00:30:02.697683 2596 topology_manager.go:215] "Topology Admit Handler" podUID="247422da-02f2-4d25-b7fc-4b3043ad1883" podNamespace="kube-system" podName="cilium-rhzdd"
Jul 2 00:30:02.697822 kubelet[2596]: E0702 00:30:02.697790 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" containerName="mount-cgroup"
Jul 2 00:30:02.697822 kubelet[2596]: E0702 00:30:02.697800 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" containerName="clean-cilium-state"
Jul 2 00:30:02.697822 kubelet[2596]: E0702 00:30:02.697808 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" containerName="cilium-agent"
Jul 2 00:30:02.697822 kubelet[2596]: E0702 00:30:02.697818 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" containerName="apply-sysctl-overwrites"
Jul 2 00:30:02.697822 kubelet[2596]: E0702 00:30:02.697825 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3bb11d0-6216-43fc-b37b-11caa1099265" containerName="cilium-operator"
Jul 2 00:30:02.705052 kubelet[2596]: E0702 00:30:02.697833 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" containerName="mount-bpf-fs"
Jul 2 00:30:02.705052 kubelet[2596]: I0702 00:30:02.697856 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bb11d0-6216-43fc-b37b-11caa1099265" containerName="cilium-operator"
Jul 2 00:30:02.705052 kubelet[2596]: I0702 00:30:02.697863 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="3149db9c-7900-459d-892a-d7bf357fc1d6" containerName="cilium-agent"
Jul 2 00:30:02.713679 systemd[1]: sshd@27-10.0.0.160:22-10.0.0.1:45840.service: Deactivated successfully.
Jul 2 00:30:02.717662 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:30:02.721555 systemd-logind[1439]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:30:02.732773 systemd[1]: Started sshd@28-10.0.0.160:22-10.0.0.1:45850.service - OpenSSH per-connection server daemon (10.0.0.1:45850).
Jul 2 00:30:02.734570 systemd-logind[1439]: Removed session 28.
Jul 2 00:30:02.736514 kubelet[2596]: I0702 00:30:02.736470 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-bpf-maps\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.736829 kubelet[2596]: I0702 00:30:02.736789 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/247422da-02f2-4d25-b7fc-4b3043ad1883-cilium-ipsec-secrets\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.736895 kubelet[2596]: I0702 00:30:02.736841 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-etc-cni-netd\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.736895 kubelet[2596]: I0702 00:30:02.736863 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/247422da-02f2-4d25-b7fc-4b3043ad1883-clustermesh-secrets\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.736939 kubelet[2596]: I0702 00:30:02.736878 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-cilium-run\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.736963 kubelet[2596]: I0702 00:30:02.736950 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-hostproc\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.736997 kubelet[2596]: I0702 00:30:02.736966 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-cilium-cgroup\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737075 kubelet[2596]: I0702 00:30:02.736982 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-cni-path\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737107 kubelet[2596]: I0702 00:30:02.737089 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-xtables-lock\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737130 kubelet[2596]: I0702 00:30:02.737116 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/247422da-02f2-4d25-b7fc-4b3043ad1883-cilium-config-path\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737159 kubelet[2596]: I0702 00:30:02.737139 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzbz\" (UniqueName: \"kubernetes.io/projected/247422da-02f2-4d25-b7fc-4b3043ad1883-kube-api-access-btzbz\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737190 kubelet[2596]: I0702 00:30:02.737161 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-host-proc-sys-kernel\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737190 kubelet[2596]: I0702 00:30:02.737178 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/247422da-02f2-4d25-b7fc-4b3043ad1883-hubble-tls\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737231 kubelet[2596]: I0702 00:30:02.737193 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-lib-modules\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.737231 kubelet[2596]: I0702 00:30:02.737209 2596 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/247422da-02f2-4d25-b7fc-4b3043ad1883-host-proc-sys-net\") pod \"cilium-rhzdd\" (UID: \"247422da-02f2-4d25-b7fc-4b3043ad1883\") " pod="kube-system/cilium-rhzdd"
Jul 2 00:30:02.738038 systemd[1]: Created slice kubepods-burstable-pod247422da_02f2_4d25_b7fc_4b3043ad1883.slice - libcontainer container kubepods-burstable-pod247422da_02f2_4d25_b7fc_4b3043ad1883.slice.
Jul 2 00:30:02.762558 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 45850 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:30:02.763940 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:02.767776 systemd-logind[1439]: New session 29 of user core.
Jul 2 00:30:02.779351 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:30:02.830598 sshd[4420]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:02.854098 systemd[1]: sshd@28-10.0.0.160:22-10.0.0.1:45850.service: Deactivated successfully.
Jul 2 00:30:02.855818 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:30:02.857615 systemd-logind[1439]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:30:02.866466 systemd[1]: Started sshd@29-10.0.0.160:22-10.0.0.1:45854.service - OpenSSH per-connection server daemon (10.0.0.1:45854).
Jul 2 00:30:02.867580 systemd-logind[1439]: Removed session 29.
Jul 2 00:30:02.895920 sshd[4435]: Accepted publickey for core from 10.0.0.1 port 45854 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:30:02.897560 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:30:02.901809 systemd-logind[1439]: New session 30 of user core.
Jul 2 00:30:02.908463 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 00:30:03.041561 kubelet[2596]: E0702 00:30:03.041524 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:03.042064 containerd[1459]: time="2024-07-02T00:30:03.042012250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhzdd,Uid:247422da-02f2-4d25-b7fc-4b3043ad1883,Namespace:kube-system,Attempt:0,}"
Jul 2 00:30:03.222846 containerd[1459]: time="2024-07-02T00:30:03.222271933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:30:03.222846 containerd[1459]: time="2024-07-02T00:30:03.222807739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:30:03.222846 containerd[1459]: time="2024-07-02T00:30:03.222823240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:30:03.222846 containerd[1459]: time="2024-07-02T00:30:03.222832777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:30:03.243387 systemd[1]: Started cri-containerd-d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d.scope - libcontainer container d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d.
Jul 2 00:30:03.265004 containerd[1459]: time="2024-07-02T00:30:03.264945938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhzdd,Uid:247422da-02f2-4d25-b7fc-4b3043ad1883,Namespace:kube-system,Attempt:0,} returns sandbox id \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\""
Jul 2 00:30:03.265691 kubelet[2596]: E0702 00:30:03.265669 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:03.267614 containerd[1459]: time="2024-07-02T00:30:03.267570184Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:30:03.281466 containerd[1459]: time="2024-07-02T00:30:03.281412255Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2\""
Jul 2 00:30:03.282067 containerd[1459]: time="2024-07-02T00:30:03.281897465Z" level=info msg="StartContainer for \"72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2\""
Jul 2 00:30:03.311365 systemd[1]: Started cri-containerd-72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2.scope - libcontainer container 72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2.
Jul 2 00:30:03.335392 containerd[1459]: time="2024-07-02T00:30:03.335353650Z" level=info msg="StartContainer for \"72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2\" returns successfully"
Jul 2 00:30:03.343676 systemd[1]: cri-containerd-72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2.scope: Deactivated successfully.
Jul 2 00:30:03.375006 containerd[1459]: time="2024-07-02T00:30:03.374939870Z" level=info msg="shim disconnected" id=72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2 namespace=k8s.io
Jul 2 00:30:03.375006 containerd[1459]: time="2024-07-02T00:30:03.375000196Z" level=warning msg="cleaning up after shim disconnected" id=72026be354eaff68c5c2ca9b5b536c34349b13166233d97e0320db6efc923af2 namespace=k8s.io
Jul 2 00:30:03.375006 containerd[1459]: time="2024-07-02T00:30:03.375009874Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:03.852445 kubelet[2596]: E0702 00:30:03.852421 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:03.854328 containerd[1459]: time="2024-07-02T00:30:03.854293833Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:30:03.868645 containerd[1459]: time="2024-07-02T00:30:03.868599122Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308\""
Jul 2 00:30:03.869269 containerd[1459]: time="2024-07-02T00:30:03.868972968Z" level=info msg="StartContainer for \"0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308\""
Jul 2 00:30:03.895364 systemd[1]: Started cri-containerd-0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308.scope - libcontainer container 0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308.
Jul 2 00:30:03.919788 containerd[1459]: time="2024-07-02T00:30:03.919740033Z" level=info msg="StartContainer for \"0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308\" returns successfully"
Jul 2 00:30:03.925682 systemd[1]: cri-containerd-0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308.scope: Deactivated successfully.
Jul 2 00:30:03.949044 containerd[1459]: time="2024-07-02T00:30:03.948986970Z" level=info msg="shim disconnected" id=0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308 namespace=k8s.io
Jul 2 00:30:03.949044 containerd[1459]: time="2024-07-02T00:30:03.949042686Z" level=warning msg="cleaning up after shim disconnected" id=0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308 namespace=k8s.io
Jul 2 00:30:03.949267 containerd[1459]: time="2024-07-02T00:30:03.949061042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:04.640619 kubelet[2596]: E0702 00:30:04.640580 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:04.846605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0608c9775951a9571e4e9d6b79fb57b569d41dac3ed2d3567f613297321f7308-rootfs.mount: Deactivated successfully.
Jul 2 00:30:04.855430 kubelet[2596]: E0702 00:30:04.855391 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:04.857691 containerd[1459]: time="2024-07-02T00:30:04.857387855Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:30:04.878741 containerd[1459]: time="2024-07-02T00:30:04.878675526Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11\""
Jul 2 00:30:04.879429 containerd[1459]: time="2024-07-02T00:30:04.879369105Z" level=info msg="StartContainer for \"9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11\""
Jul 2 00:30:04.913427 systemd[1]: Started cri-containerd-9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11.scope - libcontainer container 9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11.
Jul 2 00:30:04.943847 containerd[1459]: time="2024-07-02T00:30:04.943800749Z" level=info msg="StartContainer for \"9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11\" returns successfully"
Jul 2 00:30:04.944087 systemd[1]: cri-containerd-9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11.scope: Deactivated successfully.
Jul 2 00:30:04.971195 containerd[1459]: time="2024-07-02T00:30:04.971123748Z" level=info msg="shim disconnected" id=9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11 namespace=k8s.io
Jul 2 00:30:04.971195 containerd[1459]: time="2024-07-02T00:30:04.971179865Z" level=warning msg="cleaning up after shim disconnected" id=9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11 namespace=k8s.io
Jul 2 00:30:04.971195 containerd[1459]: time="2024-07-02T00:30:04.971189203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:05.846488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11-rootfs.mount: Deactivated successfully.
Jul 2 00:30:05.858753 kubelet[2596]: E0702 00:30:05.858722 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:05.860913 containerd[1459]: time="2024-07-02T00:30:05.860873204Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:30:05.874265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882890800.mount: Deactivated successfully.
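Each Cilium init container in this log follows the same short lifecycle: CreateContainer, StartContainer, scope deactivation, three "shim disconnected" cleanup lines, then the rootfs mount teardown. A sketch for pulling those events out of journal lines by container ID, based only on the message format visible above (the regex and function names are illustrative assumptions, not a containerd API):

```python
import re

# Match containerd journal entries of the shape seen in this log:
#   ... level=info msg="shim disconnected" id=<64-hex> namespace=k8s.io
# The msg pattern tolerates the \" escapes containerd uses inside messages.
LINE_RE = re.compile(r'msg="(?P<msg>(?:[^"\\]|\\.)*)"(?:\s+id=(?P<id>[0-9a-f]{64}))?')

def lifecycle_events(lines):
    """Yield (container_id_or_None, message) for each containerd log line."""
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            yield m.group("id"), m.group("msg")

sample = [
    'containerd[1459]: time="..." level=info msg="shim disconnected" '
    'id=9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11 namespace=k8s.io',
    'containerd[1459]: time="..." level=warning msg="cleaning up after shim disconnected" '
    'id=9e71077f106fc3148ff46a3d1fdf567eaf4b43043a32e96bc2328b5095e9cb11 namespace=k8s.io',
]
events = list(lifecycle_events(sample))
```

Grouping the yielded pairs by ID makes it easy to confirm that every `StartContainer ... returns successfully` for an init container is followed by the expected shim cleanup, rather than an error.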
Jul 2 00:30:05.875597 containerd[1459]: time="2024-07-02T00:30:05.875546627Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b\""
Jul 2 00:30:05.876070 containerd[1459]: time="2024-07-02T00:30:05.876035155Z" level=info msg="StartContainer for \"78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b\""
Jul 2 00:30:05.904367 systemd[1]: Started cri-containerd-78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b.scope - libcontainer container 78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b.
Jul 2 00:30:05.928260 systemd[1]: cri-containerd-78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b.scope: Deactivated successfully.
Jul 2 00:30:05.930398 containerd[1459]: time="2024-07-02T00:30:05.930346121Z" level=info msg="StartContainer for \"78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b\" returns successfully"
Jul 2 00:30:05.951633 containerd[1459]: time="2024-07-02T00:30:05.951575618Z" level=info msg="shim disconnected" id=78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b namespace=k8s.io
Jul 2 00:30:05.951633 containerd[1459]: time="2024-07-02T00:30:05.951627568Z" level=warning msg="cleaning up after shim disconnected" id=78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b namespace=k8s.io
Jul 2 00:30:05.951633 containerd[1459]: time="2024-07-02T00:30:05.951635834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:30:06.703155 kubelet[2596]: E0702 00:30:06.703117 2596 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:30:06.846468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78bdcca2e925123adccbaf6e298f611cfc0682850af40d761cbff6d41a8c2f5b-rootfs.mount: Deactivated successfully.
Jul 2 00:30:06.862118 kubelet[2596]: E0702 00:30:06.862088 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:06.864527 containerd[1459]: time="2024-07-02T00:30:06.864480595Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:30:06.877495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529480091.mount: Deactivated successfully.
Jul 2 00:30:06.879997 containerd[1459]: time="2024-07-02T00:30:06.879958621Z" level=info msg="CreateContainer within sandbox \"d08513d349cfc34a202564681a81845a3fa0a6baf58bbea2d65eedea5223004d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb850fc19f24563d8fd16e4a3c28e085f615dddb107e25012bf084f08b002e9f\""
Jul 2 00:30:06.880484 containerd[1459]: time="2024-07-02T00:30:06.880395219Z" level=info msg="StartContainer for \"fb850fc19f24563d8fd16e4a3c28e085f615dddb107e25012bf084f08b002e9f\""
Jul 2 00:30:06.910374 systemd[1]: Started cri-containerd-fb850fc19f24563d8fd16e4a3c28e085f615dddb107e25012bf084f08b002e9f.scope - libcontainer container fb850fc19f24563d8fd16e4a3c28e085f615dddb107e25012bf084f08b002e9f.
Jul 2 00:30:06.937669 containerd[1459]: time="2024-07-02T00:30:06.937625577Z" level=info msg="StartContainer for \"fb850fc19f24563d8fd16e4a3c28e085f615dddb107e25012bf084f08b002e9f\" returns successfully"
Jul 2 00:30:07.340271 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 00:30:07.866214 kubelet[2596]: E0702 00:30:07.866175 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:08.053376 kubelet[2596]: I0702 00:30:08.053323 2596 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:30:08Z","lastTransitionTime":"2024-07-02T00:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 00:30:09.042946 kubelet[2596]: E0702 00:30:09.042876 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:09.127821 systemd[1]: run-containerd-runc-k8s.io-fb850fc19f24563d8fd16e4a3c28e085f615dddb107e25012bf084f08b002e9f-runc.zNplzY.mount: Deactivated successfully.
Jul 2 00:30:10.256752 systemd-networkd[1398]: lxc_health: Link UP
Jul 2 00:30:10.261523 systemd-networkd[1398]: lxc_health: Gained carrier
Jul 2 00:30:11.044520 kubelet[2596]: E0702 00:30:11.043069 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:11.055932 kubelet[2596]: I0702 00:30:11.055890 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rhzdd" podStartSLOduration=9.055876781 podStartE2EDuration="9.055876781s" podCreationTimestamp="2024-07-02 00:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:30:07.87673449 +0000 UTC m=+91.328057396" watchObservedRunningTime="2024-07-02 00:30:11.055876781 +0000 UTC m=+94.507199687"
Jul 2 00:30:11.872057 kubelet[2596]: E0702 00:30:11.872022 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:11.979382 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Jul 2 00:30:12.872947 kubelet[2596]: E0702 00:30:12.872919 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:30:17.580308 sshd[4435]: pam_unix(sshd:session): session closed for user core
Jul 2 00:30:17.584646 systemd[1]: sshd@29-10.0.0.160:22-10.0.0.1:45854.service: Deactivated successfully.
Jul 2 00:30:17.586784 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:30:17.587519 systemd-logind[1439]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:30:17.588510 systemd-logind[1439]: Removed session 30.
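The podStartSLOduration=9.055876781 that kubelet reports for kube-system/cilium-rhzdd is consistent with simple timestamp arithmetic on the fields in that same entry: watchObservedRunningTime (2024-07-02 00:30:11.055876781 UTC) minus podCreationTimestamp (2024-07-02 00:30:02 UTC). A quick check of that arithmetic (this is a verification sketch, not kubelet's pod_startup_latency_tracker code):

```python
from datetime import datetime, timezone

# Timestamps taken from the kubelet log entry above (microsecond-rounded).
created = datetime(2024, 7, 2, 0, 30, 2, tzinfo=timezone.utc)
observed = datetime(2024, 7, 2, 0, 30, 11, 55877, tzinfo=timezone.utc)

# Duration from pod creation to observed running, as kubelet reports it.
slo_duration = (observed - created).total_seconds()
print(f"podStartSLOduration={slo_duration:.6f}s")
```

The zero-valued firstStartedPulling/lastFinishedPulling fields in the entry indicate no image pull was observed, so the SLO duration here is just creation-to-running time.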