Nov 13 08:31:54.042267 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024
Nov 13 08:31:54.042299 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 13 08:31:54.042314 kernel: BIOS-provided physical RAM map:
Nov 13 08:31:54.042321 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 13 08:31:54.042328 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 13 08:31:54.042334 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 13 08:31:54.042342 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Nov 13 08:31:54.042349 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Nov 13 08:31:54.042355 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 13 08:31:54.042365 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 13 08:31:54.042372 kernel: NX (Execute Disable) protection: active
Nov 13 08:31:54.042378 kernel: APIC: Static calls initialized
Nov 13 08:31:54.042385 kernel: SMBIOS 2.8 present.
Nov 13 08:31:54.042392 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 13 08:31:54.042400 kernel: Hypervisor detected: KVM
Nov 13 08:31:54.042410 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 13 08:31:54.042417 kernel: kvm-clock: using sched offset of 3749208636 cycles
Nov 13 08:31:54.042425 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 13 08:31:54.042433 kernel: tsc: Detected 1995.312 MHz processor
Nov 13 08:31:54.042440 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 13 08:31:54.042448 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 13 08:31:54.042456 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Nov 13 08:31:54.042463 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 13 08:31:54.042470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 13 08:31:54.042481 kernel: ACPI: Early table checksum verification disabled
Nov 13 08:31:54.042488 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Nov 13 08:31:54.042495 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042503 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042510 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042517 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 13 08:31:54.042524 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042531 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042538 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042548 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 13 08:31:54.042555 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 13 08:31:54.042562 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 13 08:31:54.042569 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 13 08:31:54.042576 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 13 08:31:54.042583 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 13 08:31:54.042590 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 13 08:31:54.042604 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 13 08:31:54.042612 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 13 08:31:54.042619 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 13 08:31:54.042627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 13 08:31:54.042634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 13 08:31:54.042642 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Nov 13 08:31:54.042649 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Nov 13 08:31:54.042659 kernel: Zone ranges:
Nov 13 08:31:54.042667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 13 08:31:54.042674 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Nov 13 08:31:54.042682 kernel: Normal empty
Nov 13 08:31:54.042689 kernel: Movable zone start for each node
Nov 13 08:31:54.042697 kernel: Early memory node ranges
Nov 13 08:31:54.042704 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 13 08:31:54.042711 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Nov 13 08:31:54.042719 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Nov 13 08:31:54.042729 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 13 08:31:54.042736 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 13 08:31:54.042744 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Nov 13 08:31:54.042752 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 13 08:31:54.042759 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 13 08:31:54.042767 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 13 08:31:54.042774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 13 08:31:54.042784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 13 08:31:54.042796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 13 08:31:54.043413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 13 08:31:54.043423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 13 08:31:54.043431 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 13 08:31:54.043439 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 13 08:31:54.043448 kernel: TSC deadline timer available
Nov 13 08:31:54.043455 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 13 08:31:54.043463 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 13 08:31:54.043471 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 13 08:31:54.043479 kernel: Booting paravirtualized kernel on KVM
Nov 13 08:31:54.043490 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 13 08:31:54.043498 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 13 08:31:54.043505 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 13 08:31:54.043513 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 13 08:31:54.043521 kernel: pcpu-alloc: [0] 0 1
Nov 13 08:31:54.043529 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 13 08:31:54.043539 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 13 08:31:54.043547 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 13 08:31:54.043557 kernel: random: crng init done
Nov 13 08:31:54.043565 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 13 08:31:54.043573 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 13 08:31:54.043580 kernel: Fallback order for Node 0: 0
Nov 13 08:31:54.043588 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Nov 13 08:31:54.043596 kernel: Policy zone: DMA32
Nov 13 08:31:54.043603 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 13 08:31:54.043611 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 125148K reserved, 0K cma-reserved)
Nov 13 08:31:54.043619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 13 08:31:54.043629 kernel: Kernel/User page tables isolation: enabled
Nov 13 08:31:54.043638 kernel: ftrace: allocating 37801 entries in 148 pages
Nov 13 08:31:54.043646 kernel: ftrace: allocated 148 pages with 3 groups
Nov 13 08:31:54.043654 kernel: Dynamic Preempt: voluntary
Nov 13 08:31:54.043661 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 13 08:31:54.043671 kernel: rcu: RCU event tracing is enabled.
Nov 13 08:31:54.043679 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 13 08:31:54.043686 kernel: Trampoline variant of Tasks RCU enabled.
Nov 13 08:31:54.043694 kernel: Rude variant of Tasks RCU enabled.
Nov 13 08:31:54.043705 kernel: Tracing variant of Tasks RCU enabled.
Nov 13 08:31:54.043713 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 13 08:31:54.043721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 13 08:31:54.043729 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 13 08:31:54.043737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 13 08:31:54.043745 kernel: Console: colour VGA+ 80x25
Nov 13 08:31:54.043752 kernel: printk: console [tty0] enabled
Nov 13 08:31:54.043760 kernel: printk: console [ttyS0] enabled
Nov 13 08:31:54.043768 kernel: ACPI: Core revision 20230628
Nov 13 08:31:54.043776 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 13 08:31:54.043787 kernel: APIC: Switch to symmetric I/O mode setup
Nov 13 08:31:54.043794 kernel: x2apic enabled
Nov 13 08:31:54.043823 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 13 08:31:54.043831 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 13 08:31:54.043839 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 13 08:31:54.043847 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Nov 13 08:31:54.043854 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 13 08:31:54.043862 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 13 08:31:54.043882 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 13 08:31:54.043891 kernel: Spectre V2 : Mitigation: Retpolines
Nov 13 08:31:54.043899 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 13 08:31:54.043910 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 13 08:31:54.043918 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 13 08:31:54.043926 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 13 08:31:54.043935 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 13 08:31:54.043943 kernel: MDS: Mitigation: Clear CPU buffers
Nov 13 08:31:54.043951 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 13 08:31:54.043963 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 13 08:31:54.043971 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 13 08:31:54.043979 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 13 08:31:54.043987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 13 08:31:54.043996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 13 08:31:54.044004 kernel: Freeing SMP alternatives memory: 32K
Nov 13 08:31:54.044012 kernel: pid_max: default: 32768 minimum: 301
Nov 13 08:31:54.044020 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 13 08:31:54.044032 kernel: landlock: Up and running.
Nov 13 08:31:54.044040 kernel: SELinux: Initializing.
Nov 13 08:31:54.044048 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 08:31:54.044057 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 13 08:31:54.044065 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 13 08:31:54.044074 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 13 08:31:54.044082 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 13 08:31:54.044090 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 13 08:31:54.044101 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 13 08:31:54.044110 kernel: signal: max sigframe size: 1776
Nov 13 08:31:54.044119 kernel: rcu: Hierarchical SRCU implementation.
Nov 13 08:31:54.044127 kernel: rcu: Max phase no-delay instances is 400.
Nov 13 08:31:54.044141 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 13 08:31:54.044149 kernel: smp: Bringing up secondary CPUs ...
Nov 13 08:31:54.044158 kernel: smpboot: x86: Booting SMP configuration:
Nov 13 08:31:54.044166 kernel: .... node #0, CPUs: #1
Nov 13 08:31:54.044175 kernel: smp: Brought up 1 node, 2 CPUs
Nov 13 08:31:54.044186 kernel: smpboot: Max logical packages: 1
Nov 13 08:31:54.044194 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Nov 13 08:31:54.044202 kernel: devtmpfs: initialized
Nov 13 08:31:54.044210 kernel: x86/mm: Memory block size: 128MB
Nov 13 08:31:54.044219 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 13 08:31:54.044227 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 13 08:31:54.044235 kernel: pinctrl core: initialized pinctrl subsystem
Nov 13 08:31:54.044243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 13 08:31:54.044251 kernel: audit: initializing netlink subsys (disabled)
Nov 13 08:31:54.044260 kernel: audit: type=2000 audit(1731486712.698:1): state=initialized audit_enabled=0 res=1
Nov 13 08:31:54.044271 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 13 08:31:54.044279 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 13 08:31:54.044287 kernel: cpuidle: using governor menu
Nov 13 08:31:54.044295 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 13 08:31:54.044304 kernel: dca service started, version 1.12.1
Nov 13 08:31:54.044312 kernel: PCI: Using configuration type 1 for base access
Nov 13 08:31:54.044320 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 13 08:31:54.044328 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 13 08:31:54.044339 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 13 08:31:54.044347 kernel: ACPI: Added _OSI(Module Device)
Nov 13 08:31:54.044356 kernel: ACPI: Added _OSI(Processor Device)
Nov 13 08:31:54.044364 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 13 08:31:54.044372 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 13 08:31:54.044381 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 13 08:31:54.044389 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 13 08:31:54.044397 kernel: ACPI: Interpreter enabled
Nov 13 08:31:54.044405 kernel: ACPI: PM: (supports S0 S5)
Nov 13 08:31:54.044413 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 13 08:31:54.044425 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 13 08:31:54.044433 kernel: PCI: Using E820 reservations for host bridge windows
Nov 13 08:31:54.044441 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 13 08:31:54.044450 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 13 08:31:54.044678 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 13 08:31:54.045027 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 13 08:31:54.045252 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 13 08:31:54.045273 kernel: acpiphp: Slot [3] registered
Nov 13 08:31:54.045282 kernel: acpiphp: Slot [4] registered
Nov 13 08:31:54.045290 kernel: acpiphp: Slot [5] registered
Nov 13 08:31:54.045299 kernel: acpiphp: Slot [6] registered
Nov 13 08:31:54.045307 kernel: acpiphp: Slot [7] registered
Nov 13 08:31:54.045316 kernel: acpiphp: Slot [8] registered
Nov 13 08:31:54.045324 kernel: acpiphp: Slot [9] registered
Nov 13 08:31:54.045332 kernel: acpiphp: Slot [10] registered
Nov 13 08:31:54.045998 kernel: acpiphp: Slot [11] registered
Nov 13 08:31:54.046027 kernel: acpiphp: Slot [12] registered
Nov 13 08:31:54.046036 kernel: acpiphp: Slot [13] registered
Nov 13 08:31:54.046044 kernel: acpiphp: Slot [14] registered
Nov 13 08:31:54.046053 kernel: acpiphp: Slot [15] registered
Nov 13 08:31:54.046061 kernel: acpiphp: Slot [16] registered
Nov 13 08:31:54.046069 kernel: acpiphp: Slot [17] registered
Nov 13 08:31:54.046077 kernel: acpiphp: Slot [18] registered
Nov 13 08:31:54.046086 kernel: acpiphp: Slot [19] registered
Nov 13 08:31:54.046094 kernel: acpiphp: Slot [20] registered
Nov 13 08:31:54.046102 kernel: acpiphp: Slot [21] registered
Nov 13 08:31:54.046113 kernel: acpiphp: Slot [22] registered
Nov 13 08:31:54.046122 kernel: acpiphp: Slot [23] registered
Nov 13 08:31:54.046130 kernel: acpiphp: Slot [24] registered
Nov 13 08:31:54.046138 kernel: acpiphp: Slot [25] registered
Nov 13 08:31:54.046146 kernel: acpiphp: Slot [26] registered
Nov 13 08:31:54.046154 kernel: acpiphp: Slot [27] registered
Nov 13 08:31:54.046162 kernel: acpiphp: Slot [28] registered
Nov 13 08:31:54.046170 kernel: acpiphp: Slot [29] registered
Nov 13 08:31:54.046179 kernel: acpiphp: Slot [30] registered
Nov 13 08:31:54.046189 kernel: acpiphp: Slot [31] registered
Nov 13 08:31:54.046197 kernel: PCI host bridge to bus 0000:00
Nov 13 08:31:54.046357 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 13 08:31:54.046449 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 13 08:31:54.046535 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 13 08:31:54.046620 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 13 08:31:54.046704 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 13 08:31:54.046793 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 13 08:31:54.046958 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 13 08:31:54.047075 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 13 08:31:54.047206 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 13 08:31:54.047320 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 13 08:31:54.047417 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 13 08:31:54.047533 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 13 08:31:54.047634 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 13 08:31:54.047732 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 13 08:31:54.047856 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 13 08:31:54.047972 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 13 08:31:54.048083 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 13 08:31:54.048187 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 13 08:31:54.048335 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 13 08:31:54.048467 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 13 08:31:54.048570 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 13 08:31:54.048671 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 13 08:31:54.048774 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 13 08:31:54.048962 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 13 08:31:54.049062 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 13 08:31:54.049184 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 13 08:31:54.049284 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 13 08:31:54.049394 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 13 08:31:54.049513 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 13 08:31:54.052070 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 13 08:31:54.052195 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 13 08:31:54.052309 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 13 08:31:54.052411 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 13 08:31:54.052527 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 13 08:31:54.052629 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 13 08:31:54.052728 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 13 08:31:54.052931 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 13 08:31:54.053085 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 13 08:31:54.053203 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 13 08:31:54.053298 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 13 08:31:54.053393 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 13 08:31:54.053532 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 13 08:31:54.053681 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 13 08:31:54.053849 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 13 08:31:54.053990 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 13 08:31:54.054158 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 13 08:31:54.054279 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 13 08:31:54.054380 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 13 08:31:54.054391 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 13 08:31:54.054401 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 13 08:31:54.054410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 13 08:31:54.054419 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 13 08:31:54.054433 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 13 08:31:54.054441 kernel: iommu: Default domain type: Translated
Nov 13 08:31:54.054450 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 13 08:31:54.054458 kernel: PCI: Using ACPI for IRQ routing
Nov 13 08:31:54.054467 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 13 08:31:54.054476 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 13 08:31:54.054484 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Nov 13 08:31:54.054599 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 13 08:31:54.054709 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 13 08:31:54.057036 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 13 08:31:54.057079 kernel: vgaarb: loaded
Nov 13 08:31:54.057089 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 13 08:31:54.057098 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 13 08:31:54.057107 kernel: clocksource: Switched to clocksource kvm-clock
Nov 13 08:31:54.057116 kernel: VFS: Disk quotas dquot_6.6.0
Nov 13 08:31:54.057131 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 13 08:31:54.057144 kernel: pnp: PnP ACPI init
Nov 13 08:31:54.057158 kernel: pnp: PnP ACPI: found 4 devices
Nov 13 08:31:54.057176 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 13 08:31:54.057186 kernel: NET: Registered PF_INET protocol family
Nov 13 08:31:54.057194 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 13 08:31:54.057204 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 13 08:31:54.057212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 13 08:31:54.057227 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 13 08:31:54.057240 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 13 08:31:54.057256 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 13 08:31:54.057268 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 08:31:54.057281 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 13 08:31:54.057290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 13 08:31:54.057298 kernel: NET: Registered PF_XDP protocol family
Nov 13 08:31:54.057476 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 13 08:31:54.057648 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 13 08:31:54.057790 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 13 08:31:54.057956 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 13 08:31:54.058093 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 13 08:31:54.058276 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 13 08:31:54.058397 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 13 08:31:54.058412 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 13 08:31:54.058542 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 34106 usecs
Nov 13 08:31:54.058564 kernel: PCI: CLS 0 bytes, default 64
Nov 13 08:31:54.058578 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 13 08:31:54.058594 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 13 08:31:54.058606 kernel: Initialise system trusted keyrings
Nov 13 08:31:54.058620 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 13 08:31:54.058629 kernel: Key type asymmetric registered
Nov 13 08:31:54.058638 kernel: Asymmetric key parser 'x509' registered
Nov 13 08:31:54.058653 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 13 08:31:54.058667 kernel: io scheduler mq-deadline registered
Nov 13 08:31:54.058684 kernel: io scheduler kyber registered
Nov 13 08:31:54.058696 kernel: io scheduler bfq registered
Nov 13 08:31:54.058705 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 13 08:31:54.058714 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 13 08:31:54.058727 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 13 08:31:54.058740 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 13 08:31:54.058753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 13 08:31:54.058767 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 13 08:31:54.058777 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 13 08:31:54.058785 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 13 08:31:54.058794 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 13 08:31:54.058822 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 13 08:31:54.058960 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 13 08:31:54.059076 kernel: rtc_cmos 00:03: registered as rtc0
Nov 13 08:31:54.059213 kernel: rtc_cmos 00:03: setting system clock to 2024-11-13T08:31:53 UTC (1731486713)
Nov 13 08:31:54.059348 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 13 08:31:54.059363 kernel: intel_pstate: CPU model not supported
Nov 13 08:31:54.059372 kernel: NET: Registered PF_INET6 protocol family
Nov 13 08:31:54.059380 kernel: Segment Routing with IPv6
Nov 13 08:31:54.059389 kernel: In-situ OAM (IOAM) with IPv6
Nov 13 08:31:54.059398 kernel: NET: Registered PF_PACKET protocol family
Nov 13 08:31:54.059413 kernel: Key type dns_resolver registered
Nov 13 08:31:54.059422 kernel: IPI shorthand broadcast: enabled
Nov 13 08:31:54.059431 kernel: sched_clock: Marking stable (1320004261, 183003812)->(1551602955, -48594882)
Nov 13 08:31:54.059457 kernel: registered taskstats version 1
Nov 13 08:31:54.059469 kernel: Loading compiled-in X.509 certificates
Nov 13 08:31:54.059478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4'
Nov 13 08:31:54.059487 kernel: Key type .fscrypt registered
Nov 13 08:31:54.059496 kernel: Key type fscrypt-provisioning registered
Nov 13 08:31:54.059504 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 13 08:31:54.059517 kernel: ima: Allocated hash algorithm: sha1
Nov 13 08:31:54.059526 kernel: ima: No architecture policies found
Nov 13 08:31:54.059534 kernel: clk: Disabling unused clocks
Nov 13 08:31:54.059542 kernel: Freeing unused kernel image (initmem) memory: 42968K
Nov 13 08:31:54.059551 kernel: Write protecting the kernel read-only data: 36864k
Nov 13 08:31:54.059581 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Nov 13 08:31:54.059592 kernel: Run /init as init process
Nov 13 08:31:54.059601 kernel: with arguments:
Nov 13 08:31:54.059610 kernel: /init
Nov 13 08:31:54.059621 kernel: with environment:
Nov 13 08:31:54.059631 kernel: HOME=/
Nov 13 08:31:54.059644 kernel: TERM=linux
Nov 13 08:31:54.059655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 13 08:31:54.059672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 13 08:31:54.059688 systemd[1]: Detected virtualization kvm.
Nov 13 08:31:54.059702 systemd[1]: Detected architecture x86-64.
Nov 13 08:31:54.059718 systemd[1]: Running in initrd.
Nov 13 08:31:54.059731 systemd[1]: No hostname configured, using default hostname.
Nov 13 08:31:54.059746 systemd[1]: Hostname set to .
Nov 13 08:31:54.059756 systemd[1]: Initializing machine ID from VM UUID.
Nov 13 08:31:54.059765 systemd[1]: Queued start job for default target initrd.target.
Nov 13 08:31:54.059774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:31:54.059783 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:31:54.059793 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 13 08:31:54.059835 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 13 08:31:54.059845 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 13 08:31:54.059855 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 13 08:31:54.059866 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 13 08:31:54.059875 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 13 08:31:54.059884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:31:54.059893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:31:54.059906 systemd[1]: Reached target paths.target - Path Units.
Nov 13 08:31:54.059915 systemd[1]: Reached target slices.target - Slice Units.
Nov 13 08:31:54.059924 systemd[1]: Reached target swap.target - Swaps.
Nov 13 08:31:54.059937 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 08:31:54.059947 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 13 08:31:54.059956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 13 08:31:54.059968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 13 08:31:54.059977 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 13 08:31:54.059986 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:31:54.059996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:31:54.060006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:31:54.060015 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 08:31:54.060024 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 13 08:31:54.060033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 08:31:54.060046 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 13 08:31:54.060055 systemd[1]: Starting systemd-fsck-usr.service...
Nov 13 08:31:54.060064 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 08:31:54.060074 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 08:31:54.060083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:31:54.060095 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 13 08:31:54.060150 systemd-journald[184]: Collecting audit messages is disabled.
Nov 13 08:31:54.060179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:31:54.060188 systemd[1]: Finished systemd-fsck-usr.service.
Nov 13 08:31:54.060198 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 08:31:54.060213 systemd-journald[184]: Journal started
Nov 13 08:31:54.060236 systemd-journald[184]: Runtime Journal (/run/log/journal/591e971e231a40d49fc2ffacd4986cb3) is 4.9M, max 39.3M, 34.4M free.
Nov 13 08:31:54.059524 systemd-modules-load[185]: Inserted module 'overlay'
Nov 13 08:31:54.071846 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 08:31:54.101932 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 13 08:31:54.104578 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 13 08:31:54.119171 kernel: Bridge firewalling registered
Nov 13 08:31:54.117977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:31:54.119555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:31:54.126160 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:31:54.140247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 08:31:54.142394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 08:31:54.145091 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 08:31:54.154415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 08:31:54.176433 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:31:54.184601 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:31:54.200421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 08:31:54.201904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:31:54.204354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:31:54.213367 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 13 08:31:54.244736 systemd-resolved[216]: Positive Trust Anchors:
Nov 13 08:31:54.244761 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 08:31:54.244828 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 08:31:54.250037 systemd-resolved[216]: Defaulting to hostname 'linux'.
Nov 13 08:31:54.253183 dracut-cmdline[219]: dracut-dracut-053
Nov 13 08:31:54.251925 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 08:31:54.252612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:31:54.256372 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1
Nov 13 08:31:54.369888 kernel: SCSI subsystem initialized
Nov 13 08:31:54.382843 kernel: Loading iSCSI transport class v2.0-870.
Nov 13 08:31:54.396855 kernel: iscsi: registered transport (tcp)
Nov 13 08:31:54.427873 kernel: iscsi: registered transport (qla4xxx)
Nov 13 08:31:54.427997 kernel: QLogic iSCSI HBA Driver
Nov 13 08:31:54.492400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 13 08:31:54.500233 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 13 08:31:54.548865 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 13 08:31:54.548980 kernel: device-mapper: uevent: version 1.0.3
Nov 13 08:31:54.550829 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 13 08:31:54.599900 kernel: raid6: avx2x4 gen() 18670 MB/s
Nov 13 08:31:54.617884 kernel: raid6: avx2x2 gen() 16035 MB/s
Nov 13 08:31:54.636238 kernel: raid6: avx2x1 gen() 10962 MB/s
Nov 13 08:31:54.636380 kernel: raid6: using algorithm avx2x4 gen() 18670 MB/s
Nov 13 08:31:54.655285 kernel: raid6: .... xor() 6287 MB/s, rmw enabled
Nov 13 08:31:54.655409 kernel: raid6: using avx2x2 recovery algorithm
Nov 13 08:31:54.686855 kernel: xor: automatically using best checksumming function avx
Nov 13 08:31:54.871882 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 13 08:31:54.888752 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 13 08:31:54.895233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:31:54.914520 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Nov 13 08:31:54.919784 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:31:54.930114 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 13 08:31:54.964943 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Nov 13 08:31:55.007535 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 13 08:31:55.022302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 08:31:55.105190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:31:55.115151 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 13 08:31:55.145996 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 13 08:31:55.149624 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 13 08:31:55.151293 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:31:55.153785 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 13 08:31:55.160018 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 13 08:31:55.186473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 13 08:31:55.212875 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 13 08:31:55.280332 kernel: cryptd: max_cpu_qlen set to 1000
Nov 13 08:31:55.280356 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 13 08:31:55.280501 kernel: scsi host0: Virtio SCSI HBA
Nov 13 08:31:55.280659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 13 08:31:55.280673 kernel: GPT:9289727 != 125829119
Nov 13 08:31:55.280683 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 13 08:31:55.280694 kernel: GPT:9289727 != 125829119
Nov 13 08:31:55.280710 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 13 08:31:55.280721 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:31:55.280731 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 13 08:31:55.328429 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 13 08:31:55.328457 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Nov 13 08:31:55.328610 kernel: AES CTR mode by8 optimization enabled
Nov 13 08:31:55.328623 kernel: ACPI: bus type USB registered
Nov 13 08:31:55.328634 kernel: usbcore: registered new interface driver usbfs
Nov 13 08:31:55.328654 kernel: usbcore: registered new interface driver hub
Nov 13 08:31:55.328665 kernel: usbcore: registered new device driver usb
Nov 13 08:31:55.328676 kernel: libata version 3.00 loaded.
Nov 13 08:31:55.258651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 13 08:31:55.462026 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 13 08:31:55.462419 kernel: scsi host1: ata_piix
Nov 13 08:31:55.462651 kernel: scsi host2: ata_piix
Nov 13 08:31:55.462917 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 13 08:31:55.462941 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 13 08:31:55.462961 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 13 08:31:55.463172 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 13 08:31:55.463359 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 13 08:31:55.463542 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 13 08:31:55.463732 kernel: hub 1-0:1.0: USB hub found
Nov 13 08:31:55.464035 kernel: hub 1-0:1.0: 2 ports detected
Nov 13 08:31:55.464225 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464)
Nov 13 08:31:55.464247 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (446)
Nov 13 08:31:55.258783 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:31:55.263200 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 08:31:55.263757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:31:55.266182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:31:55.266920 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:31:55.275311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:31:55.445084 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 13 08:31:55.462485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:31:55.467882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 13 08:31:55.477764 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 13 08:31:55.479278 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 13 08:31:55.484878 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 13 08:31:55.493121 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 13 08:31:55.495951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 13 08:31:55.504976 disk-uuid[530]: Primary Header is updated.
Nov 13 08:31:55.504976 disk-uuid[530]: Secondary Entries is updated.
Nov 13 08:31:55.504976 disk-uuid[530]: Secondary Header is updated.
Nov 13 08:31:55.511083 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:31:55.515863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:31:55.530647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:31:56.519931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 13 08:31:56.521589 disk-uuid[531]: The operation has completed successfully.
Nov 13 08:31:56.570304 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 13 08:31:56.570463 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 13 08:31:56.587248 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 13 08:31:56.604381 sh[560]: Success
Nov 13 08:31:56.623140 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 13 08:31:56.700298 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 13 08:31:56.703984 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 13 08:31:56.706919 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 13 08:31:56.738335 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a
Nov 13 08:31:56.738428 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:31:56.739896 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 13 08:31:56.742357 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 13 08:31:56.742414 kernel: BTRFS info (device dm-0): using free space tree
Nov 13 08:31:56.751430 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 13 08:31:56.753117 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 13 08:31:56.759111 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 13 08:31:56.764158 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 13 08:31:56.778903 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:31:56.781883 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:31:56.781963 kernel: BTRFS info (device vda6): using free space tree
Nov 13 08:31:56.791849 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 13 08:31:56.808362 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:31:56.808014 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 13 08:31:56.818201 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 13 08:31:56.826198 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 13 08:31:56.911613 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 13 08:31:56.923219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 13 08:31:56.955520 systemd-networkd[745]: lo: Link UP
Nov 13 08:31:56.955533 systemd-networkd[745]: lo: Gained carrier
Nov 13 08:31:56.958354 systemd-networkd[745]: Enumeration completed
Nov 13 08:31:56.958832 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 13 08:31:56.958835 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 13 08:31:56.960030 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 13 08:31:56.963948 systemd[1]: Reached target network.target - Network.
Nov 13 08:31:56.964265 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 13 08:31:56.964271 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 13 08:31:56.965362 systemd-networkd[745]: eth0: Link UP
Nov 13 08:31:56.965369 systemd-networkd[745]: eth0: Gained carrier
Nov 13 08:31:56.965385 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 13 08:31:56.974839 systemd-networkd[745]: eth1: Link UP
Nov 13 08:31:56.974849 systemd-networkd[745]: eth1: Gained carrier
Nov 13 08:31:56.974866 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 13 08:31:56.990914 systemd-networkd[745]: eth0: DHCPv4 address 159.223.193.8/20, gateway 159.223.192.1 acquired from 169.254.169.253
Nov 13 08:31:56.994954 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253
Nov 13 08:31:57.005263 ignition[654]: Ignition 2.20.0
Nov 13 08:31:57.005278 ignition[654]: Stage: fetch-offline
Nov 13 08:31:57.005321 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:57.005332 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:57.008928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 13 08:31:57.005452 ignition[654]: parsed url from cmdline: ""
Nov 13 08:31:57.005456 ignition[654]: no config URL provided
Nov 13 08:31:57.005462 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Nov 13 08:31:57.005470 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Nov 13 08:31:57.005476 ignition[654]: failed to fetch config: resource requires networking
Nov 13 08:31:57.005902 ignition[654]: Ignition finished successfully
Nov 13 08:31:57.015114 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 13 08:31:57.033857 ignition[754]: Ignition 2.20.0
Nov 13 08:31:57.033876 ignition[754]: Stage: fetch
Nov 13 08:31:57.034129 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:57.034141 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:57.034239 ignition[754]: parsed url from cmdline: ""
Nov 13 08:31:57.034243 ignition[754]: no config URL provided
Nov 13 08:31:57.034248 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Nov 13 08:31:57.034256 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Nov 13 08:31:57.034282 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 13 08:31:57.050537 ignition[754]: GET result: OK
Nov 13 08:31:57.050713 ignition[754]: parsing config with SHA512: cde8c66e86cf0effb527cadc97699298832ad54711ec216be3ecbcdcbe62f554fe662b26b386319bbcbf748e6c6b430b555676ca46ca79c8bbf306a9021c0fe5
Nov 13 08:31:57.066236 unknown[754]: fetched base config from "system"
Nov 13 08:31:57.066958 unknown[754]: fetched base config from "system"
Nov 13 08:31:57.067562 ignition[754]: fetch: fetch complete
Nov 13 08:31:57.066972 unknown[754]: fetched user config from "digitalocean"
Nov 13 08:31:57.067571 ignition[754]: fetch: fetch passed
Nov 13 08:31:57.067637 ignition[754]: Ignition finished successfully
Nov 13 08:31:57.070374 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 13 08:31:57.077137 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 13 08:31:57.111256 ignition[761]: Ignition 2.20.0
Nov 13 08:31:57.112084 ignition[761]: Stage: kargs
Nov 13 08:31:57.112486 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:57.112506 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:57.117009 ignition[761]: kargs: kargs passed
Nov 13 08:31:57.117109 ignition[761]: Ignition finished successfully
Nov 13 08:31:57.118637 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 13 08:31:57.124163 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 13 08:31:57.156650 ignition[768]: Ignition 2.20.0
Nov 13 08:31:57.156670 ignition[768]: Stage: disks
Nov 13 08:31:57.156981 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:57.156997 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:57.161369 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 13 08:31:57.158679 ignition[768]: disks: disks passed
Nov 13 08:31:57.158761 ignition[768]: Ignition finished successfully
Nov 13 08:31:57.169445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 13 08:31:57.170536 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 13 08:31:57.171854 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 13 08:31:57.173205 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 13 08:31:57.174728 systemd[1]: Reached target basic.target - Basic System.
Nov 13 08:31:57.182079 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 13 08:31:57.204510 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 13 08:31:57.207608 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 13 08:31:57.214968 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 13 08:31:57.359237 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none.
Nov 13 08:31:57.360425 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 13 08:31:57.362019 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 13 08:31:57.373058 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 13 08:31:57.376631 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 13 08:31:57.380065 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 13 08:31:57.388956 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (784)
Nov 13 08:31:57.394712 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:31:57.394789 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:31:57.394824 kernel: BTRFS info (device vda6): using free space tree
Nov 13 08:31:57.400174 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 13 08:31:57.405242 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 13 08:31:57.403529 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 13 08:31:57.403587 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 13 08:31:57.410235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 13 08:31:57.411642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 13 08:31:57.421252 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 13 08:31:57.499827 coreos-metadata[786]: Nov 13 08:31:57.498 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:31:57.510879 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Nov 13 08:31:57.512564 coreos-metadata[787]: Nov 13 08:31:57.512 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:31:57.513850 coreos-metadata[786]: Nov 13 08:31:57.513 INFO Fetch successful
Nov 13 08:31:57.520651 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Nov 13 08:31:57.522519 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 13 08:31:57.522653 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 13 08:31:57.527000 coreos-metadata[787]: Nov 13 08:31:57.526 INFO Fetch successful
Nov 13 08:31:57.531077 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Nov 13 08:31:57.533497 coreos-metadata[787]: Nov 13 08:31:57.532 INFO wrote hostname ci-4152.0.0-e-2bf6127ade to /sysroot/etc/hostname
Nov 13 08:31:57.534725 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 13 08:31:57.538857 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 13 08:31:57.663723 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 13 08:31:57.670015 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 13 08:31:57.686994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 13 08:31:57.698862 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:31:57.723577 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 13 08:31:57.731990 ignition[907]: INFO : Ignition 2.20.0
Nov 13 08:31:57.731990 ignition[907]: INFO : Stage: mount
Nov 13 08:31:57.736014 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:57.736014 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:57.736014 ignition[907]: INFO : mount: mount passed
Nov 13 08:31:57.736014 ignition[907]: INFO : Ignition finished successfully
Nov 13 08:31:57.737429 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 13 08:31:57.738359 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 13 08:31:57.752053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 13 08:31:57.761218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 13 08:31:57.789847 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918)
Nov 13 08:31:57.791866 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a
Nov 13 08:31:57.793964 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 13 08:31:57.794062 kernel: BTRFS info (device vda6): using free space tree
Nov 13 08:31:57.798852 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 13 08:31:57.802261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 13 08:31:57.833031 ignition[935]: INFO : Ignition 2.20.0
Nov 13 08:31:57.833031 ignition[935]: INFO : Stage: files
Nov 13 08:31:57.835247 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:57.835247 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:57.835247 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Nov 13 08:31:57.838330 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 13 08:31:57.838330 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 13 08:31:57.840987 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 13 08:31:57.841885 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 13 08:31:57.841885 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 13 08:31:57.841860 unknown[935]: wrote ssh authorized keys file for user: core
Nov 13 08:31:57.844929 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 13 08:31:57.844929 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 13 08:31:57.844929 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 13 08:31:57.844929 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 13 08:31:57.886341 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 13 08:31:57.962514 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 13 08:31:57.962514 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 13 08:31:57.965241 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 13 08:31:58.107968 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Nov 13 08:31:58.145138 systemd-networkd[745]: eth1: Gained IPv6LL
Nov 13 08:31:58.191497 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 13 08:31:58.191497 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 13 08:31:58.193883 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 13 08:31:58.208102 systemd-networkd[745]: eth0: Gained IPv6LL
Nov 13 08:31:58.488957 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Nov 13 08:31:58.786852 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 13 08:31:58.786852 ignition[935]: INFO : files: op(d): [started] processing unit "containerd.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(d): [finished] processing unit "containerd.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 13 08:31:58.791033 ignition[935]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 13 08:31:58.800785 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 13 08:31:58.800785 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 13 08:31:58.800785 ignition[935]: INFO : files: files passed
Nov 13 08:31:58.800785 ignition[935]: INFO : Ignition finished successfully
Nov 13 08:31:58.793298 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 13 08:31:58.801627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 13 08:31:58.807299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 13 08:31:58.811554 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 13 08:31:58.811719 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 13 08:31:58.837833 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:31:58.837833 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:31:58.841878 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:31:58.844385 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 13 08:31:58.846457 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 13 08:31:58.852152 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 13 08:31:58.911426 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 13 08:31:58.911601 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 13 08:31:58.913354 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 13 08:31:58.914646 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 13 08:31:58.916349 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 13 08:31:58.921167 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 13 08:31:58.952200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 13 08:31:58.962165 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 13 08:31:58.974820 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:31:58.976852 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:31:58.977633 systemd[1]: Stopped target timers.target - Timer Units.
Nov 13 08:31:58.978245 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 13 08:31:58.978405 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 13 08:31:58.980285 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 13 08:31:58.981167 systemd[1]: Stopped target basic.target - Basic System.
Nov 13 08:31:58.982526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 13 08:31:58.983972 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 13 08:31:58.985284 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 13 08:31:58.986681 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 13 08:31:58.988118 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 13 08:31:58.990053 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 13 08:31:58.991579 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 13 08:31:58.992962 systemd[1]: Stopped target swap.target - Swaps.
Nov 13 08:31:58.994408 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 13 08:31:58.994643 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 13 08:31:58.996360 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:31:58.997258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:31:58.998664 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 13 08:31:58.998878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:31:59.000053 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 13 08:31:59.000266 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 13 08:31:59.002029 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 13 08:31:59.002291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 13 08:31:59.003659 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 13 08:31:59.003834 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 13 08:31:59.004595 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 13 08:31:59.004728 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 13 08:31:59.012391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 13 08:31:59.014513 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 13 08:31:59.014867 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:31:59.022307 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 13 08:31:59.023751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 13 08:31:59.025115 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:31:59.034261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 13 08:31:59.034456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 13 08:31:59.048198 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 13 08:31:59.049890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 13 08:31:59.056668 ignition[988]: INFO : Ignition 2.20.0
Nov 13 08:31:59.056668 ignition[988]: INFO : Stage: umount
Nov 13 08:31:59.061694 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:31:59.061694 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:31:59.061694 ignition[988]: INFO : umount: umount passed
Nov 13 08:31:59.061694 ignition[988]: INFO : Ignition finished successfully
Nov 13 08:31:59.064317 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 13 08:31:59.064484 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 13 08:31:59.068630 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 13 08:31:59.068928 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 13 08:31:59.070849 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 13 08:31:59.070947 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 13 08:31:59.075030 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 13 08:31:59.075113 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 13 08:31:59.087959 systemd[1]: Stopped target network.target - Network.
Nov 13 08:31:59.089083 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 13 08:31:59.089213 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 13 08:31:59.091410 systemd[1]: Stopped target paths.target - Path Units.
Nov 13 08:31:59.102566 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 13 08:31:59.103291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:31:59.104019 systemd[1]: Stopped target slices.target - Slice Units.
Nov 13 08:31:59.105300 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 13 08:31:59.106667 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 13 08:31:59.106741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 13 08:31:59.108113 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 13 08:31:59.108196 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 13 08:31:59.118307 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 13 08:31:59.118416 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 13 08:31:59.119651 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 13 08:31:59.119744 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 13 08:31:59.121404 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 13 08:31:59.123316 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 13 08:31:59.125685 systemd-networkd[745]: eth0: DHCPv6 lease lost
Nov 13 08:31:59.143027 systemd-networkd[745]: eth1: DHCPv6 lease lost
Nov 13 08:31:59.148929 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 13 08:31:59.149839 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 13 08:31:59.149989 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 13 08:31:59.172766 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 13 08:31:59.173381 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 13 08:31:59.177310 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 13 08:31:59.177432 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 13 08:31:59.181294 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 13 08:31:59.181370 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:31:59.183020 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 13 08:31:59.183145 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 13 08:31:59.193087 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 13 08:31:59.194747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 13 08:31:59.197184 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 13 08:31:59.198128 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 13 08:31:59.198211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:31:59.199628 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 13 08:31:59.199706 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:31:59.201016 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 13 08:31:59.201082 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:31:59.203250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:31:59.215957 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 13 08:31:59.216778 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:31:59.217789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 13 08:31:59.218565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:31:59.219698 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 13 08:31:59.219765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:31:59.222896 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 13 08:31:59.222990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 13 08:31:59.224916 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 13 08:31:59.225007 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 13 08:31:59.226338 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 13 08:31:59.226405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:31:59.232107 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 13 08:31:59.232749 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 13 08:31:59.232844 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:31:59.234338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:31:59.234411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:31:59.237294 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 13 08:31:59.237856 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 13 08:31:59.248309 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 13 08:31:59.248435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 13 08:31:59.250184 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 13 08:31:59.255147 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 13 08:31:59.271781 systemd[1]: Switching root.
Nov 13 08:31:59.337714 systemd-journald[184]: Journal stopped
Nov 13 08:32:00.701436 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Nov 13 08:32:00.701610 kernel: SELinux: policy capability network_peer_controls=1
Nov 13 08:32:00.701637 kernel: SELinux: policy capability open_perms=1
Nov 13 08:32:00.701658 kernel: SELinux: policy capability extended_socket_class=1
Nov 13 08:32:00.701685 kernel: SELinux: policy capability always_check_network=0
Nov 13 08:32:00.701702 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 13 08:32:00.701730 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 13 08:32:00.701748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 13 08:32:00.701767 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 13 08:32:00.701791 kernel: audit: type=1403 audit(1731486719.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 13 08:32:00.701965 systemd[1]: Successfully loaded SELinux policy in 43.554ms.
Nov 13 08:32:00.702010 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.070ms.
Nov 13 08:32:00.702033 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 13 08:32:00.702055 systemd[1]: Detected virtualization kvm.
Nov 13 08:32:00.702076 systemd[1]: Detected architecture x86-64.
Nov 13 08:32:00.702102 systemd[1]: Detected first boot.
Nov 13 08:32:00.702123 systemd[1]: Hostname set to .
Nov 13 08:32:00.702150 systemd[1]: Initializing machine ID from VM UUID.
Nov 13 08:32:00.702172 zram_generator::config[1049]: No configuration found.
Nov 13 08:32:00.702192 systemd[1]: Populated /etc with preset unit settings.
Nov 13 08:32:00.702210 systemd[1]: Queued start job for default target multi-user.target.
Nov 13 08:32:00.702227 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 13 08:32:00.702248 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 13 08:32:00.702268 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 13 08:32:00.702284 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 13 08:32:00.702312 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 13 08:32:00.702336 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 13 08:32:00.702355 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 13 08:32:00.702372 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 13 08:32:00.702392 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 13 08:32:00.702410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:32:00.702426 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:32:00.702444 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 13 08:32:00.702463 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 13 08:32:00.702487 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 13 08:32:00.702506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 13 08:32:00.702529 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 13 08:32:00.702547 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:32:00.703873 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 13 08:32:00.703932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:32:00.703958 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 13 08:32:00.703984 systemd[1]: Reached target slices.target - Slice Units.
Nov 13 08:32:00.704005 systemd[1]: Reached target swap.target - Swaps.
Nov 13 08:32:00.704024 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 13 08:32:00.704045 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 13 08:32:00.704064 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 13 08:32:00.704084 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 13 08:32:00.704104 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:32:00.704125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:32:00.704145 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:32:00.704171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 13 08:32:00.704192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 13 08:32:00.704212 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 13 08:32:00.704232 systemd[1]: Mounting media.mount - External Media Directory...
Nov 13 08:32:00.704253 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:00.704275 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 13 08:32:00.704298 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 13 08:32:00.704320 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 13 08:32:00.704342 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 13 08:32:00.704368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:32:00.704389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 08:32:00.704409 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 13 08:32:00.704432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:32:00.704453 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 08:32:00.704475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:32:00.704494 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 13 08:32:00.704514 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:32:00.704541 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 13 08:32:00.704563 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 13 08:32:00.704586 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 13 08:32:00.704607 kernel: fuse: init (API version 7.39)
Nov 13 08:32:00.704629 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 08:32:00.704650 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 08:32:00.704670 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 13 08:32:00.704749 systemd-journald[1143]: Collecting audit messages is disabled.
Nov 13 08:32:00.704854 kernel: loop: module loaded
Nov 13 08:32:00.704875 kernel: ACPI: bus type drm_connector registered
Nov 13 08:32:00.704897 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 13 08:32:00.704919 systemd-journald[1143]: Journal started
Nov 13 08:32:00.704963 systemd-journald[1143]: Runtime Journal (/run/log/journal/591e971e231a40d49fc2ffacd4986cb3) is 4.9M, max 39.3M, 34.4M free.
Nov 13 08:32:00.721865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 08:32:00.727695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:00.732912 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 08:32:00.734896 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 13 08:32:00.741391 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 13 08:32:00.742787 systemd[1]: Mounted media.mount - External Media Directory.
Nov 13 08:32:00.743688 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 13 08:32:00.744654 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 13 08:32:00.745495 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 13 08:32:00.746997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 13 08:32:00.748340 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:32:00.749579 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 13 08:32:00.749891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 13 08:32:00.751794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:32:00.752091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:32:00.753391 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 08:32:00.753717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 08:32:00.755052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:32:00.755334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:32:00.757032 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 13 08:32:00.757291 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 13 08:32:00.758714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:32:00.761149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:32:00.764841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:32:00.768143 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 13 08:32:00.772518 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 13 08:32:00.793646 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 13 08:32:00.805170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 13 08:32:00.821986 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 13 08:32:00.827122 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 13 08:32:00.846993 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 13 08:32:00.864180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 13 08:32:00.868020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:32:00.883217 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 13 08:32:00.886571 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:32:00.903512 systemd-journald[1143]: Time spent on flushing to /var/log/journal/591e971e231a40d49fc2ffacd4986cb3 is 75.121ms for 974 entries.
Nov 13 08:32:00.903512 systemd-journald[1143]: System Journal (/var/log/journal/591e971e231a40d49fc2ffacd4986cb3) is 8.0M, max 195.6M, 187.6M free.
Nov 13 08:32:00.994773 systemd-journald[1143]: Received client request to flush runtime journal.
Nov 13 08:32:00.905257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 08:32:00.922219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 08:32:00.930793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:32:00.933481 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 13 08:32:00.936603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 13 08:32:00.939334 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 13 08:32:00.947634 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 13 08:32:00.959166 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 13 08:32:00.995243 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 13 08:32:00.998891 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 13 08:32:01.021154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:32:01.026841 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Nov 13 08:32:01.026873 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Nov 13 08:32:01.035272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:32:01.051366 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 13 08:32:01.109044 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 13 08:32:01.118226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 08:32:01.154527 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Nov 13 08:32:01.154566 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Nov 13 08:32:01.162821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:32:01.992635 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 13 08:32:02.013661 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:32:02.059195 systemd-udevd[1218]: Using default interface naming scheme 'v255'.
Nov 13 08:32:02.150036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:32:02.196177 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 13 08:32:02.212602 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 13 08:32:02.361895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:02.362160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:32:02.370295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:32:02.393086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:32:02.409708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:32:02.416961 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 13 08:32:02.417056 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 13 08:32:02.417137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:02.428198 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 13 08:32:02.435512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:32:02.435858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:32:02.453541 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:32:02.453889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:32:02.463930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:32:02.464564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:32:02.467170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:32:02.478575 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 13 08:32:02.484159 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:32:02.490847 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1226)
Nov 13 08:32:02.513324 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1226)
Nov 13 08:32:02.527097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 13 08:32:02.536253 kernel: ACPI: button: Power Button [PWRF]
Nov 13 08:32:02.536397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1234)
Nov 13 08:32:02.541936 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 13 08:32:02.593013 systemd-networkd[1222]: lo: Link UP
Nov 13 08:32:02.593024 systemd-networkd[1222]: lo: Gained carrier
Nov 13 08:32:02.596385 systemd-networkd[1222]: Enumeration completed
Nov 13 08:32:02.597004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 13 08:32:02.598867 systemd-networkd[1222]: eth0: Configuring with /run/systemd/network/10-da:cf:c9:7a:38:88.network.
Nov 13 08:32:02.602061 systemd-networkd[1222]: eth1: Configuring with /run/systemd/network/10-42:d6:cc:25:6b:84.network.
Nov 13 08:32:02.603902 systemd-networkd[1222]: eth0: Link UP
Nov 13 08:32:02.603916 systemd-networkd[1222]: eth0: Gained carrier
Nov 13 08:32:02.607155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 13 08:32:02.608272 systemd-networkd[1222]: eth1: Link UP
Nov 13 08:32:02.608288 systemd-networkd[1222]: eth1: Gained carrier
Nov 13 08:32:02.674851 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 13 08:32:02.722955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 13 08:32:02.758852 kernel: mousedev: PS/2 mouse device common for all mice
Nov 13 08:32:02.788352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:32:02.805040 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 13 08:32:02.805168 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 13 08:32:02.817840 kernel: Console: switching to colour dummy device 80x25
Nov 13 08:32:02.821560 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 13 08:32:02.821696 kernel: [drm] features: -context_init
Nov 13 08:32:02.827928 kernel: [drm] number of scanouts: 1
Nov 13 08:32:02.828068 kernel: [drm] number of cap sets: 0
Nov 13 08:32:02.835840 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Nov 13 08:32:02.842706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:32:02.842998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:32:02.882828 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 13 08:32:02.886577 kernel: Console: switching to colour frame buffer device 128x48
Nov 13 08:32:02.891228 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 13 08:32:02.892625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:32:02.929583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:32:02.930104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:32:02.945223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:32:03.019447 kernel: EDAC MC: Ver: 3.0.0
Nov 13 08:32:03.059880 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 13 08:32:03.071182 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 13 08:32:03.093109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:32:03.094378 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 08:32:03.140531 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 13 08:32:03.143688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:32:03.154664 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 13 08:32:03.178904 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 08:32:03.214059 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 13 08:32:03.216736 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 13 08:32:03.230037 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 13 08:32:03.230291 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 13 08:32:03.230348 systemd[1]: Reached target machines.target - Containers.
Nov 13 08:32:03.235526 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 13 08:32:03.259879 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 13 08:32:03.266523 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 13 08:32:03.269814 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 13 08:32:03.274794 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 13 08:32:03.284300 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 13 08:32:03.299254 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 13 08:32:03.299828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:32:03.304084 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 13 08:32:03.318167 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 13 08:32:03.325348 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 13 08:32:03.335839 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 13 08:32:03.364874 kernel: loop0: detected capacity change from 0 to 211296
Nov 13 08:32:03.392953 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 13 08:32:03.398085 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 13 08:32:03.410895 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 13 08:32:03.441444 kernel: loop1: detected capacity change from 0 to 8
Nov 13 08:32:03.465403 kernel: loop2: detected capacity change from 0 to 140992
Nov 13 08:32:03.519942 kernel: loop3: detected capacity change from 0 to 138184
Nov 13 08:32:03.578021 kernel: loop4: detected capacity change from 0 to 211296
Nov 13 08:32:03.609292 kernel: loop5: detected capacity change from 0 to 8
Nov 13 08:32:03.613793 kernel: loop6: detected capacity change from 0 to 140992
Nov 13 08:32:03.647661 kernel: loop7: detected capacity change from 0 to 138184
Nov 13 08:32:03.665738 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 13 08:32:03.666501 (sd-merge)[1314]: Merged extensions into '/usr'.
Nov 13 08:32:03.673480 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 13 08:32:03.673502 systemd[1]: Reloading...
Nov 13 08:32:03.790338 zram_generator::config[1339]: No configuration found.
Nov 13 08:32:03.841124 systemd-networkd[1222]: eth1: Gained IPv6LL
Nov 13 08:32:03.906906 systemd-networkd[1222]: eth0: Gained IPv6LL
Nov 13 08:32:04.081760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:32:04.094946 ldconfig[1299]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 13 08:32:04.150515 systemd[1]: Reloading finished in 476 ms.
Nov 13 08:32:04.170940 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 13 08:32:04.175011 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 13 08:32:04.177493 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 13 08:32:04.190358 systemd[1]: Starting ensure-sysext.service...
Nov 13 08:32:04.197237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 08:32:04.210505 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
Nov 13 08:32:04.210533 systemd[1]: Reloading...
Nov 13 08:32:04.280510 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 13 08:32:04.282333 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 13 08:32:04.285564 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 13 08:32:04.286368 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Nov 13 08:32:04.286467 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Nov 13 08:32:04.292191 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 08:32:04.292466 systemd-tmpfiles[1395]: Skipping /boot
Nov 13 08:32:04.310793 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 08:32:04.311087 systemd-tmpfiles[1395]: Skipping /boot
Nov 13 08:32:04.358846 zram_generator::config[1425]: No configuration found.
Nov 13 08:32:04.548530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:32:04.632187 systemd[1]: Reloading finished in 420 ms.
Nov 13 08:32:04.654154 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:32:04.679475 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 13 08:32:04.694475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 13 08:32:04.701113 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 13 08:32:04.721743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 08:32:04.737644 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 13 08:32:04.751734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:04.752616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:32:04.759897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:32:04.779291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:32:04.794230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:32:04.796997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:32:04.797251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:04.815672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 13 08:32:04.828536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:32:04.837337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:32:04.841116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:32:04.841390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:32:04.845188 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:32:04.845476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:32:04.870383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:04.870876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:32:04.881413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:32:04.896904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:32:04.918470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:32:04.921341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:32:04.931317 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 13 08:32:04.936277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:04.940504 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 13 08:32:04.947866 augenrules[1516]: No rules
Nov 13 08:32:04.951231 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 13 08:32:04.956356 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 13 08:32:04.962933 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 13 08:32:04.966417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:32:04.967356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:32:04.970608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:32:04.971149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:32:04.975512 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:32:04.976199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:32:04.995337 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 13 08:32:05.002060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:05.010094 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 13 08:32:05.014650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:32:05.025465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:32:05.038843 systemd-resolved[1477]: Positive Trust Anchors:
Nov 13 08:32:05.038873 systemd-resolved[1477]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 08:32:05.038926 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 08:32:05.041624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 08:32:05.054036 systemd-resolved[1477]: Using system hostname 'ci-4152.0.0-e-2bf6127ade'.
Nov 13 08:32:05.064471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:32:05.085139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:32:05.090951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:32:05.091161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 13 08:32:05.091251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:32:05.092447 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 08:32:05.102774 augenrules[1534]: /sbin/augenrules: No change
Nov 13 08:32:05.104455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:32:05.104648 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:32:05.107891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 08:32:05.108175 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 08:32:05.113774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:32:05.114369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:32:05.122261 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:32:05.122506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:32:05.133200 systemd[1]: Finished ensure-sysext.service.
Nov 13 08:32:05.137837 augenrules[1559]: No rules
Nov 13 08:32:05.142474 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 13 08:32:05.142932 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 13 08:32:05.147858 systemd[1]: Reached target network.target - Network.
Nov 13 08:32:05.148606 systemd[1]: Reached target network-online.target - Network is Online.
Nov 13 08:32:05.149306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:32:05.154500 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:32:05.154632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:32:05.163266 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 13 08:32:05.264791 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 13 08:32:05.266134 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 13 08:32:05.267738 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 13 08:32:05.270983 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 13 08:32:05.274610 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 13 08:32:05.275650 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 13 08:32:05.275709 systemd[1]: Reached target paths.target - Path Units.
Nov 13 08:32:05.278078 systemd[1]: Reached target time-set.target - System Time Set.
Nov 13 08:32:05.279059 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 13 08:32:05.279781 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 13 08:32:05.280550 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 08:32:05.285883 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 13 08:32:05.290776 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 13 08:32:05.298492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 13 08:32:05.301339 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 13 08:32:05.303380 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 08:32:05.305466 systemd[1]: Reached target basic.target - Basic System.
Nov 13 08:32:05.306550 systemd[1]: System is tainted: cgroupsv1
Nov 13 08:32:05.306653 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 13 08:32:05.306689 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 13 08:32:05.312996 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 13 08:32:05.862325 systemd-timesyncd[1573]: Contacted time server 45.79.35.159:123 (0.flatcar.pool.ntp.org).
Nov 13 08:32:05.862469 systemd-timesyncd[1573]: Initial clock synchronization to Wed 2024-11-13 08:32:05.862116 UTC.
Nov 13 08:32:05.862563 systemd-resolved[1477]: Clock change detected. Flushing caches.
Nov 13 08:32:05.871274 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 13 08:32:05.880303 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 13 08:32:05.893148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 13 08:32:05.908340 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 13 08:32:05.909194 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 13 08:32:05.925279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:32:05.928253 jq[1583]: false
Nov 13 08:32:05.939233 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 13 08:32:05.943993 coreos-metadata[1578]: Nov 13 08:32:05.943 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:32:05.958201 systemd[1]: Starting nvidia.service - NVIDIA Configure Service.
Nov 13 08:32:05.961744 coreos-metadata[1578]: Nov 13 08:32:05.958 INFO Fetch successful
Nov 13 08:32:05.969393 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 13 08:32:05.978411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 13 08:32:05.992207 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 13 08:32:06.000616 dbus-daemon[1580]: [system] SELinux support is enabled
Nov 13 08:32:06.014336 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 13 08:32:06.023045 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 13 08:32:06.046401 systemd[1]: Starting update-engine.service - Update Engine...
Nov 13 08:32:06.058433 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 13 08:32:06.061468 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 13 08:32:06.083444 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 13 08:32:06.083785 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found loop4
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found loop5
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found loop6
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found loop7
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda1
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda2
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda3
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found usr
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda4
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda6
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda7
Nov 13 08:32:06.099005 extend-filesystems[1584]: Found vda9
Nov 13 08:32:06.099005 extend-filesystems[1584]: Checking size of /dev/vda9
Nov 13 08:32:06.107617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 13 08:32:06.242125 update_engine[1602]: I20241113 08:32:06.239761 1602 main.cc:92] Flatcar Update Engine starting
Nov 13 08:32:06.108068 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 13 08:32:06.262588 jq[1606]: true
Nov 13 08:32:06.263056 extend-filesystems[1584]: Resized partition /dev/vda9
Nov 13 08:32:06.171888 systemd[1]: motdgen.service: Deactivated successfully.
Nov 13 08:32:06.288323 update_engine[1602]: I20241113 08:32:06.279016 1602 update_check_scheduler.cc:74] Next update check in 4m22s
Nov 13 08:32:06.172388 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 13 08:32:06.176337 (ntainerd)[1618]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 13 08:32:06.299593 extend-filesystems[1640]: resize2fs 1.47.1 (20-May-2024)
Nov 13 08:32:06.322154 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 13 08:32:06.191514 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 13 08:32:06.191573 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 13 08:32:06.195795 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 13 08:32:06.331424 jq[1625]: true
Nov 13 08:32:06.197235 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 13 08:32:06.197417 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 13 08:32:06.248548 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 13 08:32:06.264174 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 13 08:32:06.295701 systemd[1]: Started update-engine.service - Update Engine.
Nov 13 08:32:06.345197 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 13 08:32:06.359259 tar[1614]: linux-amd64/helm
Nov 13 08:32:06.349226 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 13 08:32:06.355377 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 13 08:32:06.435557 systemd-logind[1597]: New seat seat0.
Nov 13 08:32:06.478434 systemd-logind[1597]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 13 08:32:06.478462 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 13 08:32:06.480105 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 13 08:32:06.645857 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1652)
Nov 13 08:32:06.651413 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 13 08:32:06.654014 bash[1670]: Updated "/home/core/.ssh/authorized_keys"
Nov 13 08:32:06.658326 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 13 08:32:06.725144 extend-filesystems[1640]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 13 08:32:06.725144 extend-filesystems[1640]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 13 08:32:06.725144 extend-filesystems[1640]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 13 08:32:06.762496 extend-filesystems[1584]: Resized filesystem in /dev/vda9
Nov 13 08:32:06.762496 extend-filesystems[1584]: Found vdb
Nov 13 08:32:06.744478 systemd[1]: Starting sshkeys.service...
Nov 13 08:32:06.773541 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 13 08:32:06.774037 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 13 08:32:06.841508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 13 08:32:06.857162 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 13 08:32:06.965047 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 13 08:32:06.983851 sshd_keygen[1636]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 13 08:32:07.045600 coreos-metadata[1690]: Nov 13 08:32:07.044 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:32:07.063437 coreos-metadata[1690]: Nov 13 08:32:07.061 INFO Fetch successful
Nov 13 08:32:07.081089 unknown[1690]: wrote ssh authorized keys file for user: core
Nov 13 08:32:07.147058 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 13 08:32:07.171003 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 13 08:32:07.188128 update-ssh-keys[1702]: Updated "/home/core/.ssh/authorized_keys"
Nov 13 08:32:07.191339 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 13 08:32:07.196770 systemd[1]: Finished sshkeys.service.
Nov 13 08:32:07.239578 systemd[1]: issuegen.service: Deactivated successfully.
Nov 13 08:32:07.240089 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 13 08:32:07.265074 containerd[1618]: time="2024-11-13T08:32:07.262617945Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Nov 13 08:32:07.268524 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 13 08:32:07.353695 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 13 08:32:07.374057 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 13 08:32:07.392388 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 13 08:32:07.394591 systemd[1]: Reached target getty.target - Login Prompts.
Nov 13 08:32:07.425276 containerd[1618]: time="2024-11-13T08:32:07.424120963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429336920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429416235Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429449776Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429739947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429772238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429874239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.429896416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.430275107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.430307095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.430333547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431204 containerd[1618]: time="2024-11-13T08:32:07.430349379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.431783 containerd[1618]: time="2024-11-13T08:32:07.430491290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.434069 containerd[1618]: time="2024-11-13T08:32:07.430930825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 13 08:32:07.434855 containerd[1618]: time="2024-11-13T08:32:07.434763683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 13 08:32:07.435471 containerd[1618]: time="2024-11-13T08:32:07.435423033Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 13 08:32:07.437080 containerd[1618]: time="2024-11-13T08:32:07.435938812Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 13 08:32:07.437080 containerd[1618]: time="2024-11-13T08:32:07.436081385Z" level=info msg="metadata content store policy set" policy=shared
Nov 13 08:32:07.453824 containerd[1618]: time="2024-11-13T08:32:07.453753750Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 13 08:32:07.455992 containerd[1618]: time="2024-11-13T08:32:07.454861795Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 13 08:32:07.455992 containerd[1618]: time="2024-11-13T08:32:07.455536754Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 13 08:32:07.455992 containerd[1618]: time="2024-11-13T08:32:07.455578531Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 13 08:32:07.455992 containerd[1618]: time="2024-11-13T08:32:07.455605992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 13 08:32:07.455992 containerd[1618]: time="2024-11-13T08:32:07.455905374Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 13 08:32:07.463495 containerd[1618]: time="2024-11-13T08:32:07.463387324Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 13 08:32:07.468445 containerd[1618]: time="2024-11-13T08:32:07.467594477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 13 08:32:07.468445 containerd[1618]: time="2024-11-13T08:32:07.467789867Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469188236Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469253243Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469279882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469303208Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469328276Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469367122Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469390091Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469412439Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469433041Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469469230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469491183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1 Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469509518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469529775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471101 containerd[1618]: time="2024-11-13T08:32:07.469548079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469569027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469589005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469610864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469637558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469661454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469678759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469695450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469714252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469778727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469818064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469850089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469871245Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.469972815Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 13 08:32:07.471691 containerd[1618]: time="2024-11-13T08:32:07.470001419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 13 08:32:07.472139 containerd[1618]: time="2024-11-13T08:32:07.470017999Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 13 08:32:07.472139 containerd[1618]: time="2024-11-13T08:32:07.470036002Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 13 08:32:07.472139 containerd[1618]: time="2024-11-13T08:32:07.470052094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.472139 containerd[1618]: time="2024-11-13T08:32:07.470073228Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 13 08:32:07.472139 containerd[1618]: time="2024-11-13T08:32:07.470091731Z" level=info msg="NRI interface is disabled by configuration." Nov 13 08:32:07.472139 containerd[1618]: time="2024-11-13T08:32:07.470107510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 13 08:32:07.472328 containerd[1618]: time="2024-11-13T08:32:07.470714769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 13 08:32:07.472328 containerd[1618]: time="2024-11-13T08:32:07.470865789Z" level=info msg="Connect containerd service" Nov 13 08:32:07.476730 containerd[1618]: time="2024-11-13T08:32:07.476065939Z" level=info msg="using legacy CRI server" Nov 13 08:32:07.476730 containerd[1618]: time="2024-11-13T08:32:07.476128239Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 13 08:32:07.476730 containerd[1618]: time="2024-11-13T08:32:07.476317315Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.477572772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478184280Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478283260Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478369897Z" level=info msg="Start subscribing containerd event" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478442247Z" level=info msg="Start recovering state" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478572590Z" level=info msg="Start event monitor" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478589758Z" level=info msg="Start snapshots syncer" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478606125Z" level=info msg="Start cni network conf syncer for default" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478618912Z" level=info msg="Start streaming server" Nov 13 08:32:07.480162 containerd[1618]: time="2024-11-13T08:32:07.478760188Z" level=info msg="containerd successfully booted in 0.227374s" Nov 13 08:32:07.479332 systemd[1]: Started containerd.service - containerd container runtime. Nov 13 08:32:07.924439 tar[1614]: linux-amd64/LICENSE Nov 13 08:32:07.925408 tar[1614]: linux-amd64/README.md Nov 13 08:32:07.968230 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 13 08:32:08.403440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:32:08.408424 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 13 08:32:08.414615 systemd[1]: Startup finished in 7.284s (kernel) + 8.371s (userspace) = 15.656s. 
Nov 13 08:32:08.430751 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 13 08:32:09.438286 kubelet[1739]: E1113 08:32:09.438083    1739 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 13 08:32:09.441362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 08:32:09.441645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 13 08:32:14.490043 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 13 08:32:14.499445 systemd[1]: Started sshd@0-159.223.193.8:22-139.178.89.65:52162.service - OpenSSH per-connection server daemon (139.178.89.65:52162).
Nov 13 08:32:14.602518 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 52162 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:14.606161 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:14.623624 systemd-logind[1597]: New session 1 of user core.
Nov 13 08:32:14.625577 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 13 08:32:14.631491 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 13 08:32:14.667250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 13 08:32:14.679451 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 13 08:32:14.697565 (systemd)[1757]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 13 08:32:14.819840 systemd[1757]: Queued start job for default target default.target.
Nov 13 08:32:14.820513 systemd[1757]: Created slice app.slice - User Application Slice.
Nov 13 08:32:14.820557 systemd[1757]: Reached target paths.target - Paths.
Nov 13 08:32:14.820573 systemd[1757]: Reached target timers.target - Timers.
Nov 13 08:32:14.827226 systemd[1757]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 13 08:32:14.838843 systemd[1757]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 13 08:32:14.838944 systemd[1757]: Reached target sockets.target - Sockets.
Nov 13 08:32:14.838988 systemd[1757]: Reached target basic.target - Basic System.
Nov 13 08:32:14.839064 systemd[1757]: Reached target default.target - Main User Target.
Nov 13 08:32:14.839100 systemd[1757]: Startup finished in 131ms.
Nov 13 08:32:14.839660 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 13 08:32:14.853778 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 13 08:32:14.925445 systemd[1]: Started sshd@1-159.223.193.8:22-139.178.89.65:52168.service - OpenSSH per-connection server daemon (139.178.89.65:52168).
Nov 13 08:32:14.994372 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 52168 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:14.996919 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:15.004901 systemd-logind[1597]: New session 2 of user core.
Nov 13 08:32:15.012629 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 13 08:32:15.085852 sshd[1772]: Connection closed by 139.178.89.65 port 52168
Nov 13 08:32:15.085465 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Nov 13 08:32:15.103601 systemd[1]: Started sshd@2-159.223.193.8:22-139.178.89.65:52184.service - OpenSSH per-connection server daemon (139.178.89.65:52184).
Nov 13 08:32:15.104309 systemd[1]: sshd@1-159.223.193.8:22-139.178.89.65:52168.service: Deactivated successfully.
Nov 13 08:32:15.106434 systemd[1]: session-2.scope: Deactivated successfully.
Nov 13 08:32:15.108277 systemd-logind[1597]: Session 2 logged out. Waiting for processes to exit.
Nov 13 08:32:15.111341 systemd-logind[1597]: Removed session 2.
Nov 13 08:32:15.166397 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 52184 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:15.168679 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:15.178683 systemd-logind[1597]: New session 3 of user core.
Nov 13 08:32:15.184551 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 13 08:32:15.247536 sshd[1780]: Connection closed by 139.178.89.65 port 52184
Nov 13 08:32:15.248264 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Nov 13 08:32:15.260469 systemd[1]: Started sshd@3-159.223.193.8:22-139.178.89.65:52186.service - OpenSSH per-connection server daemon (139.178.89.65:52186).
Nov 13 08:32:15.262629 systemd[1]: sshd@2-159.223.193.8:22-139.178.89.65:52184.service: Deactivated successfully.
Nov 13 08:32:15.267698 systemd[1]: session-3.scope: Deactivated successfully.
Nov 13 08:32:15.269276 systemd-logind[1597]: Session 3 logged out. Waiting for processes to exit.
Nov 13 08:32:15.273372 systemd-logind[1597]: Removed session 3.
Nov 13 08:32:15.322932 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 52186 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:15.325446 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:15.333479 systemd-logind[1597]: New session 4 of user core.
Nov 13 08:32:15.346036 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 13 08:32:15.418329 sshd[1788]: Connection closed by 139.178.89.65 port 52186
Nov 13 08:32:15.419563 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Nov 13 08:32:15.445637 systemd[1]: Started sshd@4-159.223.193.8:22-139.178.89.65:52196.service - OpenSSH per-connection server daemon (139.178.89.65:52196).
Nov 13 08:32:15.448468 systemd[1]: sshd@3-159.223.193.8:22-139.178.89.65:52186.service: Deactivated successfully.
Nov 13 08:32:15.453674 systemd[1]: session-4.scope: Deactivated successfully.
Nov 13 08:32:15.456743 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit.
Nov 13 08:32:15.458612 systemd-logind[1597]: Removed session 4.
Nov 13 08:32:15.508327 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 52196 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:15.510280 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:15.518322 systemd-logind[1597]: New session 5 of user core.
Nov 13 08:32:15.529667 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 13 08:32:15.607231 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 13 08:32:15.607622 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 08:32:15.626759 sudo[1797]: pam_unix(sudo:session): session closed for user root
Nov 13 08:32:15.631147 sshd[1796]: Connection closed by 139.178.89.65 port 52196
Nov 13 08:32:15.632293 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Nov 13 08:32:15.642624 systemd[1]: Started sshd@5-159.223.193.8:22-139.178.89.65:52212.service - OpenSSH per-connection server daemon (139.178.89.65:52212).
Nov 13 08:32:15.643305 systemd[1]: sshd@4-159.223.193.8:22-139.178.89.65:52196.service: Deactivated successfully.
Nov 13 08:32:15.649935 systemd[1]: session-5.scope: Deactivated successfully.
Nov 13 08:32:15.652814 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit.
Nov 13 08:32:15.658742 systemd-logind[1597]: Removed session 5.
Nov 13 08:32:15.704723 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 52212 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:15.706806 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:15.713541 systemd-logind[1597]: New session 6 of user core.
Nov 13 08:32:15.721803 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 13 08:32:15.788109 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 13 08:32:15.789468 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 08:32:15.796201 sudo[1807]: pam_unix(sudo:session): session closed for user root
Nov 13 08:32:15.805613 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 13 08:32:15.806081 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 08:32:15.827721 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 13 08:32:15.879306 augenrules[1829]: No rules
Nov 13 08:32:15.880233 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 13 08:32:15.880942 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 13 08:32:15.884299 sudo[1806]: pam_unix(sudo:session): session closed for user root
Nov 13 08:32:15.889440 sshd[1805]: Connection closed by 139.178.89.65 port 52212
Nov 13 08:32:15.890582 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
Nov 13 08:32:15.901441 systemd[1]: Started sshd@6-159.223.193.8:22-139.178.89.65:52226.service - OpenSSH per-connection server daemon (139.178.89.65:52226).
Nov 13 08:32:15.902874 systemd[1]: sshd@5-159.223.193.8:22-139.178.89.65:52212.service: Deactivated successfully.
Nov 13 08:32:15.911504 systemd[1]: session-6.scope: Deactivated successfully.
Nov 13 08:32:15.914286 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit.
Nov 13 08:32:15.916677 systemd-logind[1597]: Removed session 6.
Nov 13 08:32:15.975683 sshd[1835]: Accepted publickey for core from 139.178.89.65 port 52226 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:32:15.977534 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:32:15.986234 systemd-logind[1597]: New session 7 of user core.
Nov 13 08:32:15.993471 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 13 08:32:16.061601 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 13 08:32:16.062120 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 13 08:32:16.695998 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 13 08:32:16.707699 (dockerd)[1860]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 13 08:32:17.245124 dockerd[1860]: time="2024-11-13T08:32:17.244999178Z" level=info msg="Starting up"
Nov 13 08:32:17.393652 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4026845956-merged.mount: Deactivated successfully.
Nov 13 08:32:17.622088 dockerd[1860]: time="2024-11-13T08:32:17.621662238Z" level=info msg="Loading containers: start."
Nov 13 08:32:17.889480 kernel: Initializing XFRM netlink socket
Nov 13 08:32:18.009332 systemd-networkd[1222]: docker0: Link UP
Nov 13 08:32:18.051281 dockerd[1860]: time="2024-11-13T08:32:18.051225492Z" level=info msg="Loading containers: done."
Nov 13 08:32:18.084801 dockerd[1860]: time="2024-11-13T08:32:18.084727744Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 13 08:32:18.085403 dockerd[1860]: time="2024-11-13T08:32:18.085373965Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Nov 13 08:32:18.085696 dockerd[1860]: time="2024-11-13T08:32:18.085681133Z" level=info msg="Daemon has completed initialization"
Nov 13 08:32:18.146295 dockerd[1860]: time="2024-11-13T08:32:18.146041992Z" level=info msg="API listen on /run/docker.sock"
Nov 13 08:32:18.146633 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 13 08:32:18.389815 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2593774587-merged.mount: Deactivated successfully.
Nov 13 08:32:19.289425 containerd[1618]: time="2024-11-13T08:32:19.289347475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\""
Nov 13 08:32:19.628425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 13 08:32:19.636345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:32:19.839315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:32:19.844240 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 13 08:32:19.952137 kubelet[2072]: E1113 08:32:19.951062    2072 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 13 08:32:19.957487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 08:32:19.957786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 13 08:32:20.112294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582457866.mount: Deactivated successfully.
Nov 13 08:32:21.980105 containerd[1618]: time="2024-11-13T08:32:21.980004095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:21.982337 containerd[1618]: time="2024-11-13T08:32:21.982004340Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799"
Nov 13 08:32:21.982862 containerd[1618]: time="2024-11-13T08:32:21.982773990Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:21.991001 containerd[1618]: time="2024-11-13T08:32:21.989695513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:21.995174 containerd[1618]: time="2024-11-13T08:32:21.995095729Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.705666336s"
Nov 13 08:32:21.995447 containerd[1618]: time="2024-11-13T08:32:21.995416123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\""
Nov 13 08:32:22.049491 containerd[1618]: time="2024-11-13T08:32:22.049436264Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\""
Nov 13 08:32:24.451416 containerd[1618]: time="2024-11-13T08:32:24.451302126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:24.453753 containerd[1618]: time="2024-11-13T08:32:24.453683660Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299"
Nov 13 08:32:24.454605 containerd[1618]: time="2024-11-13T08:32:24.454530140Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:24.461017 containerd[1618]: time="2024-11-13T08:32:24.460625143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:24.465761 containerd[1618]: time="2024-11-13T08:32:24.463311162Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.413806796s"
Nov 13 08:32:24.465761 containerd[1618]: time="2024-11-13T08:32:24.463387631Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\""
Nov 13 08:32:24.509791 containerd[1618]: time="2024-11-13T08:32:24.509749634Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\""
Nov 13 08:32:25.876381 containerd[1618]: time="2024-11-13T08:32:25.876299031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:25.878607 containerd[1618]: time="2024-11-13T08:32:25.878546089Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660"
Nov 13 08:32:25.880247 containerd[1618]: time="2024-11-13T08:32:25.880203160Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:25.883565 containerd[1618]: time="2024-11-13T08:32:25.883498969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:25.886048 containerd[1618]: time="2024-11-13T08:32:25.885992282Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.3759558s"
Nov 13 08:32:25.886048 containerd[1618]: time="2024-11-13T08:32:25.886055654Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\""
Nov 13 08:32:25.913823 containerd[1618]: time="2024-11-13T08:32:25.913768289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\""
Nov 13 08:32:25.916106 systemd-resolved[1477]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Nov 13 08:32:27.183587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508265586.mount: Deactivated successfully.
Nov 13 08:32:27.897811 containerd[1618]: time="2024-11-13T08:32:27.897730310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:27.900181 containerd[1618]: time="2024-11-13T08:32:27.900104085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816"
Nov 13 08:32:27.900729 containerd[1618]: time="2024-11-13T08:32:27.900667383Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:27.903698 containerd[1618]: time="2024-11-13T08:32:27.903596209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:27.904523 containerd[1618]: time="2024-11-13T08:32:27.904471890Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.990658675s"
Nov 13 08:32:27.904523 containerd[1618]: time="2024-11-13T08:32:27.904521216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\""
Nov 13 08:32:27.937090 containerd[1618]: time="2024-11-13T08:32:27.937033842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 13 08:32:28.504847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132015213.mount: Deactivated successfully.
Nov 13 08:32:29.026200 systemd-resolved[1477]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Nov 13 08:32:29.776284 containerd[1618]: time="2024-11-13T08:32:29.776185407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:29.778180 containerd[1618]: time="2024-11-13T08:32:29.778100320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 13 08:32:29.780208 containerd[1618]: time="2024-11-13T08:32:29.780146103Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:29.784716 containerd[1618]: time="2024-11-13T08:32:29.784628063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:29.786071 containerd[1618]: time="2024-11-13T08:32:29.786012510Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.848667662s" Nov 13 08:32:29.786071 containerd[1618]: time="2024-11-13T08:32:29.786074974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 13 08:32:29.816642 containerd[1618]: time="2024-11-13T08:32:29.816561729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 13 08:32:30.128364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 13 08:32:30.137371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:32:30.347622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:32:30.349503 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:32:30.472023 kubelet[2228]: E1113 08:32:30.471456 2228 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:32:30.474436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:32:30.475006 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:32:30.508284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138409296.mount: Deactivated successfully. 
Nov 13 08:32:30.521033 containerd[1618]: time="2024-11-13T08:32:30.520671171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:30.522388 containerd[1618]: time="2024-11-13T08:32:30.522198743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Nov 13 08:32:30.525742 containerd[1618]: time="2024-11-13T08:32:30.523794053Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:30.527328 containerd[1618]: time="2024-11-13T08:32:30.527264644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:30.528362 containerd[1618]: time="2024-11-13T08:32:30.528314543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 711.671128ms"
Nov 13 08:32:30.528559 containerd[1618]: time="2024-11-13T08:32:30.528537637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Nov 13 08:32:30.573103 containerd[1618]: time="2024-11-13T08:32:30.573054415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Nov 13 08:32:31.159558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029097681.mount: Deactivated successfully.
Nov 13 08:32:33.930996 containerd[1618]: time="2024-11-13T08:32:33.929327000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:33.932001 containerd[1618]: time="2024-11-13T08:32:33.931881948Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Nov 13 08:32:33.932885 containerd[1618]: time="2024-11-13T08:32:33.932799329Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:33.940129 containerd[1618]: time="2024-11-13T08:32:33.940072188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:32:33.942864 containerd[1618]: time="2024-11-13T08:32:33.942576760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.369103495s"
Nov 13 08:32:33.943213 containerd[1618]: time="2024-11-13T08:32:33.943171949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Nov 13 08:32:37.715188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:32:37.726366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:32:37.758215 systemd[1]: Reloading requested from client PID 2359 ('systemctl') (unit session-7.scope)...
Nov 13 08:32:37.758245 systemd[1]: Reloading...
Nov 13 08:32:37.907005 zram_generator::config[2399]: No configuration found.
Nov 13 08:32:38.075677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:32:38.151517 systemd[1]: Reloading finished in 392 ms.
Nov 13 08:32:38.218052 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 13 08:32:38.218142 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 13 08:32:38.218489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:32:38.228597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:32:38.393232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:32:38.411072 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 13 08:32:38.496690 kubelet[2464]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 08:32:38.496690 kubelet[2464]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 13 08:32:38.496690 kubelet[2464]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 08:32:38.497387 kubelet[2464]: I1113 08:32:38.496724 2464 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 13 08:32:38.828620 kubelet[2464]: I1113 08:32:38.828394 2464 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 13 08:32:38.828620 kubelet[2464]: I1113 08:32:38.828471 2464 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 13 08:32:38.829651 kubelet[2464]: I1113 08:32:38.829395 2464 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 13 08:32:38.866276 kubelet[2464]: E1113 08:32:38.866185 2464 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://159.223.193.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.870174 kubelet[2464]: I1113 08:32:38.869971 2464 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 13 08:32:38.890189 kubelet[2464]: I1113 08:32:38.890143 2464 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 13 08:32:38.892712 kubelet[2464]: I1113 08:32:38.892600 2464 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 13 08:32:38.893969 kubelet[2464]: I1113 08:32:38.893870 2464 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 13 08:32:38.894209 kubelet[2464]: I1113 08:32:38.893989 2464 topology_manager.go:138] "Creating topology manager with none policy"
Nov 13 08:32:38.894209 kubelet[2464]: I1113 08:32:38.894004 2464 container_manager_linux.go:301] "Creating device plugin manager"
Nov 13 08:32:38.894209 kubelet[2464]: I1113 08:32:38.894168 2464 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:32:38.894384 kubelet[2464]: I1113 08:32:38.894349 2464 kubelet.go:396] "Attempting to sync node with API server"
Nov 13 08:32:38.894384 kubelet[2464]: I1113 08:32:38.894382 2464 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 13 08:32:38.894450 kubelet[2464]: I1113 08:32:38.894421 2464 kubelet.go:312] "Adding apiserver pod source"
Nov 13 08:32:38.894450 kubelet[2464]: I1113 08:32:38.894440 2464 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 13 08:32:38.902365 kubelet[2464]: W1113 08:32:38.902297 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://159.223.193.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-e-2bf6127ade&limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.903076 kubelet[2464]: E1113 08:32:38.903039 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.223.193.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-e-2bf6127ade&limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.904244 kubelet[2464]: I1113 08:32:38.904218 2464 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 13 08:32:38.906231 kubelet[2464]: W1113 08:32:38.906079 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://159.223.193.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.906231 kubelet[2464]: E1113 08:32:38.906153 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://159.223.193.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.910611 kubelet[2464]: I1113 08:32:38.910536 2464 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 13 08:32:38.912430 kubelet[2464]: W1113 08:32:38.912358 2464 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 13 08:32:38.913649 kubelet[2464]: I1113 08:32:38.913556 2464 server.go:1256] "Started kubelet"
Nov 13 08:32:38.915001 kubelet[2464]: I1113 08:32:38.914044 2464 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 13 08:32:38.915664 kubelet[2464]: I1113 08:32:38.915638 2464 server.go:461] "Adding debug handlers to kubelet server"
Nov 13 08:32:38.919809 kubelet[2464]: I1113 08:32:38.919767 2464 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 13 08:32:38.921847 kubelet[2464]: E1113 08:32:38.921793 2464 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.193.8:6443/api/v1/namespaces/default/events\": dial tcp 159.223.193.8:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.0.0-e-2bf6127ade.18077a088e550d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-e-2bf6127ade,UID:ci-4152.0.0-e-2bf6127ade,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-e-2bf6127ade,},FirstTimestamp:2024-11-13 08:32:38.91351077 +0000 UTC m=+0.496250399,LastTimestamp:2024-11-13 08:32:38.91351077 +0000 UTC m=+0.496250399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-e-2bf6127ade,}"
Nov 13 08:32:38.921847 kubelet[2464]: I1113 08:32:38.919774 2464 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 13 08:32:38.923205 kubelet[2464]: I1113 08:32:38.922182 2464 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 13 08:32:38.924274 kubelet[2464]: I1113 08:32:38.924239 2464 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 13 08:32:38.924384 kubelet[2464]: I1113 08:32:38.924374 2464 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 13 08:32:38.924461 kubelet[2464]: I1113 08:32:38.924448 2464 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 13 08:32:38.925607 kubelet[2464]: W1113 08:32:38.924885 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://159.223.193.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.925607 kubelet[2464]: E1113 08:32:38.924938 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.223.193.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.925607 kubelet[2464]: E1113 08:32:38.925499 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.193.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-e-2bf6127ade?timeout=10s\": dial tcp 159.223.193.8:6443: connect: connection refused" interval="200ms"
Nov 13 08:32:38.939578 kubelet[2464]: I1113 08:32:38.939493 2464 factory.go:221] Registration of the containerd container factory successfully
Nov 13 08:32:38.939867 kubelet[2464]: I1113 08:32:38.939853 2464 factory.go:221] Registration of the systemd container factory successfully
Nov 13 08:32:38.940119 kubelet[2464]: I1113 08:32:38.940094 2464 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 13 08:32:38.966046 kubelet[2464]: I1113 08:32:38.965245 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 13 08:32:38.972107 kubelet[2464]: I1113 08:32:38.972048 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 13 08:32:38.972348 kubelet[2464]: I1113 08:32:38.972336 2464 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 13 08:32:38.972450 kubelet[2464]: I1113 08:32:38.972439 2464 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 13 08:32:38.972662 kubelet[2464]: E1113 08:32:38.972643 2464 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 13 08:32:38.979472 kubelet[2464]: W1113 08:32:38.979281 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://159.223.193.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.979682 kubelet[2464]: E1113 08:32:38.979670 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.223.193.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:38.987103 kubelet[2464]: I1113 08:32:38.986992 2464 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 13 08:32:38.987373 kubelet[2464]: I1113 08:32:38.987354 2464 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 13 08:32:38.987467 kubelet[2464]: I1113 08:32:38.987459 2464 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:32:38.992169 kubelet[2464]: I1113 08:32:38.992128 2464 policy_none.go:49] "None policy: Start"
Nov 13 08:32:38.993889 kubelet[2464]: I1113 08:32:38.993456 2464 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 13 08:32:38.993889 kubelet[2464]: I1113 08:32:38.993491 2464 state_mem.go:35] "Initializing new in-memory state store"
Nov 13 08:32:39.017925 kubelet[2464]: I1113 08:32:39.017866 2464 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 13 08:32:39.018363 kubelet[2464]: I1113 08:32:39.018319 2464 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 13 08:32:39.024007 kubelet[2464]: E1113 08:32:39.023863 2464 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.0.0-e-2bf6127ade\" not found"
Nov 13 08:32:39.026244 kubelet[2464]: I1113 08:32:39.025676 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.026923 kubelet[2464]: E1113 08:32:39.026892 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.193.8:6443/api/v1/nodes\": dial tcp 159.223.193.8:6443: connect: connection refused" node="ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.073466 kubelet[2464]: I1113 08:32:39.073416 2464 topology_manager.go:215] "Topology Admit Handler" podUID="3dc06ac4ba17095980b7ca937495374b" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.077687 kubelet[2464]: I1113 08:32:39.077617 2464 topology_manager.go:215] "Topology Admit Handler" podUID="0267b01217ccde72c2f9ab75eaeaf58e" podNamespace="kube-system" podName="kube-scheduler-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.080361 kubelet[2464]: I1113 08:32:39.079298 2464 topology_manager.go:215] "Topology Admit Handler" podUID="d37d8c349dde13ce97dcfacdc5991809" podNamespace="kube-system" podName="kube-apiserver-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.126016 kubelet[2464]: I1113 08:32:39.125945 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d37d8c349dde13ce97dcfacdc5991809-k8s-certs\") pod \"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" (UID: \"d37d8c349dde13ce97dcfacdc5991809\") " pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.126272 kubelet[2464]: I1113 08:32:39.126243 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d37d8c349dde13ce97dcfacdc5991809-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" (UID: \"d37d8c349dde13ce97dcfacdc5991809\") " pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.126506 kubelet[2464]: E1113 08:32:39.126467 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.193.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-e-2bf6127ade?timeout=10s\": dial tcp 159.223.193.8:6443: connect: connection refused" interval="400ms"
Nov 13 08:32:39.126627 kubelet[2464]: I1113 08:32:39.126484 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-ca-certs\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.127041 kubelet[2464]: I1113 08:32:39.127017 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.127145 kubelet[2464]: I1113 08:32:39.127136 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-k8s-certs\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.127292 kubelet[2464]: I1113 08:32:39.127283 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d37d8c349dde13ce97dcfacdc5991809-ca-certs\") pod \"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" (UID: \"d37d8c349dde13ce97dcfacdc5991809\") " pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.127361 kubelet[2464]: I1113 08:32:39.127355 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-kubeconfig\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.127429 kubelet[2464]: I1113 08:32:39.127422 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.127489 kubelet[2464]: I1113 08:32:39.127482 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0267b01217ccde72c2f9ab75eaeaf58e-kubeconfig\") pod \"kube-scheduler-ci-4152.0.0-e-2bf6127ade\" (UID: \"0267b01217ccde72c2f9ab75eaeaf58e\") " pod="kube-system/kube-scheduler-ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.228986 kubelet[2464]: I1113 08:32:39.228838 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.229438 kubelet[2464]: E1113 08:32:39.229409 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.193.8:6443/api/v1/nodes\": dial tcp 159.223.193.8:6443: connect: connection refused" node="ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.384915 kubelet[2464]: E1113 08:32:39.384846 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:32:39.385994 containerd[1618]: time="2024-11-13T08:32:39.385756439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.0.0-e-2bf6127ade,Uid:3dc06ac4ba17095980b7ca937495374b,Namespace:kube-system,Attempt:0,}"
Nov 13 08:32:39.389236 kubelet[2464]: E1113 08:32:39.388265 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:32:39.389402 containerd[1618]: time="2024-11-13T08:32:39.388824889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.0.0-e-2bf6127ade,Uid:0267b01217ccde72c2f9ab75eaeaf58e,Namespace:kube-system,Attempt:0,}"
Nov 13 08:32:39.389389 systemd-resolved[1477]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Nov 13 08:32:39.395121 kubelet[2464]: E1113 08:32:39.394512 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:32:39.396080 containerd[1618]: time="2024-11-13T08:32:39.395698613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.0.0-e-2bf6127ade,Uid:d37d8c349dde13ce97dcfacdc5991809,Namespace:kube-system,Attempt:0,}"
Nov 13 08:32:39.528106 kubelet[2464]: E1113 08:32:39.528056 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.193.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-e-2bf6127ade?timeout=10s\": dial tcp 159.223.193.8:6443: connect: connection refused" interval="800ms"
Nov 13 08:32:39.633572 kubelet[2464]: I1113 08:32:39.633501 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.637018 kubelet[2464]: E1113 08:32:39.633939 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.193.8:6443/api/v1/nodes\": dial tcp 159.223.193.8:6443: connect: connection refused" node="ci-4152.0.0-e-2bf6127ade"
Nov 13 08:32:39.894193 kubelet[2464]: W1113 08:32:39.893940 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://159.223.193.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:39.895001 kubelet[2464]: E1113 08:32:39.894609 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.223.193.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:39.924119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772662474.mount: Deactivated successfully.
Nov 13 08:32:39.933431 containerd[1618]: time="2024-11-13T08:32:39.933328171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:32:39.936168 containerd[1618]: time="2024-11-13T08:32:39.936080565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:32:39.938281 containerd[1618]: time="2024-11-13T08:32:39.938202314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 13 08:32:39.939575 containerd[1618]: time="2024-11-13T08:32:39.939477254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 13 08:32:39.941615 containerd[1618]: time="2024-11-13T08:32:39.941556678Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:32:39.944018 containerd[1618]: time="2024-11-13T08:32:39.943325195Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:32:39.944018 containerd[1618]: time="2024-11-13T08:32:39.943470409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 13 08:32:39.945988 containerd[1618]: time="2024-11-13T08:32:39.945880992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:32:39.949869 containerd[1618]: time="2024-11-13T08:32:39.949625444Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.80057ms"
Nov 13 08:32:39.952007 containerd[1618]: time="2024-11-13T08:32:39.951924338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 566.035439ms"
Nov 13 08:32:39.952176 containerd[1618]: time="2024-11-13T08:32:39.952122049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.179157ms"
Nov 13 08:32:39.965533 kubelet[2464]: W1113 08:32:39.965426 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://159.223.193.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-e-2bf6127ade&limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused
Nov 13 08:32:39.965533 kubelet[2464]: E1113 08:32:39.965519 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.223.193.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-e-2bf6127ade&limit=500&resourceVersion=0": dial tcp 
159.223.193.8:6443: connect: connection refused Nov 13 08:32:40.044001 kubelet[2464]: W1113 08:32:40.043836 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://159.223.193.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused Nov 13 08:32:40.044001 kubelet[2464]: E1113 08:32:40.043924 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://159.223.193.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused Nov 13 08:32:40.152400 containerd[1618]: time="2024-11-13T08:32:40.150354020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:32:40.152400 containerd[1618]: time="2024-11-13T08:32:40.151659628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:32:40.152400 containerd[1618]: time="2024-11-13T08:32:40.151698435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:32:40.153566 containerd[1618]: time="2024-11-13T08:32:40.152638667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:32:40.154910 containerd[1618]: time="2024-11-13T08:32:40.154684814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:32:40.155196 containerd[1618]: time="2024-11-13T08:32:40.155135589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:32:40.155410 containerd[1618]: time="2024-11-13T08:32:40.155347494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:32:40.155724 containerd[1618]: time="2024-11-13T08:32:40.155669804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:32:40.164249 containerd[1618]: time="2024-11-13T08:32:40.164052352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:32:40.164249 containerd[1618]: time="2024-11-13T08:32:40.164187905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:32:40.164564 containerd[1618]: time="2024-11-13T08:32:40.164214799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:32:40.164564 containerd[1618]: time="2024-11-13T08:32:40.164376174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:32:40.306263 containerd[1618]: time="2024-11-13T08:32:40.305940005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.0.0-e-2bf6127ade,Uid:3dc06ac4ba17095980b7ca937495374b,Namespace:kube-system,Attempt:0,} returns sandbox id \"69a81f2a90b9d29e328af5bf9cf29042c2051055dd9c4ed7d12160e5fd091eb1\"" Nov 13 08:32:40.308017 kubelet[2464]: E1113 08:32:40.307984 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:40.320093 containerd[1618]: time="2024-11-13T08:32:40.319904810Z" level=info msg="CreateContainer within sandbox \"69a81f2a90b9d29e328af5bf9cf29042c2051055dd9c4ed7d12160e5fd091eb1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 13 08:32:40.330220 kubelet[2464]: E1113 08:32:40.330118 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.193.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-e-2bf6127ade?timeout=10s\": dial tcp 159.223.193.8:6443: connect: connection refused" interval="1.6s" Nov 13 08:32:40.336888 containerd[1618]: time="2024-11-13T08:32:40.335977662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.0.0-e-2bf6127ade,Uid:d37d8c349dde13ce97dcfacdc5991809,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c08f32dfb085bb3903c73e73720c7d12be8a1e76abc02adf60bd1b9c947f65e\"" Nov 13 08:32:40.342059 kubelet[2464]: E1113 08:32:40.342019 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:40.342722 containerd[1618]: time="2024-11-13T08:32:40.342666510Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4152.0.0-e-2bf6127ade,Uid:0267b01217ccde72c2f9ab75eaeaf58e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0905c7d5eafb306112006d1a6c5105a1d7ec4b14e83a716a65a0f318f56a9350\"" Nov 13 08:32:40.347629 kubelet[2464]: E1113 08:32:40.347175 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:40.351531 containerd[1618]: time="2024-11-13T08:32:40.351013674Z" level=info msg="CreateContainer within sandbox \"1c08f32dfb085bb3903c73e73720c7d12be8a1e76abc02adf60bd1b9c947f65e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 13 08:32:40.352136 containerd[1618]: time="2024-11-13T08:32:40.352072827Z" level=info msg="CreateContainer within sandbox \"0905c7d5eafb306112006d1a6c5105a1d7ec4b14e83a716a65a0f318f56a9350\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 13 08:32:40.384275 containerd[1618]: time="2024-11-13T08:32:40.384205625Z" level=info msg="CreateContainer within sandbox \"69a81f2a90b9d29e328af5bf9cf29042c2051055dd9c4ed7d12160e5fd091eb1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e775ba5b0ec0d66127a433816743229b081fb54f86fe3c123375fbe7b786fbb1\"" Nov 13 08:32:40.385225 containerd[1618]: time="2024-11-13T08:32:40.385183891Z" level=info msg="StartContainer for \"e775ba5b0ec0d66127a433816743229b081fb54f86fe3c123375fbe7b786fbb1\"" Nov 13 08:32:40.390027 containerd[1618]: time="2024-11-13T08:32:40.389779425Z" level=info msg="CreateContainer within sandbox \"1c08f32dfb085bb3903c73e73720c7d12be8a1e76abc02adf60bd1b9c947f65e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da85e6667f42e39085a4ca2f9e1336fb5337bafd1c2680b227d7a663f789ef08\"" Nov 13 08:32:40.391467 containerd[1618]: time="2024-11-13T08:32:40.391424218Z" level=info msg="StartContainer for 
\"da85e6667f42e39085a4ca2f9e1336fb5337bafd1c2680b227d7a663f789ef08\"" Nov 13 08:32:40.406459 containerd[1618]: time="2024-11-13T08:32:40.406228128Z" level=info msg="CreateContainer within sandbox \"0905c7d5eafb306112006d1a6c5105a1d7ec4b14e83a716a65a0f318f56a9350\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"469767889a990016f4885227bb7bbd7831c9db1e3a7b2e7bf8773e4b4019a004\"" Nov 13 08:32:40.407932 containerd[1618]: time="2024-11-13T08:32:40.407714016Z" level=info msg="StartContainer for \"469767889a990016f4885227bb7bbd7831c9db1e3a7b2e7bf8773e4b4019a004\"" Nov 13 08:32:40.437996 kubelet[2464]: I1113 08:32:40.435925 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:40.437996 kubelet[2464]: E1113 08:32:40.436853 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.223.193.8:6443/api/v1/nodes\": dial tcp 159.223.193.8:6443: connect: connection refused" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:40.542082 kubelet[2464]: W1113 08:32:40.541976 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://159.223.193.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused Nov 13 08:32:40.542082 kubelet[2464]: E1113 08:32:40.542064 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.223.193.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.193.8:6443: connect: connection refused Nov 13 08:32:40.544296 containerd[1618]: time="2024-11-13T08:32:40.544242594Z" level=info msg="StartContainer for \"e775ba5b0ec0d66127a433816743229b081fb54f86fe3c123375fbe7b786fbb1\" returns successfully" Nov 13 08:32:40.582107 containerd[1618]: time="2024-11-13T08:32:40.582030568Z" level=info 
msg="StartContainer for \"469767889a990016f4885227bb7bbd7831c9db1e3a7b2e7bf8773e4b4019a004\" returns successfully" Nov 13 08:32:40.586317 containerd[1618]: time="2024-11-13T08:32:40.586243227Z" level=info msg="StartContainer for \"da85e6667f42e39085a4ca2f9e1336fb5337bafd1c2680b227d7a663f789ef08\" returns successfully" Nov 13 08:32:40.917584 kubelet[2464]: E1113 08:32:40.917202 2464 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://159.223.193.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 159.223.193.8:6443: connect: connection refused Nov 13 08:32:40.997785 kubelet[2464]: E1113 08:32:40.997728 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:41.009764 kubelet[2464]: E1113 08:32:41.009720 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:41.020086 kubelet[2464]: E1113 08:32:41.019934 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:42.024578 kubelet[2464]: E1113 08:32:42.024415 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:42.040433 kubelet[2464]: I1113 08:32:42.039292 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:43.641012 kubelet[2464]: E1113 08:32:43.640031 2464 nodelease.go:49] "Failed to get node when trying to set owner ref 
to the node lease" err="nodes \"ci-4152.0.0-e-2bf6127ade\" not found" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:43.679379 kubelet[2464]: I1113 08:32:43.679094 2464 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:43.722554 kubelet[2464]: E1113 08:32:43.722340 2464 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.0.0-e-2bf6127ade.18077a088e550d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-e-2bf6127ade,UID:ci-4152.0.0-e-2bf6127ade,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-e-2bf6127ade,},FirstTimestamp:2024-11-13 08:32:38.91351077 +0000 UTC m=+0.496250399,LastTimestamp:2024-11-13 08:32:38.91351077 +0000 UTC m=+0.496250399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-e-2bf6127ade,}" Nov 13 08:32:43.800067 kubelet[2464]: E1113 08:32:43.798344 2464 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.0.0-e-2bf6127ade.18077a089264b200 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-e-2bf6127ade,UID:ci-4152.0.0-e-2bf6127ade,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4152.0.0-e-2bf6127ade status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-e-2bf6127ade,},FirstTimestamp:2024-11-13 08:32:38.9816448 +0000 UTC m=+0.564384279,LastTimestamp:2024-11-13 08:32:38.9816448 +0000 UTC m=+0.564384279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-e-2bf6127ade,}" Nov 13 08:32:43.901031 kubelet[2464]: I1113 08:32:43.900136 2464 apiserver.go:52] "Watching apiserver" Nov 13 08:32:43.924940 kubelet[2464]: I1113 08:32:43.924855 2464 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 13 08:32:44.250064 kubelet[2464]: E1113 08:32:44.249078 2464 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:44.250064 kubelet[2464]: E1113 08:32:44.249916 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:46.200693 kubelet[2464]: W1113 08:32:46.200603 2464 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 08:32:46.203084 kubelet[2464]: E1113 08:32:46.202893 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:47.044026 kubelet[2464]: E1113 08:32:47.043903 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:47.616804 kubelet[2464]: W1113 08:32:47.616543 2464 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 08:32:47.618444 kubelet[2464]: E1113 08:32:47.617014 2464 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:47.707173 systemd[1]: Reloading requested from client PID 2739 ('systemctl') (unit session-7.scope)... Nov 13 08:32:47.707204 systemd[1]: Reloading... Nov 13 08:32:47.836983 zram_generator::config[2778]: No configuration found. Nov 13 08:32:48.047177 kubelet[2464]: E1113 08:32:48.047112 2464 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:48.056086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 08:32:48.145205 systemd[1]: Reloading finished in 437 ms. Nov 13 08:32:48.200389 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:32:48.201698 kubelet[2464]: I1113 08:32:48.201261 2464 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 08:32:48.214432 systemd[1]: kubelet.service: Deactivated successfully. Nov 13 08:32:48.215287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:32:48.220552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:32:48.437273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:32:48.458613 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 08:32:48.552085 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 08:32:48.552085 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 13 08:32:48.552085 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 08:32:48.552631 kubelet[2839]: I1113 08:32:48.552077 2839 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 13 08:32:48.563002 kubelet[2839]: I1113 08:32:48.562015 2839 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 13 08:32:48.563002 kubelet[2839]: I1113 08:32:48.562055 2839 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 13 08:32:48.563002 kubelet[2839]: I1113 08:32:48.562410 2839 server.go:919] "Client rotation is on, will bootstrap in background" Nov 13 08:32:48.566911 kubelet[2839]: I1113 08:32:48.565887 2839 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 13 08:32:48.572767 kubelet[2839]: I1113 08:32:48.572462 2839 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 13 08:32:48.583108 kubelet[2839]: I1113 08:32:48.583073 2839 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 13 08:32:48.588030 kubelet[2839]: I1113 08:32:48.587043 2839 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 13 08:32:48.588030 kubelet[2839]: I1113 08:32:48.587472 2839 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 13 08:32:48.588030 kubelet[2839]: I1113 08:32:48.587520 2839 topology_manager.go:138] "Creating topology manager with none policy" Nov 13 08:32:48.588030 kubelet[2839]: I1113 08:32:48.587536 2839 container_manager_linux.go:301] "Creating device plugin manager" Nov 13 08:32:48.588030 kubelet[2839]: 
I1113 08:32:48.587593 2839 state_mem.go:36] "Initialized new in-memory state store" Nov 13 08:32:48.588030 kubelet[2839]: I1113 08:32:48.587726 2839 kubelet.go:396] "Attempting to sync node with API server" Nov 13 08:32:48.588488 kubelet[2839]: I1113 08:32:48.587744 2839 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 13 08:32:48.588488 kubelet[2839]: I1113 08:32:48.587773 2839 kubelet.go:312] "Adding apiserver pod source" Nov 13 08:32:48.588488 kubelet[2839]: I1113 08:32:48.587785 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 13 08:32:48.597123 kubelet[2839]: I1113 08:32:48.597065 2839 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 13 08:32:48.597623 kubelet[2839]: I1113 08:32:48.597587 2839 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 13 08:32:48.599594 kubelet[2839]: I1113 08:32:48.599555 2839 server.go:1256] "Started kubelet" Nov 13 08:32:48.617166 kubelet[2839]: I1113 08:32:48.617137 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 13 08:32:48.617894 kubelet[2839]: I1113 08:32:48.617872 2839 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 13 08:32:48.618275 kubelet[2839]: I1113 08:32:48.618098 2839 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 13 08:32:48.618563 kubelet[2839]: I1113 08:32:48.618535 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 13 08:32:48.624788 kubelet[2839]: I1113 08:32:48.624463 2839 server.go:461] "Adding debug handlers to kubelet server" Nov 13 08:32:48.629487 kubelet[2839]: I1113 08:32:48.629426 2839 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 13 08:32:48.648623 kubelet[2839]: I1113 08:32:48.648566 2839 desired_state_of_world_populator.go:151] 
"Desired state populator starts to run" Nov 13 08:32:48.648788 kubelet[2839]: I1113 08:32:48.648748 2839 reconciler_new.go:29] "Reconciler: start to sync state" Nov 13 08:32:48.658365 kubelet[2839]: I1113 08:32:48.656707 2839 factory.go:221] Registration of the systemd container factory successfully Nov 13 08:32:48.658365 kubelet[2839]: I1113 08:32:48.656830 2839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 13 08:32:48.658365 kubelet[2839]: E1113 08:32:48.657433 2839 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 13 08:32:48.660140 kubelet[2839]: I1113 08:32:48.660108 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 13 08:32:48.662277 kubelet[2839]: I1113 08:32:48.662248 2839 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 13 08:32:48.662472 kubelet[2839]: I1113 08:32:48.662461 2839 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 13 08:32:48.662564 kubelet[2839]: I1113 08:32:48.662553 2839 kubelet.go:2329] "Starting kubelet main sync loop" Nov 13 08:32:48.663126 kubelet[2839]: E1113 08:32:48.663092 2839 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 13 08:32:48.664710 kubelet[2839]: I1113 08:32:48.664499 2839 factory.go:221] Registration of the containerd container factory successfully Nov 13 08:32:48.707662 sudo[2867]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 13 08:32:48.710317 sudo[2867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 13 08:32:48.733513 kubelet[2839]: I1113 08:32:48.733477 2839 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.759414 kubelet[2839]: I1113 08:32:48.757623 2839 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.760661 kubelet[2839]: I1113 08:32:48.760614 2839 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.764823 kubelet[2839]: E1113 08:32:48.764575 2839 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 13 08:32:48.792993 kubelet[2839]: I1113 08:32:48.792945 2839 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 13 08:32:48.793940 kubelet[2839]: I1113 08:32:48.793435 2839 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 13 08:32:48.793940 kubelet[2839]: I1113 08:32:48.793463 2839 state_mem.go:36] "Initialized new in-memory state store" Nov 13 08:32:48.794439 kubelet[2839]: I1113 08:32:48.794348 2839 state_mem.go:88] "Updated default CPUSet" 
cpuSet="" Nov 13 08:32:48.795161 kubelet[2839]: I1113 08:32:48.795092 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 13 08:32:48.795161 kubelet[2839]: I1113 08:32:48.795117 2839 policy_none.go:49] "None policy: Start" Nov 13 08:32:48.797225 kubelet[2839]: I1113 08:32:48.797151 2839 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 13 08:32:48.797225 kubelet[2839]: I1113 08:32:48.797210 2839 state_mem.go:35] "Initializing new in-memory state store" Nov 13 08:32:48.797797 kubelet[2839]: I1113 08:32:48.797507 2839 state_mem.go:75] "Updated machine memory state" Nov 13 08:32:48.804394 kubelet[2839]: I1113 08:32:48.804355 2839 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 13 08:32:48.807113 kubelet[2839]: I1113 08:32:48.806638 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 13 08:32:48.965311 kubelet[2839]: I1113 08:32:48.965149 2839 topology_manager.go:215] "Topology Admit Handler" podUID="3dc06ac4ba17095980b7ca937495374b" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.965553 kubelet[2839]: I1113 08:32:48.965320 2839 topology_manager.go:215] "Topology Admit Handler" podUID="0267b01217ccde72c2f9ab75eaeaf58e" podNamespace="kube-system" podName="kube-scheduler-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.965553 kubelet[2839]: I1113 08:32:48.965375 2839 topology_manager.go:215] "Topology Admit Handler" podUID="d37d8c349dde13ce97dcfacdc5991809" podNamespace="kube-system" podName="kube-apiserver-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.991721 kubelet[2839]: W1113 08:32:48.991674 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 08:32:48.991891 kubelet[2839]: E1113 08:32:48.991805 2839 kubelet.go:1921] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" already exists" pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.992880 kubelet[2839]: W1113 08:32:48.992447 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 08:32:48.992880 kubelet[2839]: E1113 08:32:48.992533 2839 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152.0.0-e-2bf6127ade\" already exists" pod="kube-system/kube-scheduler-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:48.992880 kubelet[2839]: W1113 08:32:48.992684 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 13 08:32:49.051373 kubelet[2839]: I1113 08:32:49.051317 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-k8s-certs\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051373 kubelet[2839]: I1113 08:32:49.051381 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-kubeconfig\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051621 kubelet[2839]: I1113 08:32:49.051415 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0267b01217ccde72c2f9ab75eaeaf58e-kubeconfig\") pod \"kube-scheduler-ci-4152.0.0-e-2bf6127ade\" (UID: 
\"0267b01217ccde72c2f9ab75eaeaf58e\") " pod="kube-system/kube-scheduler-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051621 kubelet[2839]: I1113 08:32:49.051436 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d37d8c349dde13ce97dcfacdc5991809-ca-certs\") pod \"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" (UID: \"d37d8c349dde13ce97dcfacdc5991809\") " pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051621 kubelet[2839]: I1113 08:32:49.051457 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d37d8c349dde13ce97dcfacdc5991809-k8s-certs\") pod \"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" (UID: \"d37d8c349dde13ce97dcfacdc5991809\") " pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051621 kubelet[2839]: I1113 08:32:49.051476 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d37d8c349dde13ce97dcfacdc5991809-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.0.0-e-2bf6127ade\" (UID: \"d37d8c349dde13ce97dcfacdc5991809\") " pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051621 kubelet[2839]: I1113 08:32:49.051495 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-ca-certs\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051819 kubelet[2839]: I1113 08:32:49.051515 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.051819 kubelet[2839]: I1113 08:32:49.051580 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dc06ac4ba17095980b7ca937495374b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.0.0-e-2bf6127ade\" (UID: \"3dc06ac4ba17095980b7ca937495374b\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" Nov 13 08:32:49.298014 kubelet[2839]: E1113 08:32:49.295088 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:49.298014 kubelet[2839]: E1113 08:32:49.297144 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:49.298014 kubelet[2839]: E1113 08:32:49.297325 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:49.547615 sudo[2867]: pam_unix(sudo:session): session closed for user root Nov 13 08:32:49.591847 kubelet[2839]: I1113 08:32:49.590980 2839 apiserver.go:52] "Watching apiserver" Nov 13 08:32:49.649340 kubelet[2839]: I1113 08:32:49.649250 2839 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 13 08:32:49.716934 kubelet[2839]: E1113 08:32:49.716891 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:49.719763 kubelet[2839]: E1113 08:32:49.719719 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:49.721708 kubelet[2839]: E1113 08:32:49.721658 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:49.762648 kubelet[2839]: I1113 08:32:49.762516 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.0.0-e-2bf6127ade" podStartSLOduration=1.762375718 podStartE2EDuration="1.762375718s" podCreationTimestamp="2024-11-13 08:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:32:49.758604735 +0000 UTC m=+1.294156842" watchObservedRunningTime="2024-11-13 08:32:49.762375718 +0000 UTC m=+1.297927825" Nov 13 08:32:49.796867 kubelet[2839]: I1113 08:32:49.796817 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.0.0-e-2bf6127ade" podStartSLOduration=2.79675536 podStartE2EDuration="2.79675536s" podCreationTimestamp="2024-11-13 08:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:32:49.778355434 +0000 UTC m=+1.313907526" watchObservedRunningTime="2024-11-13 08:32:49.79675536 +0000 UTC m=+1.332307444" Nov 13 08:32:49.818626 kubelet[2839]: I1113 08:32:49.818577 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.0.0-e-2bf6127ade" podStartSLOduration=3.818522225 podStartE2EDuration="3.818522225s" podCreationTimestamp="2024-11-13 
08:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:32:49.797163734 +0000 UTC m=+1.332715816" watchObservedRunningTime="2024-11-13 08:32:49.818522225 +0000 UTC m=+1.354074289" Nov 13 08:32:50.720054 kubelet[2839]: E1113 08:32:50.718825 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:50.720054 kubelet[2839]: E1113 08:32:50.719874 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:50.751687 kubelet[2839]: E1113 08:32:50.751625 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:51.561719 sudo[1842]: pam_unix(sudo:session): session closed for user root Nov 13 08:32:51.566089 sshd[1841]: Connection closed by 139.178.89.65 port 52226 Nov 13 08:32:51.568045 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Nov 13 08:32:51.571779 systemd[1]: sshd@6-159.223.193.8:22-139.178.89.65:52226.service: Deactivated successfully. Nov 13 08:32:51.578132 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. Nov 13 08:32:51.578636 systemd[1]: session-7.scope: Deactivated successfully. Nov 13 08:32:51.580553 systemd-logind[1597]: Removed session 7. 
Nov 13 08:32:51.722063 kubelet[2839]: E1113 08:32:51.720797 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:51.722063 kubelet[2839]: E1113 08:32:51.720868 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:51.722063 kubelet[2839]: E1113 08:32:51.721893 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:51.758441 update_engine[1602]: I20241113 08:32:51.758306 1602 update_attempter.cc:509] Updating boot flags... Nov 13 08:32:51.808125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2914) Nov 13 08:32:51.889017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2912) Nov 13 08:32:52.723042 kubelet[2839]: E1113 08:32:52.722981 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:32:53.725710 kubelet[2839]: E1113 08:32:53.725669 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:01.083473 kubelet[2839]: I1113 08:33:01.083239 2839 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 13 08:33:01.086541 containerd[1618]: time="2024-11-13T08:33:01.084911796Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 13 08:33:01.087691 kubelet[2839]: I1113 08:33:01.085501 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 13 08:33:01.959059 kubelet[2839]: I1113 08:33:01.957487 2839 topology_manager.go:215] "Topology Admit Handler" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" podNamespace="kube-system" podName="cilium-rflwn" Nov 13 08:33:01.962974 kubelet[2839]: I1113 08:33:01.962440 2839 topology_manager.go:215] "Topology Admit Handler" podUID="71319eb4-94ae-4ccb-80ef-fe01918a4a59" podNamespace="kube-system" podName="kube-proxy-mggnp" Nov 13 08:33:02.055985 kubelet[2839]: I1113 08:33:02.054580 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-lib-modules\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.056584 kubelet[2839]: I1113 08:33:02.056349 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hostproc\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.056584 kubelet[2839]: I1113 08:33:02.056402 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71319eb4-94ae-4ccb-80ef-fe01918a4a59-kube-proxy\") pod \"kube-proxy-mggnp\" (UID: \"71319eb4-94ae-4ccb-80ef-fe01918a4a59\") " pod="kube-system/kube-proxy-mggnp" Nov 13 08:33:02.056584 kubelet[2839]: I1113 08:33:02.056448 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-bpf-maps\") pod \"cilium-rflwn\" (UID: 
\"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.056584 kubelet[2839]: I1113 08:33:02.056486 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprcb\" (UniqueName: \"kubernetes.io/projected/71319eb4-94ae-4ccb-80ef-fe01918a4a59-kube-api-access-nprcb\") pod \"kube-proxy-mggnp\" (UID: \"71319eb4-94ae-4ccb-80ef-fe01918a4a59\") " pod="kube-system/kube-proxy-mggnp" Nov 13 08:33:02.056584 kubelet[2839]: I1113 08:33:02.056538 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-net\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.059265 kubelet[2839]: I1113 08:33:02.056570 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-xtables-lock\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.059265 kubelet[2839]: I1113 08:33:02.059183 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-config-path\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.061118 kubelet[2839]: I1113 08:33:02.059538 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvprc\" (UniqueName: \"kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-kube-api-access-vvprc\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 
13 08:33:02.061118 kubelet[2839]: I1113 08:33:02.061057 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-etc-cni-netd\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.062126 kubelet[2839]: I1113 08:33:02.061839 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-kernel\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.062126 kubelet[2839]: I1113 08:33:02.061911 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71319eb4-94ae-4ccb-80ef-fe01918a4a59-xtables-lock\") pod \"kube-proxy-mggnp\" (UID: \"71319eb4-94ae-4ccb-80ef-fe01918a4a59\") " pod="kube-system/kube-proxy-mggnp" Nov 13 08:33:02.062126 kubelet[2839]: I1113 08:33:02.062021 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-cgroup\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.062126 kubelet[2839]: I1113 08:33:02.062054 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cni-path\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.063401 kubelet[2839]: I1113 08:33:02.063010 2839 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-clustermesh-secrets\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.063401 kubelet[2839]: I1113 08:33:02.063113 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hubble-tls\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.063826 kubelet[2839]: I1113 08:33:02.063637 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71319eb4-94ae-4ccb-80ef-fe01918a4a59-lib-modules\") pod \"kube-proxy-mggnp\" (UID: \"71319eb4-94ae-4ccb-80ef-fe01918a4a59\") " pod="kube-system/kube-proxy-mggnp" Nov 13 08:33:02.063826 kubelet[2839]: I1113 08:33:02.063712 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-run\") pod \"cilium-rflwn\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " pod="kube-system/cilium-rflwn" Nov 13 08:33:02.241403 kubelet[2839]: I1113 08:33:02.241111 2839 topology_manager.go:215] "Topology Admit Handler" podUID="eb2b0e36-7176-4cc7-8c51-4111fcd158de" podNamespace="kube-system" podName="cilium-operator-5cc964979-688kk" Nov 13 08:33:02.286905 kubelet[2839]: E1113 08:33:02.286485 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:02.295499 containerd[1618]: time="2024-11-13T08:33:02.294266287Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-mggnp,Uid:71319eb4-94ae-4ccb-80ef-fe01918a4a59,Namespace:kube-system,Attempt:0,}" Nov 13 08:33:02.298988 kubelet[2839]: E1113 08:33:02.297938 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:02.302203 containerd[1618]: time="2024-11-13T08:33:02.301980448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rflwn,Uid:a50798f5-9e4a-48fb-994c-1d7c4cb873d2,Namespace:kube-system,Attempt:0,}" Nov 13 08:33:02.367706 kubelet[2839]: I1113 08:33:02.367419 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdt5\" (UniqueName: \"kubernetes.io/projected/eb2b0e36-7176-4cc7-8c51-4111fcd158de-kube-api-access-dpdt5\") pod \"cilium-operator-5cc964979-688kk\" (UID: \"eb2b0e36-7176-4cc7-8c51-4111fcd158de\") " pod="kube-system/cilium-operator-5cc964979-688kk" Nov 13 08:33:02.367706 kubelet[2839]: I1113 08:33:02.367507 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb2b0e36-7176-4cc7-8c51-4111fcd158de-cilium-config-path\") pod \"cilium-operator-5cc964979-688kk\" (UID: \"eb2b0e36-7176-4cc7-8c51-4111fcd158de\") " pod="kube-system/cilium-operator-5cc964979-688kk" Nov 13 08:33:02.436033 containerd[1618]: time="2024-11-13T08:33:02.432830893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:33:02.436033 containerd[1618]: time="2024-11-13T08:33:02.432943126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:33:02.436033 containerd[1618]: time="2024-11-13T08:33:02.432998220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:33:02.436033 containerd[1618]: time="2024-11-13T08:33:02.433176534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:33:02.437628 containerd[1618]: time="2024-11-13T08:33:02.435529325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:33:02.437628 containerd[1618]: time="2024-11-13T08:33:02.435640379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:33:02.437628 containerd[1618]: time="2024-11-13T08:33:02.435670506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:33:02.437628 containerd[1618]: time="2024-11-13T08:33:02.435928485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:33:02.554778 kubelet[2839]: E1113 08:33:02.554269 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:02.559995 containerd[1618]: time="2024-11-13T08:33:02.559326455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-688kk,Uid:eb2b0e36-7176-4cc7-8c51-4111fcd158de,Namespace:kube-system,Attempt:0,}" Nov 13 08:33:02.578090 containerd[1618]: time="2024-11-13T08:33:02.577098806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mggnp,Uid:71319eb4-94ae-4ccb-80ef-fe01918a4a59,Namespace:kube-system,Attempt:0,} returns sandbox id \"de5244659fe1d41315050547b50317704bcceb5b1e25beab0840ac8d206c20cd\"" Nov 13 08:33:02.587225 kubelet[2839]: E1113 08:33:02.587185 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:02.602213 containerd[1618]: time="2024-11-13T08:33:02.601935052Z" level=info msg="CreateContainer within sandbox \"de5244659fe1d41315050547b50317704bcceb5b1e25beab0840ac8d206c20cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 13 08:33:02.617295 containerd[1618]: time="2024-11-13T08:33:02.617202894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rflwn,Uid:a50798f5-9e4a-48fb-994c-1d7c4cb873d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\"" Nov 13 08:33:02.619485 kubelet[2839]: E1113 08:33:02.619351 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:02.641086 containerd[1618]: 
time="2024-11-13T08:33:02.640687521Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 13 08:33:02.703193 containerd[1618]: time="2024-11-13T08:33:02.703113864Z" level=info msg="CreateContainer within sandbox \"de5244659fe1d41315050547b50317704bcceb5b1e25beab0840ac8d206c20cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"314708b4f83168a5d4768bee497bd8186f104af2492d9456e0ab71766ad3acbb\"" Nov 13 08:33:02.705699 containerd[1618]: time="2024-11-13T08:33:02.705549719Z" level=info msg="StartContainer for \"314708b4f83168a5d4768bee497bd8186f104af2492d9456e0ab71766ad3acbb\"" Nov 13 08:33:02.767234 containerd[1618]: time="2024-11-13T08:33:02.759156684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:33:02.767234 containerd[1618]: time="2024-11-13T08:33:02.759262717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:33:02.767234 containerd[1618]: time="2024-11-13T08:33:02.759288910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:33:02.769913 containerd[1618]: time="2024-11-13T08:33:02.769751974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:33:02.877330 containerd[1618]: time="2024-11-13T08:33:02.873854519Z" level=info msg="StartContainer for \"314708b4f83168a5d4768bee497bd8186f104af2492d9456e0ab71766ad3acbb\" returns successfully" Nov 13 08:33:02.896071 containerd[1618]: time="2024-11-13T08:33:02.895824289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-688kk,Uid:eb2b0e36-7176-4cc7-8c51-4111fcd158de,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\"" Nov 13 08:33:02.902132 kubelet[2839]: E1113 08:33:02.902087 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:03.761505 kubelet[2839]: E1113 08:33:03.760818 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:04.768459 kubelet[2839]: E1113 08:33:04.768382 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:09.966419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243820817.mount: Deactivated successfully. 
Nov 13 08:33:13.048066 containerd[1618]: time="2024-11-13T08:33:13.047989192Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:33:13.049613 containerd[1618]: time="2024-11-13T08:33:13.049567052Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735363" Nov 13 08:33:13.052131 containerd[1618]: time="2024-11-13T08:33:13.050999599Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:33:13.052866 containerd[1618]: time="2024-11-13T08:33:13.052813172Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.412035546s" Nov 13 08:33:13.052930 containerd[1618]: time="2024-11-13T08:33:13.052880531Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 13 08:33:13.056783 containerd[1618]: time="2024-11-13T08:33:13.056737600Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 13 08:33:13.060910 containerd[1618]: time="2024-11-13T08:33:13.060764053Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 13 08:33:13.128662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989085384.mount: Deactivated successfully. Nov 13 08:33:13.136261 containerd[1618]: time="2024-11-13T08:33:13.136054040Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\"" Nov 13 08:33:13.137268 containerd[1618]: time="2024-11-13T08:33:13.137075699Z" level=info msg="StartContainer for \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\"" Nov 13 08:33:13.289313 containerd[1618]: time="2024-11-13T08:33:13.289239404Z" level=info msg="StartContainer for \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\" returns successfully" Nov 13 08:33:13.574929 containerd[1618]: time="2024-11-13T08:33:13.562309908Z" level=info msg="shim disconnected" id=ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601 namespace=k8s.io Nov 13 08:33:13.575384 containerd[1618]: time="2024-11-13T08:33:13.574943507Z" level=warning msg="cleaning up after shim disconnected" id=ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601 namespace=k8s.io Nov 13 08:33:13.575384 containerd[1618]: time="2024-11-13T08:33:13.574985024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:33:13.828861 kubelet[2839]: E1113 08:33:13.827004 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:33:13.832765 containerd[1618]: time="2024-11-13T08:33:13.832379822Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 13 
08:33:13.858072 containerd[1618]: time="2024-11-13T08:33:13.857876753Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\"" Nov 13 08:33:13.861645 containerd[1618]: time="2024-11-13T08:33:13.861504724Z" level=info msg="StartContainer for \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\"" Nov 13 08:33:13.866067 kubelet[2839]: I1113 08:33:13.865710 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mggnp" podStartSLOduration=12.865653673 podStartE2EDuration="12.865653673s" podCreationTimestamp="2024-11-13 08:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:33:03.80720725 +0000 UTC m=+15.342759322" watchObservedRunningTime="2024-11-13 08:33:13.865653673 +0000 UTC m=+25.401205745" Nov 13 08:33:13.967219 containerd[1618]: time="2024-11-13T08:33:13.966035801Z" level=info msg="StartContainer for \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\" returns successfully" Nov 13 08:33:13.989517 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 08:33:13.989977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 08:33:13.990134 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 13 08:33:14.001612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 13 08:33:14.048076 containerd[1618]: time="2024-11-13T08:33:14.047099919Z" level=info msg="shim disconnected" id=e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02 namespace=k8s.io
Nov 13 08:33:14.048076 containerd[1618]: time="2024-11-13T08:33:14.047836507Z" level=warning msg="cleaning up after shim disconnected" id=e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02 namespace=k8s.io
Nov 13 08:33:14.048076 containerd[1618]: time="2024-11-13T08:33:14.047860041Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:33:14.055879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:33:14.125159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601-rootfs.mount: Deactivated successfully.
Nov 13 08:33:14.834122 kubelet[2839]: E1113 08:33:14.833165 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:14.853606 containerd[1618]: time="2024-11-13T08:33:14.853331250Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 13 08:33:14.929082 containerd[1618]: time="2024-11-13T08:33:14.929008209Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\""
Nov 13 08:33:14.930591 containerd[1618]: time="2024-11-13T08:33:14.930538772Z" level=info msg="StartContainer for \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\""
Nov 13 08:33:15.037804 containerd[1618]: time="2024-11-13T08:33:15.037657241Z" level=info msg="StartContainer for \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\" returns successfully"
Nov 13 08:33:15.076571 containerd[1618]: time="2024-11-13T08:33:15.076494714Z" level=info msg="shim disconnected" id=ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb namespace=k8s.io
Nov 13 08:33:15.076571 containerd[1618]: time="2024-11-13T08:33:15.076559658Z" level=warning msg="cleaning up after shim disconnected" id=ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb namespace=k8s.io
Nov 13 08:33:15.076571 containerd[1618]: time="2024-11-13T08:33:15.076569202Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:33:15.097068 containerd[1618]: time="2024-11-13T08:33:15.096835891Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:33:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 13 08:33:15.124712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb-rootfs.mount: Deactivated successfully.
Nov 13 08:33:15.598382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307460474.mount: Deactivated successfully.
Nov 13 08:33:15.843687 kubelet[2839]: E1113 08:33:15.841464 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:15.860172 containerd[1618]: time="2024-11-13T08:33:15.856430175Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 13 08:33:15.912752 containerd[1618]: time="2024-11-13T08:33:15.912427244Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\""
Nov 13 08:33:15.917192 containerd[1618]: time="2024-11-13T08:33:15.916901206Z" level=info msg="StartContainer for \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\""
Nov 13 08:33:16.066555 containerd[1618]: time="2024-11-13T08:33:16.065441009Z" level=info msg="StartContainer for \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\" returns successfully"
Nov 13 08:33:16.163561 containerd[1618]: time="2024-11-13T08:33:16.163114992Z" level=info msg="shim disconnected" id=8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526 namespace=k8s.io
Nov 13 08:33:16.163561 containerd[1618]: time="2024-11-13T08:33:16.163206457Z" level=warning msg="cleaning up after shim disconnected" id=8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526 namespace=k8s.io
Nov 13 08:33:16.163561 containerd[1618]: time="2024-11-13T08:33:16.163220380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:33:16.194169 containerd[1618]: time="2024-11-13T08:33:16.193633637Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:33:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 13 08:33:16.557780 containerd[1618]: time="2024-11-13T08:33:16.556508231Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:33:16.559796 containerd[1618]: time="2024-11-13T08:33:16.559700364Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193"
Nov 13 08:33:16.562058 containerd[1618]: time="2024-11-13T08:33:16.561013153Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 13 08:33:16.564469 containerd[1618]: time="2024-11-13T08:33:16.564397784Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.507336203s"
Nov 13 08:33:16.564736 containerd[1618]: time="2024-11-13T08:33:16.564709038Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 13 08:33:16.570117 containerd[1618]: time="2024-11-13T08:33:16.569916654Z" level=info msg="CreateContainer within sandbox \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 13 08:33:16.591902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544207020.mount: Deactivated successfully.
Nov 13 08:33:16.596346 containerd[1618]: time="2024-11-13T08:33:16.596277379Z" level=info msg="CreateContainer within sandbox \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\""
Nov 13 08:33:16.598318 containerd[1618]: time="2024-11-13T08:33:16.598258519Z" level=info msg="StartContainer for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\""
Nov 13 08:33:16.683551 containerd[1618]: time="2024-11-13T08:33:16.683484522Z" level=info msg="StartContainer for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" returns successfully"
Nov 13 08:33:16.851639 kubelet[2839]: E1113 08:33:16.851414 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:16.864640 kubelet[2839]: E1113 08:33:16.864563 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:16.874791 containerd[1618]: time="2024-11-13T08:33:16.874451588Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 13 08:33:16.947515 containerd[1618]: time="2024-11-13T08:33:16.947404867Z" level=info msg="CreateContainer within sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\""
Nov 13 08:33:16.948748 containerd[1618]: time="2024-11-13T08:33:16.948443218Z" level=info msg="StartContainer for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\""
Nov 13 08:33:16.975592 kubelet[2839]: I1113 08:33:16.974555 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-688kk" podStartSLOduration=1.313564744 podStartE2EDuration="14.974505331s" podCreationTimestamp="2024-11-13 08:33:02 +0000 UTC" firstStartedPulling="2024-11-13 08:33:02.904210195 +0000 UTC m=+14.439762244" lastFinishedPulling="2024-11-13 08:33:16.565150783 +0000 UTC m=+28.100702831" observedRunningTime="2024-11-13 08:33:16.881768216 +0000 UTC m=+28.417320286" watchObservedRunningTime="2024-11-13 08:33:16.974505331 +0000 UTC m=+28.510057403"
Nov 13 08:33:17.128744 systemd[1]: run-containerd-runc-k8s.io-ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269-runc.LUiB3s.mount: Deactivated successfully.
Nov 13 08:33:17.150015 containerd[1618]: time="2024-11-13T08:33:17.149122378Z" level=info msg="StartContainer for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" returns successfully"
Nov 13 08:33:17.655084 kubelet[2839]: I1113 08:33:17.652636 2839 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 13 08:33:17.794881 kubelet[2839]: I1113 08:33:17.794121 2839 topology_manager.go:215] "Topology Admit Handler" podUID="56737dc8-3b8c-4a8d-b157-6b37b369c2b3" podNamespace="kube-system" podName="coredns-76f75df574-wp5nq"
Nov 13 08:33:17.814000 kubelet[2839]: I1113 08:33:17.807760 2839 topology_manager.go:215] "Topology Admit Handler" podUID="34a9a34a-cded-4279-b18e-0a8812beb122" podNamespace="kube-system" podName="coredns-76f75df574-ktxpp"
Nov 13 08:33:17.820918 kubelet[2839]: I1113 08:33:17.820863 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56737dc8-3b8c-4a8d-b157-6b37b369c2b3-config-volume\") pod \"coredns-76f75df574-wp5nq\" (UID: \"56737dc8-3b8c-4a8d-b157-6b37b369c2b3\") " pod="kube-system/coredns-76f75df574-wp5nq"
Nov 13 08:33:17.820918 kubelet[2839]: I1113 08:33:17.820919 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9wz\" (UniqueName: \"kubernetes.io/projected/56737dc8-3b8c-4a8d-b157-6b37b369c2b3-kube-api-access-9r9wz\") pod \"coredns-76f75df574-wp5nq\" (UID: \"56737dc8-3b8c-4a8d-b157-6b37b369c2b3\") " pod="kube-system/coredns-76f75df574-wp5nq"
Nov 13 08:33:17.824989 kubelet[2839]: I1113 08:33:17.822652 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nxz\" (UniqueName: \"kubernetes.io/projected/34a9a34a-cded-4279-b18e-0a8812beb122-kube-api-access-75nxz\") pod \"coredns-76f75df574-ktxpp\" (UID: \"34a9a34a-cded-4279-b18e-0a8812beb122\") " pod="kube-system/coredns-76f75df574-ktxpp"
Nov 13 08:33:17.824989 kubelet[2839]: I1113 08:33:17.824194 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a9a34a-cded-4279-b18e-0a8812beb122-config-volume\") pod \"coredns-76f75df574-ktxpp\" (UID: \"34a9a34a-cded-4279-b18e-0a8812beb122\") " pod="kube-system/coredns-76f75df574-ktxpp"
Nov 13 08:33:17.886053 kubelet[2839]: E1113 08:33:17.883066 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:17.889005 kubelet[2839]: E1113 08:33:17.883357 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:18.100062 kubelet[2839]: I1113 08:33:18.096083 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rflwn" podStartSLOduration=6.66356096 podStartE2EDuration="17.096035961s" podCreationTimestamp="2024-11-13 08:33:01 +0000 UTC" firstStartedPulling="2024-11-13 08:33:02.621552413 +0000 UTC m=+14.157104459" lastFinishedPulling="2024-11-13 08:33:13.054027397 +0000 UTC m=+24.589579460" observedRunningTime="2024-11-13 08:33:18.095736944 +0000 UTC m=+29.631289099" watchObservedRunningTime="2024-11-13 08:33:18.096035961 +0000 UTC m=+29.631588146"
Nov 13 08:33:18.124878 kubelet[2839]: E1113 08:33:18.124577 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:18.128029 containerd[1618]: time="2024-11-13T08:33:18.127225095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wp5nq,Uid:56737dc8-3b8c-4a8d-b157-6b37b369c2b3,Namespace:kube-system,Attempt:0,}"
Nov 13 08:33:18.142854 kubelet[2839]: E1113 08:33:18.142333 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:18.146608 containerd[1618]: time="2024-11-13T08:33:18.146549978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ktxpp,Uid:34a9a34a-cded-4279-b18e-0a8812beb122,Namespace:kube-system,Attempt:0,}"
Nov 13 08:33:18.691337 systemd-journald[1143]: Under memory pressure, flushing caches.
Nov 13 08:33:18.689110 systemd-resolved[1477]: Under memory pressure, flushing caches.
Nov 13 08:33:18.689213 systemd-resolved[1477]: Flushed all caches.
Nov 13 08:33:18.886935 kubelet[2839]: E1113 08:33:18.886846 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:19.888755 kubelet[2839]: E1113 08:33:19.888498 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:21.089242 systemd-networkd[1222]: cilium_host: Link UP
Nov 13 08:33:21.089395 systemd-networkd[1222]: cilium_net: Link UP
Nov 13 08:33:21.089400 systemd-networkd[1222]: cilium_net: Gained carrier
Nov 13 08:33:21.089668 systemd-networkd[1222]: cilium_host: Gained carrier
Nov 13 08:33:21.303450 systemd-networkd[1222]: cilium_vxlan: Link UP
Nov 13 08:33:21.304902 systemd-networkd[1222]: cilium_vxlan: Gained carrier
Nov 13 08:33:21.309403 systemd-networkd[1222]: cilium_host: Gained IPv6LL
Nov 13 08:33:21.913035 kernel: NET: Registered PF_ALG protocol family
Nov 13 08:33:22.081180 systemd-networkd[1222]: cilium_net: Gained IPv6LL
Nov 13 08:33:22.785944 systemd-networkd[1222]: cilium_vxlan: Gained IPv6LL
Nov 13 08:33:23.293510 systemd-networkd[1222]: lxc_health: Link UP
Nov 13 08:33:23.300606 systemd-networkd[1222]: lxc_health: Gained carrier
Nov 13 08:33:23.959408 systemd-networkd[1222]: lxc5d7ed7b75b27: Link UP
Nov 13 08:33:23.970109 kernel: eth0: renamed from tmp0eef3
Nov 13 08:33:23.980092 systemd-networkd[1222]: lxc5d7ed7b75b27: Gained carrier
Nov 13 08:33:24.000755 systemd-networkd[1222]: lxc93917e43dd41: Link UP
Nov 13 08:33:24.016005 kernel: eth0: renamed from tmp1c3f1
Nov 13 08:33:24.024688 systemd-networkd[1222]: lxc93917e43dd41: Gained carrier
Nov 13 08:33:24.311096 kubelet[2839]: E1113 08:33:24.310390 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:24.705261 systemd-networkd[1222]: lxc_health: Gained IPv6LL
Nov 13 08:33:24.912839 kubelet[2839]: E1113 08:33:24.912556 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:25.153250 systemd-networkd[1222]: lxc5d7ed7b75b27: Gained IPv6LL
Nov 13 08:33:25.281902 systemd-networkd[1222]: lxc93917e43dd41: Gained IPv6LL
Nov 13 08:33:25.912863 kubelet[2839]: E1113 08:33:25.912632 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:30.609182 containerd[1618]: time="2024-11-13T08:33:30.608930635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:33:30.611975 containerd[1618]: time="2024-11-13T08:33:30.611712989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:33:30.612342 containerd[1618]: time="2024-11-13T08:33:30.612288173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:33:30.612867 containerd[1618]: time="2024-11-13T08:33:30.612787451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:33:30.650723 systemd[1]: run-containerd-runc-k8s.io-0eef3d66482937f86fda903e649868fca6b2e3250e189c4ed339bdd6ef38faae-runc.f1nnPs.mount: Deactivated successfully.
Nov 13 08:33:30.723841 containerd[1618]: time="2024-11-13T08:33:30.723378053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:33:30.723841 containerd[1618]: time="2024-11-13T08:33:30.723470949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:33:30.723841 containerd[1618]: time="2024-11-13T08:33:30.723497243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:33:30.723841 containerd[1618]: time="2024-11-13T08:33:30.723630915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:33:30.756109 containerd[1618]: time="2024-11-13T08:33:30.753653396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wp5nq,Uid:56737dc8-3b8c-4a8d-b157-6b37b369c2b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eef3d66482937f86fda903e649868fca6b2e3250e189c4ed339bdd6ef38faae\""
Nov 13 08:33:30.759499 kubelet[2839]: E1113 08:33:30.758574 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:30.766761 containerd[1618]: time="2024-11-13T08:33:30.766472747Z" level=info msg="CreateContainer within sandbox \"0eef3d66482937f86fda903e649868fca6b2e3250e189c4ed339bdd6ef38faae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 13 08:33:30.818582 containerd[1618]: time="2024-11-13T08:33:30.817891884Z" level=info msg="CreateContainer within sandbox \"0eef3d66482937f86fda903e649868fca6b2e3250e189c4ed339bdd6ef38faae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf456b383f2acbb0ce069def28895fc471be778834b77ac32e38f20b7a1028be\""
Nov 13 08:33:30.822580 containerd[1618]: time="2024-11-13T08:33:30.821236027Z" level=info msg="StartContainer for \"bf456b383f2acbb0ce069def28895fc471be778834b77ac32e38f20b7a1028be\""
Nov 13 08:33:31.007333 containerd[1618]: time="2024-11-13T08:33:31.006323626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ktxpp,Uid:34a9a34a-cded-4279-b18e-0a8812beb122,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c3f17402e15ad3191036b54cea039f126e858a36b1633931a7020c9899a1536\""
Nov 13 08:33:31.009733 kubelet[2839]: E1113 08:33:31.009631 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:31.040783 containerd[1618]: time="2024-11-13T08:33:31.040632510Z" level=info msg="StartContainer for \"bf456b383f2acbb0ce069def28895fc471be778834b77ac32e38f20b7a1028be\" returns successfully"
Nov 13 08:33:31.045309 containerd[1618]: time="2024-11-13T08:33:31.044281236Z" level=info msg="CreateContainer within sandbox \"1c3f17402e15ad3191036b54cea039f126e858a36b1633931a7020c9899a1536\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 13 08:33:31.073098 containerd[1618]: time="2024-11-13T08:33:31.073038821Z" level=info msg="CreateContainer within sandbox \"1c3f17402e15ad3191036b54cea039f126e858a36b1633931a7020c9899a1536\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc3d009a891384d00cd4733036e3f638273c00c67557268c59c4e260b2c2a23b\""
Nov 13 08:33:31.074427 containerd[1618]: time="2024-11-13T08:33:31.074016048Z" level=info msg="StartContainer for \"dc3d009a891384d00cd4733036e3f638273c00c67557268c59c4e260b2c2a23b\""
Nov 13 08:33:31.228821 containerd[1618]: time="2024-11-13T08:33:31.228548385Z" level=info msg="StartContainer for \"dc3d009a891384d00cd4733036e3f638273c00c67557268c59c4e260b2c2a23b\" returns successfully"
Nov 13 08:33:31.622124 systemd[1]: run-containerd-runc-k8s.io-1c3f17402e15ad3191036b54cea039f126e858a36b1633931a7020c9899a1536-runc.7AsQHR.mount: Deactivated successfully.
Nov 13 08:33:32.033233 kubelet[2839]: E1113 08:33:32.033180 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:32.039432 kubelet[2839]: E1113 08:33:32.039225 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:32.106578 kubelet[2839]: I1113 08:33:32.105796 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wp5nq" podStartSLOduration=30.105717673 podStartE2EDuration="30.105717673s" podCreationTimestamp="2024-11-13 08:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:33:32.070788754 +0000 UTC m=+43.606340840" watchObservedRunningTime="2024-11-13 08:33:32.105717673 +0000 UTC m=+43.641269752"
Nov 13 08:33:32.155975 kubelet[2839]: I1113 08:33:32.154111 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ktxpp" podStartSLOduration=30.154047328 podStartE2EDuration="30.154047328s" podCreationTimestamp="2024-11-13 08:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:33:32.153301876 +0000 UTC m=+43.688853950" watchObservedRunningTime="2024-11-13 08:33:32.154047328 +0000 UTC m=+43.689599405"
Nov 13 08:33:33.040431 kubelet[2839]: E1113 08:33:33.040376 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:33.040982 kubelet[2839]: E1113 08:33:33.040939 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:34.043826 kubelet[2839]: E1113 08:33:34.043599 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:34.043826 kubelet[2839]: E1113 08:33:34.043714 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:37.074282 systemd[1]: Started sshd@7-159.223.193.8:22-139.178.89.65:47762.service - OpenSSH per-connection server daemon (139.178.89.65:47762).
Nov 13 08:33:37.163065 sshd[4220]: Accepted publickey for core from 139.178.89.65 port 47762 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:37.166520 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:37.174004 systemd-logind[1597]: New session 8 of user core.
Nov 13 08:33:37.183622 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 13 08:33:37.835690 sshd[4223]: Connection closed by 139.178.89.65 port 47762
Nov 13 08:33:37.836554 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:37.841227 systemd[1]: sshd@7-159.223.193.8:22-139.178.89.65:47762.service: Deactivated successfully.
Nov 13 08:33:37.847500 systemd[1]: session-8.scope: Deactivated successfully.
Nov 13 08:33:37.849354 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit.
Nov 13 08:33:37.850923 systemd-logind[1597]: Removed session 8.
Nov 13 08:33:42.848470 systemd[1]: Started sshd@8-159.223.193.8:22-139.178.89.65:47778.service - OpenSSH per-connection server daemon (139.178.89.65:47778).
Nov 13 08:33:42.916434 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 47778 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:42.918837 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:42.927457 systemd-logind[1597]: New session 9 of user core.
Nov 13 08:33:42.933595 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 13 08:33:43.124245 sshd[4239]: Connection closed by 139.178.89.65 port 47778
Nov 13 08:33:43.125495 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:43.138215 systemd[1]: sshd@8-159.223.193.8:22-139.178.89.65:47778.service: Deactivated successfully.
Nov 13 08:33:43.144354 systemd[1]: session-9.scope: Deactivated successfully.
Nov 13 08:33:43.146248 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit.
Nov 13 08:33:43.148827 systemd-logind[1597]: Removed session 9.
Nov 13 08:33:48.139483 systemd[1]: Started sshd@9-159.223.193.8:22-139.178.89.65:56730.service - OpenSSH per-connection server daemon (139.178.89.65:56730).
Nov 13 08:33:48.216002 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 56730 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:48.217724 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:48.225162 systemd-logind[1597]: New session 10 of user core.
Nov 13 08:33:48.231457 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 13 08:33:48.412249 sshd[4253]: Connection closed by 139.178.89.65 port 56730
Nov 13 08:33:48.414282 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:48.421855 systemd[1]: sshd@9-159.223.193.8:22-139.178.89.65:56730.service: Deactivated successfully.
Nov 13 08:33:48.426671 systemd[1]: session-10.scope: Deactivated successfully.
Nov 13 08:33:48.427600 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit.
Nov 13 08:33:48.429495 systemd-logind[1597]: Removed session 10.
Nov 13 08:33:52.664575 kubelet[2839]: E1113 08:33:52.664028 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:33:53.430656 systemd[1]: Started sshd@10-159.223.193.8:22-139.178.89.65:56732.service - OpenSSH per-connection server daemon (139.178.89.65:56732).
Nov 13 08:33:53.502895 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 56732 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:53.505188 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:53.513911 systemd-logind[1597]: New session 11 of user core.
Nov 13 08:33:53.521585 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 13 08:33:53.697088 sshd[4270]: Connection closed by 139.178.89.65 port 56732
Nov 13 08:33:53.697259 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:53.709628 systemd[1]: Started sshd@11-159.223.193.8:22-139.178.89.65:56740.service - OpenSSH per-connection server daemon (139.178.89.65:56740).
Nov 13 08:33:53.710522 systemd[1]: sshd@10-159.223.193.8:22-139.178.89.65:56732.service: Deactivated successfully.
Nov 13 08:33:53.725181 systemd[1]: session-11.scope: Deactivated successfully.
Nov 13 08:33:53.729791 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit.
Nov 13 08:33:53.732551 systemd-logind[1597]: Removed session 11.
Nov 13 08:33:53.783991 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 56740 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:53.790062 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:53.802124 systemd-logind[1597]: New session 12 of user core.
Nov 13 08:33:53.813814 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 13 08:33:54.059171 sshd[4285]: Connection closed by 139.178.89.65 port 56740
Nov 13 08:33:54.065254 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:54.076811 systemd[1]: Started sshd@12-159.223.193.8:22-139.178.89.65:56752.service - OpenSSH per-connection server daemon (139.178.89.65:56752).
Nov 13 08:33:54.077500 systemd[1]: sshd@11-159.223.193.8:22-139.178.89.65:56740.service: Deactivated successfully.
Nov 13 08:33:54.090591 systemd[1]: session-12.scope: Deactivated successfully.
Nov 13 08:33:54.112816 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit.
Nov 13 08:33:54.122133 systemd-logind[1597]: Removed session 12.
Nov 13 08:33:54.189526 sshd[4290]: Accepted publickey for core from 139.178.89.65 port 56752 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:54.193114 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:54.211022 systemd-logind[1597]: New session 13 of user core.
Nov 13 08:33:54.217763 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 13 08:33:54.418473 sshd[4296]: Connection closed by 139.178.89.65 port 56752
Nov 13 08:33:54.419175 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:54.429456 systemd[1]: sshd@12-159.223.193.8:22-139.178.89.65:56752.service: Deactivated successfully.
Nov 13 08:33:54.430846 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit.
Nov 13 08:33:54.436660 systemd[1]: session-13.scope: Deactivated successfully.
Nov 13 08:33:54.440370 systemd-logind[1597]: Removed session 13.
Nov 13 08:33:59.433467 systemd[1]: Started sshd@13-159.223.193.8:22-139.178.89.65:52096.service - OpenSSH per-connection server daemon (139.178.89.65:52096).
Nov 13 08:33:59.512493 sshd[4308]: Accepted publickey for core from 139.178.89.65 port 52096 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:33:59.515635 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:33:59.523377 systemd-logind[1597]: New session 14 of user core.
Nov 13 08:33:59.533395 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 13 08:33:59.693310 sshd[4311]: Connection closed by 139.178.89.65 port 52096
Nov 13 08:33:59.695004 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Nov 13 08:33:59.699811 systemd[1]: sshd@13-159.223.193.8:22-139.178.89.65:52096.service: Deactivated successfully.
Nov 13 08:33:59.707766 systemd[1]: session-14.scope: Deactivated successfully.
Nov 13 08:33:59.710088 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit.
Nov 13 08:33:59.711650 systemd-logind[1597]: Removed session 14.
Nov 13 08:34:04.707515 systemd[1]: Started sshd@14-159.223.193.8:22-139.178.89.65:52112.service - OpenSSH per-connection server daemon (139.178.89.65:52112).
Nov 13 08:34:04.768300 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 52112 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:34:04.770984 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:34:04.777706 systemd-logind[1597]: New session 15 of user core.
Nov 13 08:34:04.785495 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 13 08:34:04.951641 sshd[4328]: Connection closed by 139.178.89.65 port 52112
Nov 13 08:34:04.952202 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Nov 13 08:34:04.957514 systemd[1]: sshd@14-159.223.193.8:22-139.178.89.65:52112.service: Deactivated successfully.
Nov 13 08:34:04.965182 systemd[1]: session-15.scope: Deactivated successfully.
Nov 13 08:34:04.965591 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Nov 13 08:34:04.968986 systemd-logind[1597]: Removed session 15. Nov 13 08:34:09.665044 kubelet[2839]: E1113 08:34:09.664240 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:34:09.665044 kubelet[2839]: E1113 08:34:09.664517 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:34:09.962801 systemd[1]: Started sshd@15-159.223.193.8:22-139.178.89.65:53554.service - OpenSSH per-connection server daemon (139.178.89.65:53554). Nov 13 08:34:10.036077 sshd[4339]: Accepted publickey for core from 139.178.89.65 port 53554 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:10.038932 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:10.046286 systemd-logind[1597]: New session 16 of user core. Nov 13 08:34:10.053508 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 13 08:34:10.246631 sshd[4342]: Connection closed by 139.178.89.65 port 53554 Nov 13 08:34:10.248267 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:10.260746 systemd[1]: Started sshd@16-159.223.193.8:22-139.178.89.65:53560.service - OpenSSH per-connection server daemon (139.178.89.65:53560). Nov 13 08:34:10.263711 systemd[1]: sshd@15-159.223.193.8:22-139.178.89.65:53554.service: Deactivated successfully. Nov 13 08:34:10.275845 systemd[1]: session-16.scope: Deactivated successfully. Nov 13 08:34:10.278369 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. Nov 13 08:34:10.284788 systemd-logind[1597]: Removed session 16. 
Nov 13 08:34:10.353981 sshd[4350]: Accepted publickey for core from 139.178.89.65 port 53560 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:10.356536 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:10.364301 systemd-logind[1597]: New session 17 of user core. Nov 13 08:34:10.371560 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 13 08:34:10.735355 sshd[4356]: Connection closed by 139.178.89.65 port 53560 Nov 13 08:34:10.736376 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:10.749796 systemd[1]: Started sshd@17-159.223.193.8:22-139.178.89.65:53564.service - OpenSSH per-connection server daemon (139.178.89.65:53564). Nov 13 08:34:10.750694 systemd[1]: sshd@16-159.223.193.8:22-139.178.89.65:53560.service: Deactivated successfully. Nov 13 08:34:10.761401 systemd[1]: session-17.scope: Deactivated successfully. Nov 13 08:34:10.764054 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. Nov 13 08:34:10.768541 systemd-logind[1597]: Removed session 17. Nov 13 08:34:10.826236 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 53564 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:10.828633 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:10.843729 systemd-logind[1597]: New session 18 of user core. Nov 13 08:34:10.852545 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 13 08:34:12.799694 sshd[4368]: Connection closed by 139.178.89.65 port 53564 Nov 13 08:34:12.801721 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:12.819407 systemd[1]: Started sshd@18-159.223.193.8:22-139.178.89.65:53570.service - OpenSSH per-connection server daemon (139.178.89.65:53570). 
Nov 13 08:34:12.819931 systemd[1]: sshd@17-159.223.193.8:22-139.178.89.65:53564.service: Deactivated successfully. Nov 13 08:34:12.830314 systemd[1]: session-18.scope: Deactivated successfully. Nov 13 08:34:12.839505 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Nov 13 08:34:12.851998 systemd-logind[1597]: Removed session 18. Nov 13 08:34:12.945217 sshd[4380]: Accepted publickey for core from 139.178.89.65 port 53570 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:12.947172 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:12.954987 systemd-logind[1597]: New session 19 of user core. Nov 13 08:34:12.963542 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 13 08:34:13.418019 sshd[4387]: Connection closed by 139.178.89.65 port 53570 Nov 13 08:34:13.419178 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:13.431531 systemd[1]: Started sshd@19-159.223.193.8:22-139.178.89.65:53572.service - OpenSSH per-connection server daemon (139.178.89.65:53572). Nov 13 08:34:13.432275 systemd[1]: sshd@18-159.223.193.8:22-139.178.89.65:53570.service: Deactivated successfully. Nov 13 08:34:13.444622 systemd[1]: session-19.scope: Deactivated successfully. Nov 13 08:34:13.449107 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. Nov 13 08:34:13.451896 systemd-logind[1597]: Removed session 19. Nov 13 08:34:13.512616 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 53572 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:13.515225 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:13.523119 systemd-logind[1597]: New session 20 of user core. Nov 13 08:34:13.528731 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 13 08:34:13.701540 sshd[4399]: Connection closed by 139.178.89.65 port 53572 Nov 13 08:34:13.703092 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:13.712971 systemd[1]: sshd@19-159.223.193.8:22-139.178.89.65:53572.service: Deactivated successfully. Nov 13 08:34:13.719782 systemd[1]: session-20.scope: Deactivated successfully. Nov 13 08:34:13.721863 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Nov 13 08:34:13.723654 systemd-logind[1597]: Removed session 20. Nov 13 08:34:18.716436 systemd[1]: Started sshd@20-159.223.193.8:22-139.178.89.65:40096.service - OpenSSH per-connection server daemon (139.178.89.65:40096). Nov 13 08:34:18.768751 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 40096 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:18.770800 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:18.776625 systemd-logind[1597]: New session 21 of user core. Nov 13 08:34:18.782449 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 13 08:34:18.939148 sshd[4413]: Connection closed by 139.178.89.65 port 40096 Nov 13 08:34:18.939856 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:18.945251 systemd[1]: sshd@20-159.223.193.8:22-139.178.89.65:40096.service: Deactivated successfully. Nov 13 08:34:18.951099 systemd[1]: session-21.scope: Deactivated successfully. Nov 13 08:34:18.952917 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. Nov 13 08:34:18.954421 systemd-logind[1597]: Removed session 21. 
Nov 13 08:34:20.664689 kubelet[2839]: E1113 08:34:20.664141 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:34:21.663986 kubelet[2839]: E1113 08:34:21.663910 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 13 08:34:23.952349 systemd[1]: Started sshd@21-159.223.193.8:22-139.178.89.65:40106.service - OpenSSH per-connection server daemon (139.178.89.65:40106). Nov 13 08:34:24.010157 sshd[4427]: Accepted publickey for core from 139.178.89.65 port 40106 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:24.012451 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:24.021650 systemd-logind[1597]: New session 22 of user core. Nov 13 08:34:24.030122 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 13 08:34:24.205102 sshd[4430]: Connection closed by 139.178.89.65 port 40106 Nov 13 08:34:24.206602 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:24.213680 systemd[1]: sshd@21-159.223.193.8:22-139.178.89.65:40106.service: Deactivated successfully. Nov 13 08:34:24.213742 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. Nov 13 08:34:24.218613 systemd[1]: session-22.scope: Deactivated successfully. Nov 13 08:34:24.220915 systemd-logind[1597]: Removed session 22. Nov 13 08:34:29.221517 systemd[1]: Started sshd@22-159.223.193.8:22-139.178.89.65:39096.service - OpenSSH per-connection server daemon (139.178.89.65:39096). 
Nov 13 08:34:29.320997 sshd[4441]: Accepted publickey for core from 139.178.89.65 port 39096 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:29.323634 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:29.332179 systemd-logind[1597]: New session 23 of user core. Nov 13 08:34:29.335393 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 13 08:34:29.502346 sshd[4444]: Connection closed by 139.178.89.65 port 39096 Nov 13 08:34:29.503528 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:29.509153 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. Nov 13 08:34:29.509490 systemd[1]: sshd@22-159.223.193.8:22-139.178.89.65:39096.service: Deactivated successfully. Nov 13 08:34:29.516339 systemd[1]: session-23.scope: Deactivated successfully. Nov 13 08:34:29.517479 systemd-logind[1597]: Removed session 23. Nov 13 08:34:34.515505 systemd[1]: Started sshd@23-159.223.193.8:22-139.178.89.65:39102.service - OpenSSH per-connection server daemon (139.178.89.65:39102). Nov 13 08:34:34.573540 sshd[4457]: Accepted publickey for core from 139.178.89.65 port 39102 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:34.576460 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:34.584233 systemd-logind[1597]: New session 24 of user core. Nov 13 08:34:34.589504 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 13 08:34:34.747281 sshd[4460]: Connection closed by 139.178.89.65 port 39102 Nov 13 08:34:34.749258 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:34.757408 systemd[1]: Started sshd@24-159.223.193.8:22-139.178.89.65:39112.service - OpenSSH per-connection server daemon (139.178.89.65:39112). 
Nov 13 08:34:34.758657 systemd[1]: sshd@23-159.223.193.8:22-139.178.89.65:39102.service: Deactivated successfully. Nov 13 08:34:34.769408 systemd[1]: session-24.scope: Deactivated successfully. Nov 13 08:34:34.772421 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. Nov 13 08:34:34.776769 systemd-logind[1597]: Removed session 24. Nov 13 08:34:34.826198 sshd[4468]: Accepted publickey for core from 139.178.89.65 port 39112 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:34.828561 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:34.835251 systemd-logind[1597]: New session 25 of user core. Nov 13 08:34:34.845501 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 13 08:34:36.521115 containerd[1618]: time="2024-11-13T08:34:36.520321638Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 08:34:36.623534 containerd[1618]: time="2024-11-13T08:34:36.623473419Z" level=info msg="StopContainer for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" with timeout 2 (s)" Nov 13 08:34:36.624339 containerd[1618]: time="2024-11-13T08:34:36.623483381Z" level=info msg="StopContainer for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" with timeout 30 (s)" Nov 13 08:34:36.624339 containerd[1618]: time="2024-11-13T08:34:36.624229585Z" level=info msg="Stop container \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" with signal terminated" Nov 13 08:34:36.625631 containerd[1618]: time="2024-11-13T08:34:36.625443824Z" level=info msg="Stop container \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" with signal terminated" Nov 13 08:34:36.636708 systemd-networkd[1222]: lxc_health: 
Link DOWN Nov 13 08:34:36.636725 systemd-networkd[1222]: lxc_health: Lost carrier Nov 13 08:34:36.717800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a-rootfs.mount: Deactivated successfully. Nov 13 08:34:36.732813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269-rootfs.mount: Deactivated successfully. Nov 13 08:34:36.740574 containerd[1618]: time="2024-11-13T08:34:36.740247273Z" level=info msg="shim disconnected" id=89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a namespace=k8s.io Nov 13 08:34:36.740574 containerd[1618]: time="2024-11-13T08:34:36.740340867Z" level=warning msg="cleaning up after shim disconnected" id=89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a namespace=k8s.io Nov 13 08:34:36.740574 containerd[1618]: time="2024-11-13T08:34:36.740354708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:34:36.749418 containerd[1618]: time="2024-11-13T08:34:36.749344158Z" level=info msg="shim disconnected" id=ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269 namespace=k8s.io Nov 13 08:34:36.749900 containerd[1618]: time="2024-11-13T08:34:36.749657443Z" level=warning msg="cleaning up after shim disconnected" id=ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269 namespace=k8s.io Nov 13 08:34:36.749900 containerd[1618]: time="2024-11-13T08:34:36.749677488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:34:36.781476 containerd[1618]: time="2024-11-13T08:34:36.781323911Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:34:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 13 08:34:36.783096 containerd[1618]: time="2024-11-13T08:34:36.782901900Z" level=info msg="StopContainer 
for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" returns successfully" Nov 13 08:34:36.786478 containerd[1618]: time="2024-11-13T08:34:36.786420113Z" level=info msg="StopContainer for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" returns successfully" Nov 13 08:34:36.789142 containerd[1618]: time="2024-11-13T08:34:36.788830576Z" level=info msg="StopPodSandbox for \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\"" Nov 13 08:34:36.789142 containerd[1618]: time="2024-11-13T08:34:36.789095241Z" level=info msg="StopPodSandbox for \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\"" Nov 13 08:34:36.793230 containerd[1618]: time="2024-11-13T08:34:36.790573106Z" level=info msg="Container to stop \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:34:36.798212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9-shm.mount: Deactivated successfully. 
Nov 13 08:34:36.799857 containerd[1618]: time="2024-11-13T08:34:36.790532901Z" level=info msg="Container to stop \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:34:36.803267 containerd[1618]: time="2024-11-13T08:34:36.799938534Z" level=info msg="Container to stop \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:34:36.803267 containerd[1618]: time="2024-11-13T08:34:36.802474393Z" level=info msg="Container to stop \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:34:36.803267 containerd[1618]: time="2024-11-13T08:34:36.802504611Z" level=info msg="Container to stop \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:34:36.803267 containerd[1618]: time="2024-11-13T08:34:36.802518121Z" level=info msg="Container to stop \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:34:36.807063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113-shm.mount: Deactivated successfully. 
Nov 13 08:34:36.869132 containerd[1618]: time="2024-11-13T08:34:36.865705246Z" level=info msg="shim disconnected" id=3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113 namespace=k8s.io Nov 13 08:34:36.870719 containerd[1618]: time="2024-11-13T08:34:36.869673387Z" level=warning msg="cleaning up after shim disconnected" id=3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113 namespace=k8s.io Nov 13 08:34:36.870719 containerd[1618]: time="2024-11-13T08:34:36.870444482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:34:36.879017 containerd[1618]: time="2024-11-13T08:34:36.878630069Z" level=info msg="shim disconnected" id=b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9 namespace=k8s.io Nov 13 08:34:36.879443 containerd[1618]: time="2024-11-13T08:34:36.879165817Z" level=warning msg="cleaning up after shim disconnected" id=b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9 namespace=k8s.io Nov 13 08:34:36.879443 containerd[1618]: time="2024-11-13T08:34:36.879183944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:34:36.901372 containerd[1618]: time="2024-11-13T08:34:36.901043408Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:34:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 13 08:34:36.902970 containerd[1618]: time="2024-11-13T08:34:36.902666582Z" level=info msg="TearDown network for sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" successfully" Nov 13 08:34:36.902970 containerd[1618]: time="2024-11-13T08:34:36.902832984Z" level=info msg="StopPodSandbox for \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" returns successfully" Nov 13 08:34:36.907068 containerd[1618]: time="2024-11-13T08:34:36.906822615Z" level=info msg="TearDown network for sandbox 
\"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" successfully" Nov 13 08:34:36.907068 containerd[1618]: time="2024-11-13T08:34:36.906878229Z" level=info msg="StopPodSandbox for \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" returns successfully" Nov 13 08:34:36.970943 kubelet[2839]: I1113 08:34:36.969181 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-kernel\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.970943 kubelet[2839]: I1113 08:34:36.969246 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-run\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.970943 kubelet[2839]: I1113 08:34:36.969288 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvprc\" (UniqueName: \"kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-kube-api-access-vvprc\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.970943 kubelet[2839]: I1113 08:34:36.969308 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-bpf-maps\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.970943 kubelet[2839]: I1113 08:34:36.969326 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-xtables-lock\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" 
(UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.970943 kubelet[2839]: I1113 08:34:36.969342 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-etc-cni-netd\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.971857 kubelet[2839]: I1113 08:34:36.969364 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hubble-tls\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.971857 kubelet[2839]: I1113 08:34:36.969384 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-lib-modules\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.971857 kubelet[2839]: I1113 08:34:36.969434 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hostproc\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.971857 kubelet[2839]: I1113 08:34:36.969474 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-net\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.971857 kubelet[2839]: I1113 08:34:36.970968 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/eb2b0e36-7176-4cc7-8c51-4111fcd158de-cilium-config-path\") pod \"eb2b0e36-7176-4cc7-8c51-4111fcd158de\" (UID: \"eb2b0e36-7176-4cc7-8c51-4111fcd158de\") " Nov 13 08:34:36.971857 kubelet[2839]: I1113 08:34:36.971043 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cni-path\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.974179 kubelet[2839]: I1113 08:34:36.971087 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-config-path\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.974179 kubelet[2839]: I1113 08:34:36.971116 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-clustermesh-secrets\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.974179 kubelet[2839]: I1113 08:34:36.971138 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpdt5\" (UniqueName: \"kubernetes.io/projected/eb2b0e36-7176-4cc7-8c51-4111fcd158de-kube-api-access-dpdt5\") pod \"eb2b0e36-7176-4cc7-8c51-4111fcd158de\" (UID: \"eb2b0e36-7176-4cc7-8c51-4111fcd158de\") " Nov 13 08:34:36.974179 kubelet[2839]: I1113 08:34:36.971157 2839 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-cgroup\") pod \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\" (UID: \"a50798f5-9e4a-48fb-994c-1d7c4cb873d2\") " Nov 13 08:34:36.974179 kubelet[2839]: I1113 
08:34:36.970101 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.974458 kubelet[2839]: I1113 08:34:36.971868 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.974458 kubelet[2839]: I1113 08:34:36.973242 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.974458 kubelet[2839]: I1113 08:34:36.973309 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.974458 kubelet[2839]: I1113 08:34:36.973356 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.986033 kubelet[2839]: I1113 08:34:36.985639 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:34:36.986033 kubelet[2839]: I1113 08:34:36.985725 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.986033 kubelet[2839]: I1113 08:34:36.985750 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.986033 kubelet[2839]: I1113 08:34:36.985766 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.986706 kubelet[2839]: I1113 08:34:36.986565 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-kube-api-access-vvprc" (OuterVolumeSpecName: "kube-api-access-vvprc") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "kube-api-access-vvprc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:34:36.987266 kubelet[2839]: I1113 08:34:36.986901 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.988994 kubelet[2839]: I1113 08:34:36.988898 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2b0e36-7176-4cc7-8c51-4111fcd158de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb2b0e36-7176-4cc7-8c51-4111fcd158de" (UID: "eb2b0e36-7176-4cc7-8c51-4111fcd158de"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 08:34:36.989382 kubelet[2839]: I1113 08:34:36.989101 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:34:36.992732 kubelet[2839]: I1113 08:34:36.992489 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 08:34:36.996358 kubelet[2839]: I1113 08:34:36.996286 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a50798f5-9e4a-48fb-994c-1d7c4cb873d2" (UID: "a50798f5-9e4a-48fb-994c-1d7c4cb873d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 13 08:34:36.996717 kubelet[2839]: I1113 08:34:36.996653 2839 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2b0e36-7176-4cc7-8c51-4111fcd158de-kube-api-access-dpdt5" (OuterVolumeSpecName: "kube-api-access-dpdt5") pod "eb2b0e36-7176-4cc7-8c51-4111fcd158de" (UID: "eb2b0e36-7176-4cc7-8c51-4111fcd158de"). InnerVolumeSpecName "kube-api-access-dpdt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072186 2839 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-cgroup\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072241 2839 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-kernel\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072254 2839 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-run\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072265 2839 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vvprc\" (UniqueName: \"kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-kube-api-access-vvprc\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072277 2839 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hubble-tls\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072288 2839 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-bpf-maps\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072299 2839 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-xtables-lock\") on 
node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072377 kubelet[2839]: I1113 08:34:37.072315 2839 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-etc-cni-netd\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072330 2839 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-lib-modules\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072345 2839 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-hostproc\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072364 2839 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-host-proc-sys-net\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072375 2839 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb2b0e36-7176-4cc7-8c51-4111fcd158de-cilium-config-path\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072385 2839 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cni-path\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072420 2839 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-cilium-config-path\") on node 
\"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072430 2839 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a50798f5-9e4a-48fb-994c-1d7c4cb873d2-clustermesh-secrets\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.072840 kubelet[2839]: I1113 08:34:37.072441 2839 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dpdt5\" (UniqueName: \"kubernetes.io/projected/eb2b0e36-7176-4cc7-8c51-4111fcd158de-kube-api-access-dpdt5\") on node \"ci-4152.0.0-e-2bf6127ade\" DevicePath \"\"" Nov 13 08:34:37.236552 kubelet[2839]: I1113 08:34:37.236500 2839 scope.go:117] "RemoveContainer" containerID="ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269" Nov 13 08:34:37.249562 containerd[1618]: time="2024-11-13T08:34:37.248997062Z" level=info msg="RemoveContainer for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\"" Nov 13 08:34:37.258579 containerd[1618]: time="2024-11-13T08:34:37.258480188Z" level=info msg="RemoveContainer for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" returns successfully" Nov 13 08:34:37.264738 kubelet[2839]: I1113 08:34:37.264664 2839 scope.go:117] "RemoveContainer" containerID="ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269" Nov 13 08:34:37.266176 containerd[1618]: time="2024-11-13T08:34:37.266096101Z" level=error msg="ContainerStatus for \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\": not found" Nov 13 08:34:37.292865 kubelet[2839]: E1113 08:34:37.292486 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\": not found" containerID="ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269" Nov 13 08:34:37.316635 kubelet[2839]: I1113 08:34:37.316580 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269"} err="failed to get container status \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca9bc897b1b15b7f5919fbf68fba28af5624b2455437bf0336f4f2e9da516269\": not found" Nov 13 08:34:37.316635 kubelet[2839]: I1113 08:34:37.316633 2839 scope.go:117] "RemoveContainer" containerID="89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a" Nov 13 08:34:37.321414 containerd[1618]: time="2024-11-13T08:34:37.320859423Z" level=info msg="RemoveContainer for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\"" Nov 13 08:34:37.330203 containerd[1618]: time="2024-11-13T08:34:37.329823445Z" level=info msg="RemoveContainer for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" returns successfully" Nov 13 08:34:37.330354 kubelet[2839]: I1113 08:34:37.330216 2839 scope.go:117] "RemoveContainer" containerID="8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526" Nov 13 08:34:37.333460 containerd[1618]: time="2024-11-13T08:34:37.333396246Z" level=info msg="RemoveContainer for \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\"" Nov 13 08:34:37.343751 containerd[1618]: time="2024-11-13T08:34:37.342409827Z" level=info msg="RemoveContainer for \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\" returns successfully" Nov 13 08:34:37.346150 kubelet[2839]: I1113 08:34:37.345437 2839 scope.go:117] "RemoveContainer" containerID="ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb" Nov 13 08:34:37.353092 
containerd[1618]: time="2024-11-13T08:34:37.353028570Z" level=info msg="RemoveContainer for \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\"" Nov 13 08:34:37.358828 containerd[1618]: time="2024-11-13T08:34:37.358628922Z" level=info msg="RemoveContainer for \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\" returns successfully" Nov 13 08:34:37.361828 kubelet[2839]: I1113 08:34:37.361781 2839 scope.go:117] "RemoveContainer" containerID="e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02" Nov 13 08:34:37.366646 containerd[1618]: time="2024-11-13T08:34:37.365254529Z" level=info msg="RemoveContainer for \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\"" Nov 13 08:34:37.373905 containerd[1618]: time="2024-11-13T08:34:37.373708845Z" level=info msg="RemoveContainer for \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\" returns successfully" Nov 13 08:34:37.374178 kubelet[2839]: I1113 08:34:37.374120 2839 scope.go:117] "RemoveContainer" containerID="ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601" Nov 13 08:34:37.376231 containerd[1618]: time="2024-11-13T08:34:37.376170870Z" level=info msg="RemoveContainer for \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\"" Nov 13 08:34:37.382557 containerd[1618]: time="2024-11-13T08:34:37.382474362Z" level=info msg="RemoveContainer for \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\" returns successfully" Nov 13 08:34:37.383740 kubelet[2839]: I1113 08:34:37.383099 2839 scope.go:117] "RemoveContainer" containerID="89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a" Nov 13 08:34:37.384043 containerd[1618]: time="2024-11-13T08:34:37.383588209Z" level=error msg="ContainerStatus for \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\": not found" Nov 13 08:34:37.384191 kubelet[2839]: E1113 08:34:37.383859 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\": not found" containerID="89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a" Nov 13 08:34:37.384191 kubelet[2839]: I1113 08:34:37.383902 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a"} err="failed to get container status \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\": rpc error: code = NotFound desc = an error occurred when try to find container \"89d3c5565acf74403a887e75ab8a6f27fd3f163aa3828e20aae25bbc9d66a84a\": not found" Nov 13 08:34:37.384191 kubelet[2839]: I1113 08:34:37.383922 2839 scope.go:117] "RemoveContainer" containerID="8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526" Nov 13 08:34:37.384524 kubelet[2839]: E1113 08:34:37.384356 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\": not found" containerID="8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526" Nov 13 08:34:37.384524 kubelet[2839]: I1113 08:34:37.384393 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526"} err="failed to get container status \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\": 
not found" Nov 13 08:34:37.384524 kubelet[2839]: I1113 08:34:37.384404 2839 scope.go:117] "RemoveContainer" containerID="ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb" Nov 13 08:34:37.384652 containerd[1618]: time="2024-11-13T08:34:37.384162250Z" level=error msg="ContainerStatus for \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a8e5f6c1c7e6697c105502497bd5ae08b6d7f61d650761936b913a58d585526\": not found" Nov 13 08:34:37.384652 containerd[1618]: time="2024-11-13T08:34:37.384600122Z" level=error msg="ContainerStatus for \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\": not found" Nov 13 08:34:37.384875 kubelet[2839]: E1113 08:34:37.384812 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\": not found" containerID="ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb" Nov 13 08:34:37.384875 kubelet[2839]: I1113 08:34:37.384859 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb"} err="failed to get container status \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee643648f1012d97d973f179d8ca12f2d94025ab4001d74ecbff1ab53cceb6cb\": not found" Nov 13 08:34:37.384875 kubelet[2839]: I1113 08:34:37.384873 2839 scope.go:117] "RemoveContainer" containerID="e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02" Nov 13 08:34:37.385600 kubelet[2839]: 
E1113 08:34:37.385245 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\": not found" containerID="e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02" Nov 13 08:34:37.385600 kubelet[2839]: I1113 08:34:37.385267 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02"} err="failed to get container status \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\": not found" Nov 13 08:34:37.385600 kubelet[2839]: I1113 08:34:37.385277 2839 scope.go:117] "RemoveContainer" containerID="ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601" Nov 13 08:34:37.386024 containerd[1618]: time="2024-11-13T08:34:37.385085860Z" level=error msg="ContainerStatus for \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9dfd934ceb771625cd2a7bf225a9350c0db5f6cf330d3fb2484af0f80ffdb02\": not found" Nov 13 08:34:37.386527 containerd[1618]: time="2024-11-13T08:34:37.386323430Z" level=error msg="ContainerStatus for \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\": not found" Nov 13 08:34:37.386676 kubelet[2839]: E1113 08:34:37.386643 2839 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\": not found" containerID="ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601" Nov 13 08:34:37.386937 kubelet[2839]: I1113 08:34:37.386884 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601"} err="failed to get container status \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce9765e82478dcfc20246c2c009076632f87395fc98577449993b856e9acd601\": not found" Nov 13 08:34:37.498336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9-rootfs.mount: Deactivated successfully. Nov 13 08:34:37.498547 systemd[1]: var-lib-kubelet-pods-eb2b0e36\x2d7176\x2d4cc7\x2d8c51\x2d4111fcd158de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpdt5.mount: Deactivated successfully. Nov 13 08:34:37.498906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113-rootfs.mount: Deactivated successfully. Nov 13 08:34:37.499118 systemd[1]: var-lib-kubelet-pods-a50798f5\x2d9e4a\x2d48fb\x2d994c\x2d1d7c4cb873d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvprc.mount: Deactivated successfully. Nov 13 08:34:37.499266 systemd[1]: var-lib-kubelet-pods-a50798f5\x2d9e4a\x2d48fb\x2d994c\x2d1d7c4cb873d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 13 08:34:37.499418 systemd[1]: var-lib-kubelet-pods-a50798f5\x2d9e4a\x2d48fb\x2d994c\x2d1d7c4cb873d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 13 08:34:38.391073 sshd[4474]: Connection closed by 139.178.89.65 port 39112 Nov 13 08:34:38.390135 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:38.404729 systemd[1]: Started sshd@25-159.223.193.8:22-139.178.89.65:57760.service - OpenSSH per-connection server daemon (139.178.89.65:57760). Nov 13 08:34:38.406756 systemd[1]: sshd@24-159.223.193.8:22-139.178.89.65:39112.service: Deactivated successfully. Nov 13 08:34:38.417302 systemd[1]: session-25.scope: Deactivated successfully. Nov 13 08:34:38.425068 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. Nov 13 08:34:38.427483 systemd-logind[1597]: Removed session 25. Nov 13 08:34:38.535185 sshd[4636]: Accepted publickey for core from 139.178.89.65 port 57760 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:38.541533 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:38.551664 systemd-logind[1597]: New session 26 of user core. Nov 13 08:34:38.556554 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 13 08:34:38.667179 kubelet[2839]: I1113 08:34:38.667000 2839 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" path="/var/lib/kubelet/pods/a50798f5-9e4a-48fb-994c-1d7c4cb873d2/volumes" Nov 13 08:34:38.671135 kubelet[2839]: I1113 08:34:38.669819 2839 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eb2b0e36-7176-4cc7-8c51-4111fcd158de" path="/var/lib/kubelet/pods/eb2b0e36-7176-4cc7-8c51-4111fcd158de/volumes" Nov 13 08:34:38.856798 kubelet[2839]: E1113 08:34:38.856720 2839 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 13 08:34:39.323170 sshd[4642]: Connection closed by 139.178.89.65 port 57760 Nov 13 08:34:39.324199 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:39.338338 systemd[1]: Started sshd@26-159.223.193.8:22-139.178.89.65:57766.service - OpenSSH per-connection server daemon (139.178.89.65:57766). Nov 13 08:34:39.339478 systemd[1]: sshd@25-159.223.193.8:22-139.178.89.65:57760.service: Deactivated successfully. Nov 13 08:34:39.356632 systemd[1]: session-26.scope: Deactivated successfully. Nov 13 08:34:39.367879 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit. Nov 13 08:34:39.372893 kubelet[2839]: I1113 08:34:39.371663 2839 topology_manager.go:215] "Topology Admit Handler" podUID="679e3547-2318-4818-af61-a10a508c014c" podNamespace="kube-system" podName="cilium-vdzx7" Nov 13 08:34:39.377217 systemd-logind[1597]: Removed session 26. 
Nov 13 08:34:39.387059 kubelet[2839]: E1113 08:34:39.386581 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" containerName="mount-cgroup" Nov 13 08:34:39.387317 kubelet[2839]: E1113 08:34:39.387294 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" containerName="apply-sysctl-overwrites" Nov 13 08:34:39.391152 kubelet[2839]: E1113 08:34:39.390552 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" containerName="mount-bpf-fs" Nov 13 08:34:39.391152 kubelet[2839]: E1113 08:34:39.390620 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" containerName="clean-cilium-state" Nov 13 08:34:39.391152 kubelet[2839]: E1113 08:34:39.390633 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" containerName="cilium-agent" Nov 13 08:34:39.391152 kubelet[2839]: E1113 08:34:39.390664 2839 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb2b0e36-7176-4cc7-8c51-4111fcd158de" containerName="cilium-operator" Nov 13 08:34:39.391152 kubelet[2839]: I1113 08:34:39.390771 2839 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2b0e36-7176-4cc7-8c51-4111fcd158de" containerName="cilium-operator" Nov 13 08:34:39.391152 kubelet[2839]: I1113 08:34:39.390785 2839 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50798f5-9e4a-48fb-994c-1d7c4cb873d2" containerName="cilium-agent" Nov 13 08:34:39.512076 sshd[4648]: Accepted publickey for core from 139.178.89.65 port 57766 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:34:39.514544 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:34:39.534479 systemd-logind[1597]: New session 27 of user core. 
Nov 13 08:34:39.540827 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 13 08:34:39.617455 sshd[4654]: Connection closed by 139.178.89.65 port 57766 Nov 13 08:34:39.620043 sshd-session[4648]: pam_unix(sshd:session): session closed for user core Nov 13 08:34:39.625424 kubelet[2839]: I1113 08:34:39.625368 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/679e3547-2318-4818-af61-a10a508c014c-cilium-config-path\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.628555 systemd[1]: Started sshd@27-159.223.193.8:22-139.178.89.65:57780.service - OpenSSH per-connection server daemon (139.178.89.65:57780). Nov 13 08:34:39.631862 kubelet[2839]: I1113 08:34:39.631041 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/679e3547-2318-4818-af61-a10a508c014c-hubble-tls\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.631862 kubelet[2839]: I1113 08:34:39.631160 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-cilium-cgroup\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.631862 kubelet[2839]: I1113 08:34:39.631215 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-bpf-maps\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.631862 kubelet[2839]: I1113 08:34:39.631254 2839 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-cni-path\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.631862 kubelet[2839]: I1113 08:34:39.631298 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-xtables-lock\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.631862 kubelet[2839]: I1113 08:34:39.631338 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/679e3547-2318-4818-af61-a10a508c014c-clustermesh-secrets\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633293 kubelet[2839]: I1113 08:34:39.631389 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-host-proc-sys-kernel\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633293 kubelet[2839]: I1113 08:34:39.631447 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/679e3547-2318-4818-af61-a10a508c014c-cilium-ipsec-secrets\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633293 kubelet[2839]: I1113 08:34:39.631490 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-etc-cni-netd\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633293 kubelet[2839]: I1113 08:34:39.631525 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-host-proc-sys-net\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633293 kubelet[2839]: I1113 08:34:39.631558 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vjwl\" (UniqueName: \"kubernetes.io/projected/679e3547-2318-4818-af61-a10a508c014c-kube-api-access-4vjwl\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633492 kubelet[2839]: I1113 08:34:39.631585 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-hostproc\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633492 kubelet[2839]: I1113 08:34:39.631612 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-cilium-run\") pod \"cilium-vdzx7\" (UID: \"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7" Nov 13 08:34:39.633492 kubelet[2839]: I1113 08:34:39.631690 2839 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/679e3547-2318-4818-af61-a10a508c014c-lib-modules\") pod \"cilium-vdzx7\" (UID: 
\"679e3547-2318-4818-af61-a10a508c014c\") " pod="kube-system/cilium-vdzx7"
Nov 13 08:34:39.636775 systemd[1]: sshd@26-159.223.193.8:22-139.178.89.65:57766.service: Deactivated successfully.
Nov 13 08:34:39.646441 systemd[1]: session-27.scope: Deactivated successfully.
Nov 13 08:34:39.648271 systemd-logind[1597]: Session 27 logged out. Waiting for processes to exit.
Nov 13 08:34:39.650552 systemd-logind[1597]: Removed session 27.
Nov 13 08:34:39.694209 sshd[4657]: Accepted publickey for core from 139.178.89.65 port 57780 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM
Nov 13 08:34:39.697325 sshd-session[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 13 08:34:39.705478 systemd-logind[1597]: New session 28 of user core.
Nov 13 08:34:39.715059 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 13 08:34:39.785599 kubelet[2839]: E1113 08:34:39.785280 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:39.792899 containerd[1618]: time="2024-11-13T08:34:39.786026409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdzx7,Uid:679e3547-2318-4818-af61-a10a508c014c,Namespace:kube-system,Attempt:0,}"
Nov 13 08:34:39.858980 containerd[1618]: time="2024-11-13T08:34:39.853456175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:34:39.858980 containerd[1618]: time="2024-11-13T08:34:39.853586932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:34:39.858980 containerd[1618]: time="2024-11-13T08:34:39.853611438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:34:39.858980 containerd[1618]: time="2024-11-13T08:34:39.853786182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:34:39.958641 containerd[1618]: time="2024-11-13T08:34:39.958575101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdzx7,Uid:679e3547-2318-4818-af61-a10a508c014c,Namespace:kube-system,Attempt:0,} returns sandbox id \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\""
Nov 13 08:34:39.962718 kubelet[2839]: E1113 08:34:39.961232 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:39.983870 containerd[1618]: time="2024-11-13T08:34:39.983090804Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 13 08:34:40.023013 containerd[1618]: time="2024-11-13T08:34:40.022572006Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b2d32b8e4f438167fb6f524d8fa953cac2950873bb2211f9bd393826caf9ca20\""
Nov 13 08:34:40.023809 containerd[1618]: time="2024-11-13T08:34:40.023743411Z" level=info msg="StartContainer for \"b2d32b8e4f438167fb6f524d8fa953cac2950873bb2211f9bd393826caf9ca20\""
Nov 13 08:34:40.105294 containerd[1618]: time="2024-11-13T08:34:40.105231040Z" level=info msg="StartContainer for \"b2d32b8e4f438167fb6f524d8fa953cac2950873bb2211f9bd393826caf9ca20\" returns successfully"
Nov 13 08:34:40.177405 containerd[1618]: time="2024-11-13T08:34:40.177317938Z" level=info msg="shim disconnected" id=b2d32b8e4f438167fb6f524d8fa953cac2950873bb2211f9bd393826caf9ca20 namespace=k8s.io
Nov 13 08:34:40.177405 containerd[1618]: time="2024-11-13T08:34:40.177401950Z" level=warning msg="cleaning up after shim disconnected" id=b2d32b8e4f438167fb6f524d8fa953cac2950873bb2211f9bd393826caf9ca20 namespace=k8s.io
Nov 13 08:34:40.177405 containerd[1618]: time="2024-11-13T08:34:40.177412337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:34:40.278220 kubelet[2839]: E1113 08:34:40.277971 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:40.285538 containerd[1618]: time="2024-11-13T08:34:40.285031145Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 13 08:34:40.312616 containerd[1618]: time="2024-11-13T08:34:40.312362209Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6304a0acfae38dd9ff5db3a0f2b44158d163ab57b6e9acec3f575484fe541f26\""
Nov 13 08:34:40.314330 containerd[1618]: time="2024-11-13T08:34:40.314296806Z" level=info msg="StartContainer for \"6304a0acfae38dd9ff5db3a0f2b44158d163ab57b6e9acec3f575484fe541f26\""
Nov 13 08:34:40.404286 containerd[1618]: time="2024-11-13T08:34:40.404209458Z" level=info msg="StartContainer for \"6304a0acfae38dd9ff5db3a0f2b44158d163ab57b6e9acec3f575484fe541f26\" returns successfully"
Nov 13 08:34:40.446843 containerd[1618]: time="2024-11-13T08:34:40.446746030Z" level=info msg="shim disconnected" id=6304a0acfae38dd9ff5db3a0f2b44158d163ab57b6e9acec3f575484fe541f26 namespace=k8s.io
Nov 13 08:34:40.446843 containerd[1618]: time="2024-11-13T08:34:40.446834452Z" level=warning msg="cleaning up after shim disconnected" id=6304a0acfae38dd9ff5db3a0f2b44158d163ab57b6e9acec3f575484fe541f26 namespace=k8s.io
Nov 13 08:34:40.446843 containerd[1618]: time="2024-11-13T08:34:40.446851379Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:34:40.664190 kubelet[2839]: E1113 08:34:40.663809 2839 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-ktxpp" podUID="34a9a34a-cded-4279-b18e-0a8812beb122"
Nov 13 08:34:41.139604 kubelet[2839]: I1113 08:34:41.139378 2839 setters.go:568] "Node became not ready" node="ci-4152.0.0-e-2bf6127ade" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-13T08:34:41Z","lastTransitionTime":"2024-11-13T08:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 13 08:34:41.283432 kubelet[2839]: E1113 08:34:41.283399 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:41.290467 containerd[1618]: time="2024-11-13T08:34:41.290406735Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 13 08:34:41.321504 containerd[1618]: time="2024-11-13T08:34:41.321419997Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7a935d4fdac627456cc488e12c151b8bc374976cd80c866fde977038d97d022\""
Nov 13 08:34:41.328037 containerd[1618]: time="2024-11-13T08:34:41.327836129Z" level=info msg="StartContainer for \"d7a935d4fdac627456cc488e12c151b8bc374976cd80c866fde977038d97d022\""
Nov 13 08:34:41.437593 containerd[1618]: time="2024-11-13T08:34:41.437270228Z" level=info msg="StartContainer for \"d7a935d4fdac627456cc488e12c151b8bc374976cd80c866fde977038d97d022\" returns successfully"
Nov 13 08:34:41.479770 containerd[1618]: time="2024-11-13T08:34:41.479402292Z" level=info msg="shim disconnected" id=d7a935d4fdac627456cc488e12c151b8bc374976cd80c866fde977038d97d022 namespace=k8s.io
Nov 13 08:34:41.479770 containerd[1618]: time="2024-11-13T08:34:41.479484812Z" level=warning msg="cleaning up after shim disconnected" id=d7a935d4fdac627456cc488e12c151b8bc374976cd80c866fde977038d97d022 namespace=k8s.io
Nov 13 08:34:41.479770 containerd[1618]: time="2024-11-13T08:34:41.479498065Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:34:41.744830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7a935d4fdac627456cc488e12c151b8bc374976cd80c866fde977038d97d022-rootfs.mount: Deactivated successfully.
Nov 13 08:34:42.289601 kubelet[2839]: E1113 08:34:42.289530 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:42.296643 containerd[1618]: time="2024-11-13T08:34:42.296368111Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 13 08:34:42.328522 containerd[1618]: time="2024-11-13T08:34:42.328365292Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bce46726282993b81072e1da090c12ab6fb6d1b7fd5eb95a0a42f8b06effb135\""
Nov 13 08:34:42.330748 containerd[1618]: time="2024-11-13T08:34:42.329521383Z" level=info msg="StartContainer for \"bce46726282993b81072e1da090c12ab6fb6d1b7fd5eb95a0a42f8b06effb135\""
Nov 13 08:34:42.336102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552661503.mount: Deactivated successfully.
Nov 13 08:34:42.434611 containerd[1618]: time="2024-11-13T08:34:42.434564724Z" level=info msg="StartContainer for \"bce46726282993b81072e1da090c12ab6fb6d1b7fd5eb95a0a42f8b06effb135\" returns successfully"
Nov 13 08:34:42.465763 containerd[1618]: time="2024-11-13T08:34:42.465680599Z" level=info msg="shim disconnected" id=bce46726282993b81072e1da090c12ab6fb6d1b7fd5eb95a0a42f8b06effb135 namespace=k8s.io
Nov 13 08:34:42.466508 containerd[1618]: time="2024-11-13T08:34:42.466142799Z" level=warning msg="cleaning up after shim disconnected" id=bce46726282993b81072e1da090c12ab6fb6d1b7fd5eb95a0a42f8b06effb135 namespace=k8s.io
Nov 13 08:34:42.466508 containerd[1618]: time="2024-11-13T08:34:42.466226467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 13 08:34:42.664119 kubelet[2839]: E1113 08:34:42.663585 2839 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-ktxpp" podUID="34a9a34a-cded-4279-b18e-0a8812beb122"
Nov 13 08:34:42.745343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bce46726282993b81072e1da090c12ab6fb6d1b7fd5eb95a0a42f8b06effb135-rootfs.mount: Deactivated successfully.
Nov 13 08:34:43.296582 kubelet[2839]: E1113 08:34:43.295697 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:43.305313 containerd[1618]: time="2024-11-13T08:34:43.305264149Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 13 08:34:43.340018 containerd[1618]: time="2024-11-13T08:34:43.339565287Z" level=info msg="CreateContainer within sandbox \"20a59c733c6e9ffa969711cf40096787888b6228a18879eccd0f0007a8252054\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad691d003b8a1177ad07cd99248cd6d984a97a966e2d92b47c4ce80ba4d1481f\""
Nov 13 08:34:43.341485 containerd[1618]: time="2024-11-13T08:34:43.340769874Z" level=info msg="StartContainer for \"ad691d003b8a1177ad07cd99248cd6d984a97a966e2d92b47c4ce80ba4d1481f\""
Nov 13 08:34:43.471034 containerd[1618]: time="2024-11-13T08:34:43.470643191Z" level=info msg="StartContainer for \"ad691d003b8a1177ad07cd99248cd6d984a97a966e2d92b47c4ce80ba4d1481f\" returns successfully"
Nov 13 08:34:44.008214 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 13 08:34:44.310005 kubelet[2839]: E1113 08:34:44.309684 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:44.342253 kubelet[2839]: I1113 08:34:44.342193 2839 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vdzx7" podStartSLOduration=5.342126457 podStartE2EDuration="5.342126457s" podCreationTimestamp="2024-11-13 08:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:34:44.341242287 +0000 UTC m=+115.876794370" watchObservedRunningTime="2024-11-13 08:34:44.342126457 +0000 UTC m=+115.877678522"
Nov 13 08:34:44.665269 kubelet[2839]: E1113 08:34:44.664049 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:45.665802 kubelet[2839]: E1113 08:34:45.665694 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:45.792004 kubelet[2839]: E1113 08:34:45.787591 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:47.854922 systemd-networkd[1222]: lxc_health: Link UP
Nov 13 08:34:47.873965 systemd-networkd[1222]: lxc_health: Gained carrier
Nov 13 08:34:48.643110 containerd[1618]: time="2024-11-13T08:34:48.642594466Z" level=info msg="StopPodSandbox for \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\""
Nov 13 08:34:48.643110 containerd[1618]: time="2024-11-13T08:34:48.642767855Z" level=info msg="TearDown network for sandbox \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" successfully"
Nov 13 08:34:48.643110 containerd[1618]: time="2024-11-13T08:34:48.642785247Z" level=info msg="StopPodSandbox for \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" returns successfully"
Nov 13 08:34:48.651201 containerd[1618]: time="2024-11-13T08:34:48.648266291Z" level=info msg="RemovePodSandbox for \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\""
Nov 13 08:34:48.651201 containerd[1618]: time="2024-11-13T08:34:48.648373312Z" level=info msg="Forcibly stopping sandbox \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\""
Nov 13 08:34:48.651201 containerd[1618]: time="2024-11-13T08:34:48.649152765Z" level=info msg="TearDown network for sandbox \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" successfully"
Nov 13 08:34:48.670386 containerd[1618]: time="2024-11-13T08:34:48.668201501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 13 08:34:48.670386 containerd[1618]: time="2024-11-13T08:34:48.668299280Z" level=info msg="RemovePodSandbox \"b8d82a106bb71d0174e9c112587991bad3cf6ee730238ecb7f5bd7d2c6f04ab9\" returns successfully"
Nov 13 08:34:48.675076 containerd[1618]: time="2024-11-13T08:34:48.672034692Z" level=info msg="StopPodSandbox for \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\""
Nov 13 08:34:48.675076 containerd[1618]: time="2024-11-13T08:34:48.672179349Z" level=info msg="TearDown network for sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" successfully"
Nov 13 08:34:48.675076 containerd[1618]: time="2024-11-13T08:34:48.672196412Z" level=info msg="StopPodSandbox for \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" returns successfully"
Nov 13 08:34:48.680666 containerd[1618]: time="2024-11-13T08:34:48.679466921Z" level=info msg="RemovePodSandbox for \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\""
Nov 13 08:34:48.680666 containerd[1618]: time="2024-11-13T08:34:48.679585853Z" level=info msg="Forcibly stopping sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\""
Nov 13 08:34:48.684122 containerd[1618]: time="2024-11-13T08:34:48.679763094Z" level=info msg="TearDown network for sandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" successfully"
Nov 13 08:34:48.702200 containerd[1618]: time="2024-11-13T08:34:48.701526467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 13 08:34:48.702200 containerd[1618]: time="2024-11-13T08:34:48.701650922Z" level=info msg="RemovePodSandbox \"3cbc6269ddf5b398bfcdb1acb299a36b0907b1c86a50e3844fe40f7a68778113\" returns successfully"
Nov 13 08:34:49.791002 kubelet[2839]: E1113 08:34:49.790082 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:49.826229 systemd-networkd[1222]: lxc_health: Gained IPv6LL
Nov 13 08:34:50.339655 kubelet[2839]: E1113 08:34:50.339596 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 13 08:34:53.636035 sshd[4665]: Connection closed by 139.178.89.65 port 57780
Nov 13 08:34:53.638230 sshd-session[4657]: pam_unix(sshd:session): session closed for user core
Nov 13 08:34:53.647867 systemd[1]: sshd@27-159.223.193.8:22-139.178.89.65:57780.service: Deactivated successfully.
Nov 13 08:34:53.669385 systemd-logind[1597]: Session 28 logged out. Waiting for processes to exit.
Nov 13 08:34:53.671846 systemd[1]: session-28.scope: Deactivated successfully.
Nov 13 08:34:53.676945 systemd-logind[1597]: Removed session 28.