Oct 9 01:07:44.892327 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 01:07:44.892348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:07:44.892356 kernel: BIOS-provided physical RAM map:
Oct 9 01:07:44.892362 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 01:07:44.892367 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 01:07:44.892372 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 01:07:44.892378 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Oct 9 01:07:44.892383 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Oct 9 01:07:44.892390 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 01:07:44.892395 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 9 01:07:44.892400 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 01:07:44.892405 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 01:07:44.892410 kernel: NX (Execute Disable) protection: active
Oct 9 01:07:44.892416 kernel: APIC: Static calls initialized
Oct 9 01:07:44.892424 kernel: SMBIOS 2.8 present.
Oct 9 01:07:44.892430 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Oct 9 01:07:44.892435 kernel: Hypervisor detected: KVM
Oct 9 01:07:44.892441 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 01:07:44.892446 kernel: kvm-clock: using sched offset of 2938840488 cycles
Oct 9 01:07:44.892452 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 01:07:44.892458 kernel: tsc: Detected 2445.404 MHz processor
Oct 9 01:07:44.892464 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 01:07:44.892470 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 01:07:44.892477 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Oct 9 01:07:44.892483 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 01:07:44.892489 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 01:07:44.892494 kernel: Using GB pages for direct mapping
Oct 9 01:07:44.892500 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:07:44.892505 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Oct 9 01:07:44.892511 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892516 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892522 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892530 kernel: ACPI: FACS 0x000000007CFE0000 000040
Oct 9 01:07:44.892535 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892541 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892546 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892552 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:07:44.892558 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Oct 9 01:07:44.892563 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Oct 9 01:07:44.892569 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Oct 9 01:07:44.892580 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Oct 9 01:07:44.892586 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Oct 9 01:07:44.892592 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Oct 9 01:07:44.892597 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Oct 9 01:07:44.892603 kernel: No NUMA configuration found
Oct 9 01:07:44.892609 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Oct 9 01:07:44.892617 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Oct 9 01:07:44.892623 kernel: Zone ranges:
Oct 9 01:07:44.892629 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 01:07:44.892635 kernel:   DMA32    [mem 0x0000000001000000-0x000000007cfdbfff]
Oct 9 01:07:44.892641 kernel:   Normal   empty
Oct 9 01:07:44.892647 kernel: Movable zone start for each node
Oct 9 01:07:44.892653 kernel: Early memory node ranges
Oct 9 01:07:44.892658 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 01:07:44.892664 kernel:   node   0: [mem 0x0000000000100000-0x000000007cfdbfff]
Oct 9 01:07:44.892670 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Oct 9 01:07:44.892678 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 01:07:44.892684 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 01:07:44.892689 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 9 01:07:44.892695 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 01:07:44.892701 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 01:07:44.892707 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 01:07:44.892713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 01:07:44.892722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 01:07:44.892733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 01:07:44.892747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 01:07:44.892757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 01:07:44.892768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 01:07:44.892778 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 01:07:44.892789 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 01:07:44.892799 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 01:07:44.892810 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 9 01:07:44.892820 kernel: Booting paravirtualized kernel on KVM
Oct 9 01:07:44.892831 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 01:07:44.892845 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 01:07:44.892883 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 01:07:44.892893 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 01:07:44.892903 kernel: pcpu-alloc: [0] 0 1
Oct 9 01:07:44.892913 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 01:07:44.892925 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:07:44.892936 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:07:44.892946 kernel: random: crng init done
Oct 9 01:07:44.892961 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:07:44.892971 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 01:07:44.892982 kernel: Fallback order for Node 0: 0
Oct 9 01:07:44.892992 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Oct 9 01:07:44.893003 kernel: Policy zone: DMA32
Oct 9 01:07:44.893013 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:07:44.893044 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Oct 9 01:07:44.893056 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 01:07:44.893064 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 01:07:44.893073 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 01:07:44.893079 kernel: Dynamic Preempt: voluntary
Oct 9 01:07:44.893085 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:07:44.893091 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:07:44.893097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 01:07:44.893104 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:07:44.893109 kernel: Rude variant of Tasks RCU enabled.
Oct 9 01:07:44.893115 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:07:44.893121 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:07:44.893129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 01:07:44.893145 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 01:07:44.893151 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:07:44.893157 kernel: Console: colour VGA+ 80x25
Oct 9 01:07:44.893163 kernel: printk: console [tty0] enabled
Oct 9 01:07:44.893168 kernel: printk: console [ttyS0] enabled
Oct 9 01:07:44.893174 kernel: ACPI: Core revision 20230628
Oct 9 01:07:44.893180 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 01:07:44.893186 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 01:07:44.893194 kernel: x2apic enabled
Oct 9 01:07:44.893200 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 01:07:44.893206 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 01:07:44.893212 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 01:07:44.893217 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Oct 9 01:07:44.893223 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 01:07:44.893229 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 01:07:44.893235 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 01:07:44.893241 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 01:07:44.893256 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 01:07:44.893262 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 01:07:44.893268 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 01:07:44.893276 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 01:07:44.893282 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 01:07:44.893307 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 01:07:44.893314 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 01:07:44.893320 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 01:07:44.893327 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 01:07:44.893333 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 01:07:44.893339 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 01:07:44.893348 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 01:07:44.893354 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 01:07:44.893360 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 01:07:44.893366 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 01:07:44.893373 kernel: Freeing SMP alternatives memory: 32K
Oct 9 01:07:44.893383 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:07:44.893395 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:07:44.893407 kernel: landlock: Up and running.
Oct 9 01:07:44.893419 kernel: SELinux: Initializing.
Oct 9 01:07:44.893430 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 01:07:44.893442 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 01:07:44.893453 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 01:07:44.893465 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:07:44.893477 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:07:44.893494 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:07:44.893506 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 01:07:44.893517 kernel: ... version:                0
Oct 9 01:07:44.893529 kernel: ... bit width:              48
Oct 9 01:07:44.893540 kernel: ... generic registers:      6
Oct 9 01:07:44.893551 kernel: ... value mask:             0000ffffffffffff
Oct 9 01:07:44.893562 kernel: ... max period:             00007fffffffffff
Oct 9 01:07:44.893573 kernel: ... fixed-purpose events:   0
Oct 9 01:07:44.893583 kernel: ... event mask:             000000000000003f
Oct 9 01:07:44.893598 kernel: signal: max sigframe size: 1776
Oct 9 01:07:44.893642 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:07:44.893654 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:07:44.893690 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:07:44.893701 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 01:07:44.893713 kernel: .... node #0, CPUs: #1
Oct 9 01:07:44.893723 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 01:07:44.893734 kernel: smpboot: Max logical packages: 1
Oct 9 01:07:44.893745 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Oct 9 01:07:44.893761 kernel: devtmpfs: initialized
Oct 9 01:07:44.893772 kernel: x86/mm: Memory block size: 128MB
Oct 9 01:07:44.893784 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:07:44.893796 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 01:07:44.893807 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:07:44.893818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:07:44.893830 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:07:44.893842 kernel: audit: type=2000 audit(1728436063.157:1): state=initialized audit_enabled=0 res=1
Oct 9 01:07:44.893855 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:07:44.893871 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 01:07:44.893882 kernel: cpuidle: using governor menu
Oct 9 01:07:44.893893 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:07:44.893905 kernel: dca service started, version 1.12.1
Oct 9 01:07:44.893916 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 01:07:44.893927 kernel: PCI: Using configuration type 1 for base access
Oct 9 01:07:44.893938 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 01:07:44.893950 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 01:07:44.893961 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 01:07:44.893978 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:07:44.893989 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:07:44.894001 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:07:44.894012 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:07:44.894041 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:07:44.894053 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:07:44.894064 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:07:44.894075 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 01:07:44.894085 kernel: ACPI: Interpreter enabled
Oct 9 01:07:44.894101 kernel: ACPI: PM: (supports S0 S5)
Oct 9 01:07:44.894112 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 01:07:44.894123 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 01:07:44.894133 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 01:07:44.894144 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 01:07:44.894155 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:07:44.894420 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:07:44.894606 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 01:07:44.894848 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 01:07:44.894867 kernel: PCI host bridge to bus 0000:00
Oct 9 01:07:44.895090 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 01:07:44.895285 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 01:07:44.895406 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 01:07:44.895505 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Oct 9 01:07:44.895601 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 01:07:44.895785 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 9 01:07:44.895930 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:07:44.896159 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 01:07:44.896348 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Oct 9 01:07:44.896497 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Oct 9 01:07:44.896606 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Oct 9 01:07:44.896717 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Oct 9 01:07:44.896821 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Oct 9 01:07:44.896926 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 01:07:44.897625 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.897777 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Oct 9 01:07:44.899149 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.899292 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Oct 9 01:07:44.899413 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.899542 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Oct 9 01:07:44.899656 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.899761 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Oct 9 01:07:44.899872 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.899977 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Oct 9 01:07:44.901241 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.901355 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Oct 9 01:07:44.901467 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.901602 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Oct 9 01:07:44.901720 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.901827 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Oct 9 01:07:44.901974 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 9 01:07:44.902106 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Oct 9 01:07:44.902219 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 01:07:44.902323 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 01:07:44.902434 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 01:07:44.902537 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Oct 9 01:07:44.902645 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Oct 9 01:07:44.902771 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 01:07:44.902876 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 9 01:07:44.902991 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 9 01:07:44.905196 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Oct 9 01:07:44.905348 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 9 01:07:44.905504 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Oct 9 01:07:44.905624 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 9 01:07:44.905735 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 9 01:07:44.905889 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 01:07:44.906018 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 9 01:07:44.908181 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Oct 9 01:07:44.908298 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 9 01:07:44.908412 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 9 01:07:44.908517 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 01:07:44.908635 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 9 01:07:44.908746 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Oct 9 01:07:44.908855 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Oct 9 01:07:44.908959 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 9 01:07:44.909105 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 9 01:07:44.909224 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 01:07:44.909347 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 9 01:07:44.909458 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 9 01:07:44.909563 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 9 01:07:44.909666 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 9 01:07:44.909770 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 01:07:44.909888 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 9 01:07:44.910012 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Oct 9 01:07:44.911182 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 9 01:07:44.911291 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 9 01:07:44.911396 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 01:07:44.911514 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 9 01:07:44.911625 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Oct 9 01:07:44.911734 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Oct 9 01:07:44.911838 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 9 01:07:44.911949 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 9 01:07:44.913111 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 01:07:44.913124 kernel: acpiphp: Slot [0] registered
Oct 9 01:07:44.913258 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 9 01:07:44.913375 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Oct 9 01:07:44.913487 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Oct 9 01:07:44.913596 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Oct 9 01:07:44.913708 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 9 01:07:44.913813 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 9 01:07:44.913918 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 01:07:44.913926 kernel: acpiphp: Slot [0-2] registered
Oct 9 01:07:44.914051 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 9 01:07:44.914164 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 9 01:07:44.914269 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 01:07:44.914278 kernel: acpiphp: Slot [0-3] registered
Oct 9 01:07:44.914380 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 9 01:07:44.914489 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 9 01:07:44.914593 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 01:07:44.914601 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 01:07:44.914608 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 01:07:44.914615 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 01:07:44.914621 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 01:07:44.914627 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 01:07:44.914633 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 01:07:44.914643 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 01:07:44.914649 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 01:07:44.914655 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 01:07:44.914661 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 01:07:44.914667 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 01:07:44.914673 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 01:07:44.914694 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 01:07:44.914705 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 01:07:44.914715 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 01:07:44.914728 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 01:07:44.914738 kernel: iommu: Default domain type: Translated
Oct 9 01:07:44.914744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 01:07:44.914750 kernel: PCI: Using ACPI for IRQ routing
Oct 9 01:07:44.914756 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 01:07:44.914763 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 01:07:44.914769 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Oct 9 01:07:44.914882 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 01:07:44.914988 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 01:07:44.915444 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 01:07:44.915456 kernel: vgaarb: loaded
Oct 9 01:07:44.915463 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 01:07:44.915470 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 01:07:44.915476 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 01:07:44.915482 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:07:44.915489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:07:44.915495 kernel: pnp: PnP ACPI init
Oct 9 01:07:44.915612 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 01:07:44.915634 kernel: pnp: PnP ACPI: found 5 devices
Oct 9 01:07:44.915641 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 01:07:44.915648 kernel: NET: Registered PF_INET protocol family
Oct 9 01:07:44.915654 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:07:44.915661 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 01:07:44.915667 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:07:44.915673 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 01:07:44.915680 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 01:07:44.915688 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 01:07:44.915695 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 01:07:44.915701 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 01:07:44.915707 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:07:44.915714 kernel: NET: Registered PF_XDP protocol family
Oct 9 01:07:44.915824 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 9 01:07:44.915929 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 9 01:07:44.916050 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 9 01:07:44.918148 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Oct 9 01:07:44.918260 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Oct 9 01:07:44.918371 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Oct 9 01:07:44.918486 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 9 01:07:44.918595 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 9 01:07:44.918720 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 01:07:44.918829 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 9 01:07:44.918939 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 9 01:07:44.921101 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 01:07:44.921217 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 9 01:07:44.921322 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 9 01:07:44.921426 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 01:07:44.921542 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 9 01:07:44.921646 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 9 01:07:44.921754 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 01:07:44.921864 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 9 01:07:44.921984 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 9 01:07:44.922117 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 01:07:44.922221 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 9 01:07:44.922325 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 9 01:07:44.922427 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 01:07:44.922544 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 9 01:07:44.922649 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Oct 9 01:07:44.922798 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 9 01:07:44.922904 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 01:07:44.923013 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 9 01:07:44.925160 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Oct 9 01:07:44.925267 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 9 01:07:44.925372 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 01:07:44.925482 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 9 01:07:44.925587 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Oct 9 01:07:44.925691 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 9 01:07:44.925800 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 01:07:44.925901 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 01:07:44.925997 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 01:07:44.926155 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 01:07:44.926263 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Oct 9 01:07:44.926360 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 01:07:44.926454 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 9 01:07:44.926563 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 9 01:07:44.926664 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 01:07:44.926793 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 9 01:07:44.926913 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 01:07:44.929056 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 9 01:07:44.929170 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 01:07:44.929291 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 9 01:07:44.929394 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 01:07:44.929500 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 9 01:07:44.929606 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 01:07:44.929713 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 9 01:07:44.929814 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 01:07:44.929920 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Oct 9 01:07:44.930035 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 9 01:07:44.930149 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 01:07:44.930287 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Oct 9 01:07:44.930411 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 9 01:07:44.930511 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 01:07:44.930622 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Oct 9 01:07:44.930737 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 9 01:07:44.930837 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 01:07:44.930847 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 01:07:44.930854 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:07:44.930864 kernel: Initialise system trusted keyrings
Oct 9 01:07:44.930873 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 01:07:44.930880 kernel: Key type asymmetric registered
Oct 9 01:07:44.930887 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:07:44.930894 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 01:07:44.930900 kernel: io scheduler mq-deadline registered
Oct 9 01:07:44.930907 kernel: io scheduler kyber registered
Oct 9 01:07:44.930913 kernel: io scheduler bfq registered
Oct 9 01:07:44.931083 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 9 01:07:44.931199 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 9 01:07:44.931304 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 9 01:07:44.931408 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 9 01:07:44.931511 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 9 01:07:44.931613 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 9 01:07:44.931731 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 9 01:07:44.931837 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 9 01:07:44.931942 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 9 01:07:44.932086 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 9 01:07:44.932199 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 9 01:07:44.932305 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 9 01:07:44.932422 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 9 01:07:44.932529 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 9 01:07:44.932635 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 9 01:07:44.932740 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 9 01:07:44.932749 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 01:07:44.932858 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 9 01:07:44.932961 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 9 01:07:44.932971 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 01:07:44.932978 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 9 01:07:44.932984 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:07:44.932991 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 01:07:44.932997 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 01:07:44.933004 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 01:07:44.933010 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 01:07:44.933216 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 01:07:44.933231 kernel: input: AT Translated Set 2 keyboard as
/devices/platform/i8042/serio0/input/input0 Oct 9 01:07:44.933335 kernel: rtc_cmos 00:03: registered as rtc0 Oct 9 01:07:44.933433 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T01:07:44 UTC (1728436064) Oct 9 01:07:44.933530 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 01:07:44.933539 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 01:07:44.933546 kernel: NET: Registered PF_INET6 protocol family Oct 9 01:07:44.933553 kernel: Segment Routing with IPv6 Oct 9 01:07:44.933564 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 01:07:44.933571 kernel: NET: Registered PF_PACKET protocol family Oct 9 01:07:44.933578 kernel: Key type dns_resolver registered Oct 9 01:07:44.933585 kernel: IPI shorthand broadcast: enabled Oct 9 01:07:44.933591 kernel: sched_clock: Marking stable (1091006579, 144197372)->(1243608079, -8404128) Oct 9 01:07:44.933598 kernel: registered taskstats version 1 Oct 9 01:07:44.933605 kernel: Loading compiled-in X.509 certificates Oct 9 01:07:44.933612 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 01:07:44.933618 kernel: Key type .fscrypt registered Oct 9 01:07:44.933627 kernel: Key type fscrypt-provisioning registered Oct 9 01:07:44.933634 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 01:07:44.933640 kernel: ima: Allocated hash algorithm: sha1 Oct 9 01:07:44.933647 kernel: ima: No architecture policies found Oct 9 01:07:44.933654 kernel: clk: Disabling unused clocks Oct 9 01:07:44.933660 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 01:07:44.933667 kernel: Write protecting the kernel read-only data: 36864k Oct 9 01:07:44.933674 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 01:07:44.933680 kernel: Run /init as init process Oct 9 01:07:44.933690 kernel: with arguments: Oct 9 01:07:44.933697 kernel: /init Oct 9 01:07:44.933703 kernel: with environment: Oct 9 01:07:44.933710 kernel: HOME=/ Oct 9 01:07:44.933716 kernel: TERM=linux Oct 9 01:07:44.933723 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 01:07:44.933731 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:07:44.933740 systemd[1]: Detected virtualization kvm. Oct 9 01:07:44.933750 systemd[1]: Detected architecture x86-64. Oct 9 01:07:44.933756 systemd[1]: Running in initrd. Oct 9 01:07:44.933763 systemd[1]: No hostname configured, using default hostname. Oct 9 01:07:44.933770 systemd[1]: Hostname set to . Oct 9 01:07:44.933777 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:07:44.933784 systemd[1]: Queued start job for default target initrd.target. Oct 9 01:07:44.933791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:07:44.933798 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:07:44.933808 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 9 01:07:44.933815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:07:44.933822 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 01:07:44.933829 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 01:07:44.933837 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 01:07:44.933845 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 01:07:44.933854 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:07:44.933861 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:07:44.933868 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:07:44.933875 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:07:44.933882 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:07:44.933889 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:07:44.933896 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:07:44.933903 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:07:44.933910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:07:44.933919 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 01:07:44.933926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:07:44.933936 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:07:44.933949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:07:44.933959 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:07:44.933966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 9 01:07:44.933973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:07:44.933980 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 01:07:44.933989 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 01:07:44.933998 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:07:44.934005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:07:44.934012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:44.934062 systemd-journald[187]: Collecting audit messages is disabled. Oct 9 01:07:44.934085 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 01:07:44.934092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:07:44.934100 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 01:07:44.934107 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:07:44.934117 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 01:07:44.934124 kernel: Bridge firewalling registered Oct 9 01:07:44.934133 systemd-journald[187]: Journal started Oct 9 01:07:44.934148 systemd-journald[187]: Runtime Journal (/run/log/journal/1f5c2480fd4e4a50ad98ada5609f2346) is 4.8M, max 38.4M, 33.6M free. Oct 9 01:07:44.898398 systemd-modules-load[188]: Inserted module 'overlay' Oct 9 01:07:44.930324 systemd-modules-load[188]: Inserted module 'br_netfilter' Oct 9 01:07:44.971042 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:07:44.971302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:07:44.972651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 9 01:07:44.973912 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:07:44.980136 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:07:44.982152 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:07:44.984177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:07:44.991174 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:07:45.003779 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:07:45.005078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:07:45.005656 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:07:45.007384 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:45.012147 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 01:07:45.016650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:07:45.027043 dracut-cmdline[220]: dracut-dracut-053 Oct 9 01:07:45.027607 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 01:07:45.048409 systemd-resolved[222]: Positive Trust Anchors: Oct 9 01:07:45.049105 systemd-resolved[222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:07:45.049133 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:07:45.055335 systemd-resolved[222]: Defaulting to hostname 'linux'. Oct 9 01:07:45.056330 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:07:45.056892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:07:45.091067 kernel: SCSI subsystem initialized Oct 9 01:07:45.100044 kernel: Loading iSCSI transport class v2.0-870. Oct 9 01:07:45.109091 kernel: iscsi: registered transport (tcp) Oct 9 01:07:45.127058 kernel: iscsi: registered transport (qla4xxx) Oct 9 01:07:45.127096 kernel: QLogic iSCSI HBA Driver Oct 9 01:07:45.164125 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 01:07:45.169147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 01:07:45.191649 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 9 01:07:45.191688 kernel: device-mapper: uevent: version 1.0.3 Oct 9 01:07:45.192171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 01:07:45.230047 kernel: raid6: avx2x4 gen() 37208 MB/s Oct 9 01:07:45.247052 kernel: raid6: avx2x2 gen() 33174 MB/s Oct 9 01:07:45.264152 kernel: raid6: avx2x1 gen() 28064 MB/s Oct 9 01:07:45.264187 kernel: raid6: using algorithm avx2x4 gen() 37208 MB/s Oct 9 01:07:45.282241 kernel: raid6: .... xor() 4731 MB/s, rmw enabled Oct 9 01:07:45.282264 kernel: raid6: using avx2x2 recovery algorithm Oct 9 01:07:45.301053 kernel: xor: automatically using best checksumming function avx Oct 9 01:07:45.425068 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 01:07:45.435761 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:07:45.441158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:07:45.458131 systemd-udevd[405]: Using default interface naming scheme 'v255'. Oct 9 01:07:45.461980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:07:45.470247 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 01:07:45.482653 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Oct 9 01:07:45.510345 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:07:45.515187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:07:45.586419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:07:45.594241 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 01:07:45.606769 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 01:07:45.608619 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 9 01:07:45.609980 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:07:45.610520 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:07:45.619216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 01:07:45.632359 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:07:45.700270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:07:45.707635 kernel: scsi host0: Virtio SCSI HBA Oct 9 01:07:45.700386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:45.714728 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 01:07:45.714932 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 9 01:07:45.711951 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:07:45.712886 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:07:45.713013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:45.714131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:45.725080 kernel: libata version 3.00 loaded. Oct 9 01:07:45.728259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:45.741390 kernel: ACPI: bus type USB registered Oct 9 01:07:45.741432 kernel: usbcore: registered new interface driver usbfs Oct 9 01:07:45.741443 kernel: usbcore: registered new interface driver hub Oct 9 01:07:45.744050 kernel: usbcore: registered new device driver usb Oct 9 01:07:45.758051 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 9 01:07:45.758099 kernel: AES CTR mode by8 optimization enabled Oct 9 01:07:45.794047 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 01:07:45.794255 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 01:07:45.796257 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 01:07:45.796437 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 01:07:45.799456 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 9 01:07:45.799638 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 9 01:07:45.799817 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 9 01:07:45.805133 kernel: sd 0:0:0:0: Power-on or device reset occurred Oct 9 01:07:45.805845 kernel: scsi host1: ahci Oct 9 01:07:45.806868 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 9 01:07:45.807418 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 9 01:07:45.809142 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Oct 9 01:07:45.809283 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 9 01:07:45.809456 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 9 01:07:45.809631 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 9 01:07:45.809763 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 9 01:07:45.809889 kernel: hub 1-0:1.0: USB hub found Oct 9 01:07:45.810765 kernel: hub 1-0:1.0: 4 ports detected Oct 9 01:07:45.810907 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 9 01:07:45.811588 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 01:07:45.811606 kernel: GPT:17805311 != 80003071 Oct 9 01:07:45.811615 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 01:07:45.811623 kernel: GPT:17805311 != 80003071 Oct 9 01:07:45.811631 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 9 01:07:45.811639 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:07:45.811647 kernel: hub 2-0:1.0: USB hub found Oct 9 01:07:45.811794 kernel: hub 2-0:1.0: 4 ports detected Oct 9 01:07:45.811922 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 9 01:07:45.812991 kernel: scsi host2: ahci Oct 9 01:07:45.813310 kernel: scsi host3: ahci Oct 9 01:07:45.813726 kernel: scsi host4: ahci Oct 9 01:07:45.815057 kernel: scsi host5: ahci Oct 9 01:07:45.815208 kernel: scsi host6: ahci Oct 9 01:07:45.815341 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Oct 9 01:07:45.815352 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Oct 9 01:07:45.815365 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Oct 9 01:07:45.815396 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Oct 9 01:07:45.815405 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Oct 9 01:07:45.815413 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Oct 9 01:07:45.873971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 9 01:07:45.881495 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (464) Oct 9 01:07:45.881516 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (457) Oct 9 01:07:45.882582 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 9 01:07:45.883337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:45.891427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:07:45.896918 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Oct 9 01:07:45.901762 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 9 01:07:45.908856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 9 01:07:45.917187 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 01:07:45.918666 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:45.921976 disk-uuid[576]: Primary Header is updated. Oct 9 01:07:45.921976 disk-uuid[576]: Secondary Entries is updated. Oct 9 01:07:45.921976 disk-uuid[576]: Secondary Header is updated. Oct 9 01:07:45.930044 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:07:45.936049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:07:45.945057 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:07:46.053160 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 9 01:07:46.136051 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 01:07:46.136107 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 9 01:07:46.136124 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 9 01:07:46.136133 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 01:07:46.136141 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 01:07:46.139808 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 01:07:46.139848 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 9 01:07:46.139863 kernel: ata1.00: applying bridge limits Oct 9 01:07:46.140969 kernel: ata1.00: configured for UDMA/100 Oct 9 01:07:46.143145 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 01:07:46.180792 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 9 01:07:46.181094 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 01:07:46.190053 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 9 01:07:46.194052 kernel: hid: raw HID events driver (C) Jiri Kosina 
Oct 9 01:07:46.199107 kernel: usbcore: registered new interface driver usbhid Oct 9 01:07:46.199142 kernel: usbhid: USB HID core driver Oct 9 01:07:46.204333 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Oct 9 01:07:46.204365 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 9 01:07:46.944708 disk-uuid[577]: The operation has completed successfully. Oct 9 01:07:46.945477 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:07:46.995225 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 01:07:46.995383 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 01:07:47.020211 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 01:07:47.025121 sh[599]: Success Oct 9 01:07:47.040239 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 9 01:07:47.088795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 01:07:47.097112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 01:07:47.097771 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 01:07:47.115644 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 01:07:47.115705 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:07:47.115725 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 01:07:47.118537 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 01:07:47.118570 kernel: BTRFS info (device dm-0): using free space tree Oct 9 01:07:47.127052 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 9 01:07:47.128819 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Oct 9 01:07:47.129818 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 01:07:47.134142 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 01:07:47.136205 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 01:07:47.150321 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:07:47.150365 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 01:07:47.150375 kernel: BTRFS info (device sda6): using free space tree Oct 9 01:07:47.155459 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 9 01:07:47.155485 kernel: BTRFS info (device sda6): auto enabling async discard Oct 9 01:07:47.171047 kernel: BTRFS info (device sda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 01:07:47.171350 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 01:07:47.176684 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 01:07:47.184181 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 01:07:47.241198 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:07:47.250719 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:07:47.257365 ignition[703]: Ignition 2.19.0 Oct 9 01:07:47.257945 ignition[703]: Stage: fetch-offline Oct 9 01:07:47.257983 ignition[703]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:47.257993 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:07:47.258236 ignition[703]: parsed url from cmdline: "" Oct 9 01:07:47.261622 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Oct 9 01:07:47.258240 ignition[703]: no config URL provided Oct 9 01:07:47.258245 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:07:47.258254 ignition[703]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:07:47.258259 ignition[703]: failed to fetch config: resource requires networking Oct 9 01:07:47.258405 ignition[703]: Ignition finished successfully Oct 9 01:07:47.272733 systemd-networkd[784]: lo: Link UP Oct 9 01:07:47.272743 systemd-networkd[784]: lo: Gained carrier Oct 9 01:07:47.275148 systemd-networkd[784]: Enumeration completed Oct 9 01:07:47.275344 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:07:47.276181 systemd[1]: Reached target network.target - Network. Oct 9 01:07:47.276272 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:47.276276 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:07:47.277132 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:47.277137 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:07:47.277680 systemd-networkd[784]: eth0: Link UP Oct 9 01:07:47.277683 systemd-networkd[784]: eth0: Gained carrier Oct 9 01:07:47.277690 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:47.281381 systemd-networkd[784]: eth1: Link UP Oct 9 01:07:47.281385 systemd-networkd[784]: eth1: Gained carrier Oct 9 01:07:47.281392 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:47.284178 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 9 01:07:47.295987 ignition[788]: Ignition 2.19.0 Oct 9 01:07:47.295997 ignition[788]: Stage: fetch Oct 9 01:07:47.296865 ignition[788]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:47.296876 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:07:47.296956 ignition[788]: parsed url from cmdline: "" Oct 9 01:07:47.296960 ignition[788]: no config URL provided Oct 9 01:07:47.296965 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:07:47.296973 ignition[788]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:07:47.296992 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 9 01:07:47.297167 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 9 01:07:47.329086 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:07:47.388079 systemd-networkd[784]: eth0: DHCPv4 address 188.245.175.223/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 9 01:07:47.497792 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 9 01:07:47.502361 ignition[788]: GET result: OK Oct 9 01:07:47.502439 ignition[788]: parsing config with SHA512: 85c24bd2e6a50a1e9e36fff1f9b95ed3183d8a8316288525ad9663601cb51d538e04fb50d2bb3a6301b207e7c58af5fb6b7afe60217a1a2113c6ef69bf4088f7 Oct 9 01:07:47.505760 unknown[788]: fetched base config from "system" Oct 9 01:07:47.505774 unknown[788]: fetched base config from "system" Oct 9 01:07:47.506004 ignition[788]: fetch: fetch complete Oct 9 01:07:47.505780 unknown[788]: fetched user config from "hetzner" Oct 9 01:07:47.506008 ignition[788]: fetch: fetch passed Oct 9 01:07:47.506064 ignition[788]: Ignition finished successfully Oct 9 01:07:47.509347 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 01:07:47.515158 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 9 01:07:47.529621 ignition[795]: Ignition 2.19.0
Oct 9 01:07:47.529633 ignition[795]: Stage: kargs
Oct 9 01:07:47.529801 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:07:47.529813 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:07:47.530722 ignition[795]: kargs: kargs passed
Oct 9 01:07:47.532186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 01:07:47.530770 ignition[795]: Ignition finished successfully
Oct 9 01:07:47.539279 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 01:07:47.550465 ignition[802]: Ignition 2.19.0
Oct 9 01:07:47.551129 ignition[802]: Stage: disks
Oct 9 01:07:47.551267 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:07:47.551278 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:07:47.552735 ignition[802]: disks: disks passed
Oct 9 01:07:47.554076 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 01:07:47.552807 ignition[802]: Ignition finished successfully
Oct 9 01:07:47.555129 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 01:07:47.555647 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 01:07:47.556618 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:07:47.557681 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:07:47.558836 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:07:47.567201 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 01:07:47.581504 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 9 01:07:47.583608 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 01:07:47.589139 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 01:07:47.666055 kernel: EXT4-fs (sda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 01:07:47.666291 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 01:07:47.667220 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:07:47.673077 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:07:47.675126 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 01:07:47.677300 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 01:07:47.679091 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 01:07:47.680136 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:07:47.686076 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818)
Oct 9 01:07:47.687276 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 01:07:47.696103 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:07:47.696125 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:07:47.696135 kernel: BTRFS info (device sda6): using free space tree
Oct 9 01:07:47.696144 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 01:07:47.696152 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 01:07:47.697831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:07:47.704895 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 01:07:47.740264 coreos-metadata[820]: Oct 09 01:07:47.740 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 9 01:07:47.742290 coreos-metadata[820]: Oct 09 01:07:47.741 INFO Fetch successful
Oct 9 01:07:47.743113 coreos-metadata[820]: Oct 09 01:07:47.743 INFO wrote hostname ci-4116-0-0-2-50096a0261 to /sysroot/etc/hostname
Oct 9 01:07:47.744694 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 01:07:47.746255 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 01:07:47.750519 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory
Oct 9 01:07:47.755067 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 01:07:47.759005 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 01:07:47.842370 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 01:07:47.849164 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 01:07:47.851233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 01:07:47.859049 kernel: BTRFS info (device sda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:07:47.880949 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 01:07:47.881936 ignition[939]: INFO : Ignition 2.19.0
Oct 9 01:07:47.881936 ignition[939]: INFO : Stage: mount
Oct 9 01:07:47.881936 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:07:47.881936 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:07:47.884546 ignition[939]: INFO : mount: mount passed
Oct 9 01:07:47.884546 ignition[939]: INFO : Ignition finished successfully
Oct 9 01:07:47.883877 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 01:07:47.889118 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 01:07:48.113511 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 01:07:48.118176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:07:48.131556 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (950)
Oct 9 01:07:48.131589 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:07:48.134195 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:07:48.136537 kernel: BTRFS info (device sda6): using free space tree
Oct 9 01:07:48.143065 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 01:07:48.143089 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 01:07:48.145824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:07:48.164922 ignition[967]: INFO : Ignition 2.19.0
Oct 9 01:07:48.164922 ignition[967]: INFO : Stage: files
Oct 9 01:07:48.166534 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:07:48.166534 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:07:48.166534 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:07:48.169218 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:07:48.169218 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:07:48.171086 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:07:48.171086 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:07:48.173070 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:07:48.173070 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:07:48.173070 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 01:07:48.171114 unknown[967]: wrote ssh authorized keys file for user: core
Oct 9 01:07:48.246062 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 01:07:48.425202 systemd-networkd[784]: eth1: Gained IPv6LL
Oct 9 01:07:48.428987 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:07:48.430217 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:07:48.438158 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:07:48.438158 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:07:48.438158 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:07:48.438158 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:07:48.438158 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:07:48.438158 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 01:07:48.681276 systemd-networkd[784]: eth0: Gained IPv6LL
Oct 9 01:07:48.984012 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 01:07:49.250925 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:07:49.250925 ignition[967]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:07:49.254240 ignition[967]: INFO : files: files passed
Oct 9 01:07:49.254240 ignition[967]: INFO : Ignition finished successfully
Oct 9 01:07:49.254868 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:07:49.262158 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:07:49.266108 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:07:49.268988 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:07:49.269129 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:07:49.279076 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:07:49.279076 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:07:49.281097 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:07:49.283581 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:07:49.284247 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:07:49.291196 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:07:49.311865 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:07:49.311986 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:07:49.313593 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:07:49.314239 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:07:49.315269 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:07:49.321210 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:07:49.332945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:07:49.337236 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:07:49.346793 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:07:49.347965 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:07:49.349130 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:07:49.349637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:07:49.349733 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:07:49.350986 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:07:49.351709 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:07:49.352729 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:07:49.353682 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:07:49.354715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:07:49.355730 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:07:49.356836 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:07:49.357885 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:07:49.358931 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:07:49.359954 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:07:49.360910 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:07:49.361004 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:07:49.362172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:07:49.362852 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:07:49.363757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:07:49.364110 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:07:49.364983 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:07:49.365100 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:07:49.366673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:07:49.366796 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:07:49.367450 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:07:49.367585 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:07:49.368433 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 01:07:49.368529 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 01:07:49.380546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:07:49.383195 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:07:49.383658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:07:49.383800 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:07:49.385799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:07:49.385930 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:07:49.392762 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:07:49.392875 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:07:49.399372 ignition[1020]: INFO : Ignition 2.19.0
Oct 9 01:07:49.400691 ignition[1020]: INFO : Stage: umount
Oct 9 01:07:49.403077 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:07:49.409411 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:07:49.409411 ignition[1020]: INFO : umount: umount passed
Oct 9 01:07:49.409411 ignition[1020]: INFO : Ignition finished successfully
Oct 9 01:07:49.408942 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:07:49.410907 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:07:49.411080 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:07:49.413448 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:07:49.413512 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:07:49.414227 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:07:49.414281 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:07:49.414738 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 01:07:49.414780 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 01:07:49.419662 systemd[1]: Stopped target network.target - Network.
Oct 9 01:07:49.421337 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:07:49.421395 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:07:49.421864 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:07:49.422752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:07:49.423527 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:07:49.424157 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:07:49.424543 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:07:49.424958 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:07:49.424997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:07:49.426519 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:07:49.426556 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:07:49.433618 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:07:49.433721 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:07:49.434647 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:07:49.434717 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:07:49.435978 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:07:49.436987 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:07:49.438105 systemd-networkd[784]: eth0: DHCPv6 lease lost
Oct 9 01:07:49.438642 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:07:49.438809 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:07:49.440000 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:07:49.440108 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:07:49.443089 systemd-networkd[784]: eth1: DHCPv6 lease lost
Oct 9 01:07:49.444865 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:07:49.445250 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:07:49.448310 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:07:49.448471 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:07:49.450456 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:07:49.450536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:07:49.457222 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:07:49.457785 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:07:49.457854 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:07:49.458940 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:07:49.459000 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:07:49.459918 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:07:49.459963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:07:49.461656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:07:49.461701 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:07:49.462267 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:07:49.473237 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:07:49.473352 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:07:49.478796 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:07:49.478970 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:07:49.480104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:07:49.480154 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:07:49.480954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:07:49.480991 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:07:49.481954 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:07:49.482002 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:07:49.483697 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:07:49.483742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:07:49.484783 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:07:49.484826 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:07:49.491202 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:07:49.492340 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:07:49.492408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:07:49.492921 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 01:07:49.492972 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:07:49.493471 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:07:49.493515 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:07:49.493982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:07:49.496319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:07:49.498424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:07:49.498536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:07:49.500119 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:07:49.504089 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:07:49.512350 systemd[1]: Switching root.
Oct 9 01:07:49.541322 systemd-journald[187]: Journal stopped
Oct 9 01:07:50.535347 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:07:50.535446 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:07:50.535464 kernel: SELinux: policy capability open_perms=1
Oct 9 01:07:50.535487 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:07:50.535501 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:07:50.535515 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:07:50.535537 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:07:50.535551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:07:50.535572 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:07:50.535592 kernel: audit: type=1403 audit(1728436069.661:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:07:50.535607 systemd[1]: Successfully loaded SELinux policy in 41.409ms.
Oct 9 01:07:50.535634 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.816ms.
Oct 9 01:07:50.535650 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:07:50.535665 systemd[1]: Detected virtualization kvm.
Oct 9 01:07:50.535680 systemd[1]: Detected architecture x86-64.
Oct 9 01:07:50.535695 systemd[1]: Detected first boot.
Oct 9 01:07:50.535709 systemd[1]: Hostname set to .
Oct 9 01:07:50.535723 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:07:50.535735 zram_generator::config[1062]: No configuration found.
Oct 9 01:07:50.535753 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:07:50.535769 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 01:07:50.535784 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 01:07:50.535799 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:07:50.535815 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:07:50.535826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:07:50.535835 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:07:50.535845 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:07:50.535855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:07:50.535869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:07:50.535879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:07:50.535888 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:07:50.535898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:07:50.535908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:07:50.535919 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:07:50.535930 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:07:50.535940 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:07:50.535952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:07:50.535963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 01:07:50.535974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:07:50.535984 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 01:07:50.535995 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 01:07:50.536005 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:07:50.536015 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:07:50.538432 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:07:50.538449 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:07:50.538459 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:07:50.538482 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:07:50.538492 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:07:50.538503 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:07:50.538513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:07:50.538523 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:07:50.538533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:07:50.538547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:07:50.538558 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:07:50.538568 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:07:50.538582 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:07:50.538595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:50.538605 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:07:50.538617 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:07:50.538628 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:07:50.538638 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 01:07:50.538648 systemd[1]: Reached target machines.target - Containers.
Oct 9 01:07:50.538659 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:07:50.538669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:07:50.538679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:07:50.538689 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 01:07:50.538725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:07:50.538736 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:07:50.538746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:07:50.538756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 01:07:50.538766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:07:50.538776 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 01:07:50.538786 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 01:07:50.538797 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 01:07:50.538811 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 01:07:50.538821 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 01:07:50.538831 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:07:50.538841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:07:50.538851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 01:07:50.538861 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 01:07:50.538871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:07:50.538882 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 01:07:50.538892 systemd[1]: Stopped verity-setup.service.
Oct 9 01:07:50.538904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:50.538914 kernel: fuse: init (API version 7.39)
Oct 9 01:07:50.538925 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 01:07:50.538934 kernel: loop: module loaded
Oct 9 01:07:50.538944 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 01:07:50.538954 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 01:07:50.538964 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 01:07:50.538976 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 01:07:50.538986 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 01:07:50.538999 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 01:07:50.539009 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:07:50.539019 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 01:07:50.541165 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 01:07:50.541178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:07:50.541193 kernel: ACPI: bus type drm_connector registered
Oct 9 01:07:50.541203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:07:50.541214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:07:50.541224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:07:50.541234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:07:50.541246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:07:50.541278 systemd-journald[1145]: Collecting audit messages is disabled.
Oct 9 01:07:50.541303 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 01:07:50.541316 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 01:07:50.541326 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:07:50.541336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:07:50.541347 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:07:50.541357 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 01:07:50.541367 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 01:07:50.541377 systemd-journald[1145]: Journal started
Oct 9 01:07:50.541399 systemd-journald[1145]: Runtime Journal (/run/log/journal/1f5c2480fd4e4a50ad98ada5609f2346) is 4.8M, max 38.4M, 33.6M free.
Oct 9 01:07:50.195056 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:07:50.218018 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 9 01:07:50.218753 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 01:07:50.547043 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:07:50.559201 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 01:07:50.567125 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 01:07:50.573124 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 01:07:50.574187 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 01:07:50.574279 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:07:50.575612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 01:07:50.581382 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 01:07:50.584188 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 01:07:50.584739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:07:50.587185 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 01:07:50.591236 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 01:07:50.592086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:07:50.594179 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 01:07:50.595279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:07:50.598910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:07:50.601693 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 01:07:50.611201 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:07:50.614546 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 01:07:50.615248 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 01:07:50.617060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 01:07:50.649407 systemd-journald[1145]: Time spent on flushing to /var/log/journal/1f5c2480fd4e4a50ad98ada5609f2346 is 71.486ms for 1136 entries.
Oct 9 01:07:50.649407 systemd-journald[1145]: System Journal (/var/log/journal/1f5c2480fd4e4a50ad98ada5609f2346) is 8.0M, max 584.8M, 576.8M free.
Oct 9 01:07:50.764205 systemd-journald[1145]: Received client request to flush runtime journal.
Oct 9 01:07:50.764249 kernel: loop0: detected capacity change from 0 to 8
Oct 9 01:07:50.764274 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 01:07:50.764291 kernel: loop1: detected capacity change from 0 to 211296
Oct 9 01:07:50.666456 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 01:07:50.667241 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 01:07:50.674720 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 01:07:50.701822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:07:50.715536 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 01:07:50.717100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:07:50.736885 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Oct 9 01:07:50.736898 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Oct 9 01:07:50.750210 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:07:50.753638 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 01:07:50.763195 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 01:07:50.768559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 01:07:50.771845 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 01:07:50.772874 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 01:07:50.798583 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 01:07:50.803297 kernel: loop2: detected capacity change from 0 to 140992
Oct 9 01:07:50.809170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:07:50.834745 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Oct 9 01:07:50.835165 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Oct 9 01:07:50.845266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:07:50.863530 kernel: loop3: detected capacity change from 0 to 138192
Oct 9 01:07:50.905059 kernel: loop4: detected capacity change from 0 to 8
Oct 9 01:07:50.908644 kernel: loop5: detected capacity change from 0 to 211296
Oct 9 01:07:50.927062 kernel: loop6: detected capacity change from 0 to 140992
Oct 9 01:07:50.952102 kernel: loop7: detected capacity change from 0 to 138192
Oct 9 01:07:50.968729 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 9 01:07:50.969988 (sd-merge)[1210]: Merged extensions into '/usr'.
Oct 9 01:07:50.976471 systemd[1]: Reloading requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 01:07:50.976597 systemd[1]: Reloading...
Oct 9 01:07:51.074086 zram_generator::config[1245]: No configuration found.
Oct 9 01:07:51.182479 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 01:07:51.196462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:07:51.237171 systemd[1]: Reloading finished in 259 ms.
Oct 9 01:07:51.260366 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 01:07:51.266952 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 01:07:51.273546 systemd[1]: Starting ensure-sysext.service...
Oct 9 01:07:51.275522 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:07:51.276562 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 01:07:51.280186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:07:51.288070 systemd[1]: Reloading requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
Oct 9 01:07:51.288098 systemd[1]: Reloading...
Oct 9 01:07:51.312484 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:07:51.312793 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:07:51.313620 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:07:51.313867 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Oct 9 01:07:51.313927 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Oct 9 01:07:51.320373 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:07:51.320384 systemd-tmpfiles[1280]: Skipping /boot
Oct 9 01:07:51.331522 systemd-udevd[1282]: Using default interface naming scheme 'v255'.
Oct 9 01:07:51.338055 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:07:51.338067 systemd-tmpfiles[1280]: Skipping /boot
Oct 9 01:07:51.377099 zram_generator::config[1307]: No configuration found.
Oct 9 01:07:51.481049 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1333)
Oct 9 01:07:51.511109 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 01:07:51.543098 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1333)
Oct 9 01:07:51.555175 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Oct 9 01:07:51.562898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:07:51.573051 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1317)
Oct 9 01:07:51.588089 kernel: ACPI: button: Power Button [PWRF]
Oct 9 01:07:51.611229 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 01:07:51.611492 systemd[1]: Reloading finished in 323 ms.
Oct 9 01:07:51.626061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:07:51.626882 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:07:51.644476 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Oct 9 01:07:51.648505 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.653204 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:07:51.655257 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 01:07:51.655979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:07:51.658336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:07:51.660148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:07:51.664176 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:07:51.664752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:07:51.666567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 01:07:51.671263 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:07:51.680254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:07:51.690420 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 01:07:51.692368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.698248 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 01:07:51.712208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.712379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:07:51.712534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:07:51.712614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.719042 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Oct 9 01:07:51.720832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.721050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:07:51.721209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:07:51.721285 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.727916 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 01:07:51.728162 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 01:07:51.728332 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 01:07:51.724802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.724979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:07:51.742337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:07:51.743657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:07:51.743992 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:07:51.745839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:07:51.746282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:07:51.754830 systemd[1]: Finished ensure-sysext.service.
Oct 9 01:07:51.764354 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 01:07:51.765506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:07:51.765874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:07:51.780804 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:07:51.792580 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 01:07:51.801510 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:07:51.801713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:07:51.803461 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:07:51.803698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:07:51.806240 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:07:51.818050 kernel: EDAC MC: Ver: 3.0.0
Oct 9 01:07:51.826277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 9 01:07:51.834204 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 01:07:51.838498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 01:07:51.843114 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct 9 01:07:51.843157 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct 9 01:07:51.848162 kernel: Console: switching to colour dummy device 80x25
Oct 9 01:07:51.849086 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 01:07:51.849119 kernel: [drm] features: -context_init
Oct 9 01:07:51.849258 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 01:07:51.850042 kernel: [drm] number of scanouts: 1
Oct 9 01:07:51.850103 kernel: [drm] number of cap sets: 0
Oct 9 01:07:51.854066 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 9 01:07:51.866049 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 01:07:51.866130 kernel: Console: switching to colour frame buffer device 160x50
Oct 9 01:07:51.872763 augenrules[1428]: No rules
Oct 9 01:07:51.877433 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 01:07:51.892879 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 01:07:51.894083 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:07:51.894276 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:07:51.894820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 01:07:51.913340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:07:51.913425 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:07:51.915109 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 01:07:51.920943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 01:07:51.923638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:07:51.923838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:07:51.935315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:07:51.959396 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 01:07:51.968435 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 01:07:51.991163 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:07:51.997642 systemd-networkd[1389]: lo: Link UP
Oct 9 01:07:51.999591 systemd-networkd[1389]: lo: Gained carrier
Oct 9 01:07:52.007431 systemd-networkd[1389]: Enumeration completed
Oct 9 01:07:52.007538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:07:52.011143 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 01:07:52.011253 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 01:07:52.013315 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:07:52.014368 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:07:52.015147 systemd-networkd[1389]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:07:52.015151 systemd-networkd[1389]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:07:52.015694 systemd-networkd[1389]: eth0: Link UP
Oct 9 01:07:52.015699 systemd-networkd[1389]: eth0: Gained carrier
Oct 9 01:07:52.015709 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:07:52.019188 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 01:07:52.020355 systemd-networkd[1389]: eth1: Link UP
Oct 9 01:07:52.020360 systemd-networkd[1389]: eth1: Gained carrier
Oct 9 01:07:52.020380 systemd-networkd[1389]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:07:52.034425 systemd-resolved[1390]: Positive Trust Anchors:
Oct 9 01:07:52.034443 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:07:52.034469 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:07:52.039000 systemd-resolved[1390]: Using system hostname 'ci-4116-0-0-2-50096a0261'.
Oct 9 01:07:52.041127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:07:52.041268 systemd[1]: Reached target network.target - Network.
Oct 9 01:07:52.041324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:07:52.043726 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 01:07:52.045206 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:07:52.052162 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 01:07:52.052540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:07:52.053424 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:07:52.055395 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 01:07:52.058828 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:07:52.058367 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 01:07:52.058996 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 01:07:52.059211 systemd-networkd[1389]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:07:52.060569 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 01:07:52.061003 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 01:07:52.061377 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Oct 9 01:07:52.061444 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 01:07:52.061479 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:07:52.063701 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:07:52.068412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 01:07:52.075724 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 01:07:52.083189 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 01:07:52.086011 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 01:07:52.087790 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 01:07:52.089231 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:07:52.090036 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:07:52.090835 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:07:52.090933 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:07:52.098165 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 01:07:52.100903 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 01:07:52.119610 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 01:07:52.123255 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 01:07:52.124244 systemd-networkd[1389]: eth0: DHCPv4 address 188.245.175.223/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 9 01:07:52.127098 coreos-metadata[1463]: Oct 09 01:07:52.126 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 9 01:07:52.126418 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 01:07:52.127750 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 01:07:52.129186 coreos-metadata[1463]: Oct 09 01:07:52.128 INFO Fetch successful
Oct 9 01:07:52.129420 coreos-metadata[1463]: Oct 09 01:07:52.129 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 9 01:07:52.129683 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Oct 9 01:07:52.131832 coreos-metadata[1463]: Oct 09 01:07:52.130 INFO Fetch successful
Oct 9 01:07:52.136163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 01:07:52.145415 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 01:07:52.148072 jq[1467]: false
Oct 9 01:07:52.148179 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Oct 9 01:07:52.159473 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:07:52.163090 extend-filesystems[1468]: Found loop4
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found loop5
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found loop6
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found loop7
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda1
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda2
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda3
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found usr
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda4
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda6
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda7
Oct 9 01:07:52.165154 extend-filesystems[1468]: Found sda9
Oct 9 01:07:52.165154 extend-filesystems[1468]: Checking size of /dev/sda9
Oct 9 01:07:52.170118 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:07:52.188894 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:07:52.195444 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:07:52.195989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:07:52.202908 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:07:52.208467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:07:52.212474 extend-filesystems[1468]: Resized partition /dev/sda9
Oct 9 01:07:52.229423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:07:52.230103 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:07:52.230422 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:07:52.230606 dbus-daemon[1464]: [system] SELinux support is enabled
Oct 9 01:07:52.231017 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:07:52.232693 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:07:52.236721 update_engine[1486]: I20241009 01:07:52.236373 1486 main.cc:92] Flatcar Update Engine starting
Oct 9 01:07:52.237938 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:07:52.251412 update_engine[1486]: I20241009 01:07:52.248533 1486 update_check_scheduler.cc:74] Next update check in 5m22s
Oct 9 01:07:52.257054 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 9 01:07:52.262659 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:07:52.265080 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:07:52.282496 jq[1487]: true
Oct 9 01:07:52.292434 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:07:52.306446 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:07:52.310809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:07:52.311888 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:07:52.313150 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:07:52.313169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:07:52.326600 tar[1495]: linux-amd64/helm
Oct 9 01:07:52.324180 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:07:52.338563 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1312)
Oct 9 01:07:52.339605 systemd-logind[1482]: New seat seat0.
Oct 9 01:07:52.342663 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 9 01:07:52.342687 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 01:07:52.342886 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:07:52.371279 jq[1508]: true
Oct 9 01:07:52.417450 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 01:07:52.420942 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 01:07:52.491286 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:07:52.542304 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:07:52.545613 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:07:52.559722 systemd[1]: Starting sshkeys.service...
Oct 9 01:07:52.570968 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 9 01:07:52.585001 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 01:07:52.594432 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:07:52.594540 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 9 01:07:52.594540 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 9 01:07:52.594540 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 9 01:07:52.610763 extend-filesystems[1468]: Resized filesystem in /dev/sda9
Oct 9 01:07:52.610763 extend-filesystems[1468]: Found sr0
Oct 9 01:07:52.594917 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 01:07:52.604481 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:07:52.604672 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:07:52.620391 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 01:07:52.626658 containerd[1498]: time="2024-10-09T01:07:52.626565767Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:07:52.627390 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 01:07:52.646795 coreos-metadata[1545]: Oct 09 01:07:52.646 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 9 01:07:52.648210 coreos-metadata[1545]: Oct 09 01:07:52.648 INFO Fetch successful
Oct 9 01:07:52.654389 unknown[1545]: wrote ssh authorized keys file for user: core
Oct 9 01:07:52.656710 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 01:07:52.657111 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 01:07:52.665555 containerd[1498]: time="2024-10-09T01:07:52.665515097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669267815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669292531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669312478Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669473710Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669487817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669555013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669566364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669722086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669734500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669746643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:07:52.669873 containerd[1498]: time="2024-10-09T01:07:52.669754237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.670218 containerd[1498]: time="2024-10-09T01:07:52.669839848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.670218 containerd[1498]: time="2024-10-09T01:07:52.670084667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:07:52.670218 containerd[1498]: time="2024-10-09T01:07:52.670182951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:07:52.670218 containerd[1498]: time="2024-10-09T01:07:52.670194021Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:07:52.670331 containerd[1498]: time="2024-10-09T01:07:52.670284181Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:07:52.670365 containerd[1498]: time="2024-10-09T01:07:52.670337230Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:07:52.674504 containerd[1498]: time="2024-10-09T01:07:52.674472817Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:07:52.674540 containerd[1498]: time="2024-10-09T01:07:52.674520376Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:07:52.674540 containerd[1498]: time="2024-10-09T01:07:52.674535224Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:07:52.674598 containerd[1498]: time="2024-10-09T01:07:52.674548609Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:07:52.674598 containerd[1498]: time="2024-10-09T01:07:52.674560992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:07:52.674699 containerd[1498]: time="2024-10-09T01:07:52.674671249Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:07:52.674875 containerd[1498]: time="2024-10-09T01:07:52.674851777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:07:52.675062 containerd[1498]: time="2024-10-09T01:07:52.674955111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:07:52.675062 containerd[1498]: time="2024-10-09T01:07:52.674972293Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:07:52.675062 containerd[1498]: time="2024-10-09T01:07:52.674983805Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:07:52.675062 containerd[1498]: time="2024-10-09T01:07:52.674995928Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675062 containerd[1498]: time="2024-10-09T01:07:52.675006859Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675036744Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675084183Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675096627Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675107266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675116865Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675125621Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675141311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675157 containerd[1498]: time="2024-10-09T01:07:52.675158452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675169513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675180073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675190102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675200672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675209699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675219297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675229476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675240897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675254633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675264241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675274 containerd[1498]: time="2024-10-09T01:07:52.675274280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675286633Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675303825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675313834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675322090Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675370079Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675384817Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675393774Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675403311Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675410565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675420263Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675428770Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 01:07:52.675726 containerd[1498]: time="2024-10-09T01:07:52.675436814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 01:07:52.675921 containerd[1498]: time="2024-10-09T01:07:52.675720878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 01:07:52.675921 containerd[1498]: time="2024-10-09T01:07:52.675762165Z" level=info msg="Connect containerd service"
Oct 9 01:07:52.675921 containerd[1498]: time="2024-10-09T01:07:52.675789296Z" level=info msg="using legacy CRI server"
Oct 9 01:07:52.675921 containerd[1498]: time="2024-10-09T01:07:52.675795107Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 01:07:52.675921 containerd[1498]: time="2024-10-09T01:07:52.675858105Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676505890Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676624342Z" level=info msg="Start subscribing containerd event"
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676653837Z" level=info msg="Start recovering state"
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676700365Z" level=info msg="Start event monitor"
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676714160Z" level=info msg="Start snapshots syncer"
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676721734Z" level=info msg="Start cni network conf syncer for default"
Oct 9 01:07:52.677133 containerd[1498]: time="2024-10-09T01:07:52.676728007Z" level=info msg="Start streaming server"
Oct 9 01:07:52.678601 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 01:07:52.681206 containerd[1498]: time="2024-10-09T01:07:52.679680614Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 01:07:52.681206 containerd[1498]: time="2024-10-09T01:07:52.679749823Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 01:07:52.681206 containerd[1498]: time="2024-10-09T01:07:52.679862646Z" level=info msg="containerd successfully booted in 0.056519s"
Oct 9 01:07:52.681210 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 01:07:52.691787 update-ssh-keys[1562]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:07:52.693388 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 01:07:52.697388 systemd[1]: Finished sshkeys.service.
Oct 9 01:07:52.707172 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 01:07:52.717593 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 01:07:52.722235 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 01:07:52.722845 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 01:07:52.986318 tar[1495]: linux-amd64/LICENSE
Oct 9 01:07:52.986397 tar[1495]: linux-amd64/README.md
Oct 9 01:07:52.996973 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 01:07:53.417204 systemd-networkd[1389]: eth0: Gained IPv6LL
Oct 9 01:07:53.418306 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Oct 9 01:07:53.420515 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 01:07:53.422605 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 01:07:53.430350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:07:53.435186 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 01:07:53.462082 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 01:07:53.545453 systemd-networkd[1389]: eth1: Gained IPv6LL
Oct 9 01:07:53.545935 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Oct 9 01:07:54.130700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:07:54.131955 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 01:07:54.137692 systemd[1]: Startup finished in 1.219s (kernel) + 4.962s (initrd) + 4.516s (userspace) = 10.699s.
Oct 9 01:07:54.138346 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:07:54.709874 kubelet[1592]: E1009 01:07:54.709772 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:07:54.714402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:07:54.714588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:04.964857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:08:04.970315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:08:05.092224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:08:05.104385 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:08:05.145163 kubelet[1611]: E1009 01:08:05.145104 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:08:05.153134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:08:05.153319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:15.403650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 01:08:15.409171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:08:15.518857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:08:15.522836 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:08:15.562118 kubelet[1628]: E1009 01:08:15.562066 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:08:15.564878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:08:15.565102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:23.837853 systemd-timesyncd[1410]: Contacted time server 167.71.55.144:123 (2.flatcar.pool.ntp.org).
Oct 9 01:08:23.837914 systemd-timesyncd[1410]: Initial clock synchronization to Wed 2024-10-09 01:08:23.489892 UTC.
Oct 9 01:08:25.711566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 9 01:08:25.718221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:08:25.837402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:08:25.840997 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:08:25.882474 kubelet[1644]: E1009 01:08:25.882364 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:08:25.886125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:08:25.886347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:35.961466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 9 01:08:35.967220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:08:36.085866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:08:36.093305 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:08:36.132553 kubelet[1661]: E1009 01:08:36.132488 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:08:36.136432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:08:36.136631 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:37.460131 update_engine[1486]: I20241009 01:08:37.460014 1486 update_attempter.cc:509] Updating boot flags...
Oct 9 01:08:37.499074 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1679)
Oct 9 01:08:37.551607 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1678)
Oct 9 01:08:37.596133 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1678)
Oct 9 01:08:46.211371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 9 01:08:46.223198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:08:46.349318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:08:46.349329 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:08:46.388667 kubelet[1699]: E1009 01:08:46.388591 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:08:46.393525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:08:46.393717 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:49.296631 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 01:08:49.298126 systemd[1]: Started sshd@0-188.245.175.223:22-139.178.68.195:51756.service - OpenSSH per-connection server daemon (139.178.68.195:51756).
Oct 9 01:08:50.300433 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 51756 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:08:50.302981 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:08:50.311880 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 01:08:50.318251 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 01:08:50.321184 systemd-logind[1482]: New session 1 of user core.
Oct 9 01:08:50.334481 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 01:08:50.342266 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 01:08:50.346709 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 01:08:50.443895 systemd[1713]: Queued start job for default target default.target.
Oct 9 01:08:50.450177 systemd[1713]: Created slice app.slice - User Application Slice.
Oct 9 01:08:50.450203 systemd[1713]: Reached target paths.target - Paths.
Oct 9 01:08:50.450215 systemd[1713]: Reached target timers.target - Timers.
Oct 9 01:08:50.451575 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 01:08:50.464195 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 01:08:50.464307 systemd[1713]: Reached target sockets.target - Sockets.
Oct 9 01:08:50.464321 systemd[1713]: Reached target basic.target - Basic System.
Oct 9 01:08:50.464357 systemd[1713]: Reached target default.target - Main User Target.
Oct 9 01:08:50.464388 systemd[1713]: Startup finished in 110ms.
Oct 9 01:08:50.464477 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 01:08:50.471176 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 01:08:51.172082 systemd[1]: Started sshd@1-188.245.175.223:22-139.178.68.195:48728.service - OpenSSH per-connection server daemon (139.178.68.195:48728).
Oct 9 01:08:52.161575 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 48728 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:08:52.163403 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:08:52.168042 systemd-logind[1482]: New session 2 of user core.
Oct 9 01:08:52.177201 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 01:08:52.851704 sshd[1724]: pam_unix(sshd:session): session closed for user core
Oct 9 01:08:52.854642 systemd[1]: sshd@1-188.245.175.223:22-139.178.68.195:48728.service: Deactivated successfully.
Oct 9 01:08:52.856535 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 01:08:52.858220 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit.
Oct 9 01:08:52.859478 systemd-logind[1482]: Removed session 2.
Oct 9 01:08:53.027378 systemd[1]: Started sshd@2-188.245.175.223:22-139.178.68.195:48738.service - OpenSSH per-connection server daemon (139.178.68.195:48738).
Oct 9 01:08:54.011095 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 48738 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:08:54.012685 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:08:54.017172 systemd-logind[1482]: New session 3 of user core.
Oct 9 01:08:54.030179 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 01:08:54.695956 sshd[1731]: pam_unix(sshd:session): session closed for user core
Oct 9 01:08:54.700504 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit.
Oct 9 01:08:54.701528 systemd[1]: sshd@2-188.245.175.223:22-139.178.68.195:48738.service: Deactivated successfully.
Oct 9 01:08:54.703772 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 01:08:54.704904 systemd-logind[1482]: Removed session 3.
Oct 9 01:08:54.866611 systemd[1]: Started sshd@3-188.245.175.223:22-139.178.68.195:48752.service - OpenSSH per-connection server daemon (139.178.68.195:48752).
Oct 9 01:08:55.867123 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 48752 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:08:55.868842 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:08:55.873586 systemd-logind[1482]: New session 4 of user core.
Oct 9 01:08:55.879172 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 01:08:56.397310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Oct 9 01:08:56.410643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:08:56.532783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:08:56.536500 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:08:56.560675 sshd[1738]: pam_unix(sshd:session): session closed for user core
Oct 9 01:08:56.563648 systemd[1]: sshd@3-188.245.175.223:22-139.178.68.195:48752.service: Deactivated successfully.
Oct 9 01:08:56.565326 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 01:08:56.567265 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit.
Oct 9 01:08:56.568626 systemd-logind[1482]: Removed session 4.
Oct 9 01:08:56.580638 kubelet[1750]: E1009 01:08:56.580549 1750 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:08:56.583764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:08:56.583945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:08:56.731658 systemd[1]: Started sshd@4-188.245.175.223:22-139.178.68.195:48764.service - OpenSSH per-connection server daemon (139.178.68.195:48764).
Oct 9 01:08:57.719415 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 48764 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:08:57.721125 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:08:57.726233 systemd-logind[1482]: New session 5 of user core.
Oct 9 01:08:57.740179 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 01:08:58.254962 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 01:08:58.255507 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:08:58.268570 sudo[1765]: pam_unix(sudo:session): session closed for user root
Oct 9 01:08:58.429965 sshd[1762]: pam_unix(sshd:session): session closed for user core
Oct 9 01:08:58.432667 systemd[1]: sshd@4-188.245.175.223:22-139.178.68.195:48764.service: Deactivated successfully.
Oct 9 01:08:58.434369 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 01:08:58.435709 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit.
Oct 9 01:08:58.436787 systemd-logind[1482]: Removed session 5.
Oct 9 01:08:58.601139 systemd[1]: Started sshd@5-188.245.175.223:22-139.178.68.195:48778.service - OpenSSH per-connection server daemon (139.178.68.195:48778).
Oct 9 01:08:59.593290 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 48778 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:08:59.594973 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:08:59.599960 systemd-logind[1482]: New session 6 of user core.
Oct 9 01:08:59.615226 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 01:09:00.124594 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 01:09:00.124970 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:09:00.128757 sudo[1774]: pam_unix(sudo:session): session closed for user root
Oct 9 01:09:00.134139 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 01:09:00.134432 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:09:00.146291 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:09:00.172336 augenrules[1796]: No rules
Oct 9 01:09:00.173153 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:09:00.173404 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:09:00.174926 sudo[1773]: pam_unix(sudo:session): session closed for user root
Oct 9 01:09:00.336874 sshd[1770]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:00.340807 systemd[1]: sshd@5-188.245.175.223:22-139.178.68.195:48778.service: Deactivated successfully.
Oct 9 01:09:00.342498 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 01:09:00.343186 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit.
Oct 9 01:09:00.344185 systemd-logind[1482]: Removed session 6.
Oct 9 01:09:00.507435 systemd[1]: Started sshd@6-188.245.175.223:22-139.178.68.195:48794.service - OpenSSH per-connection server daemon (139.178.68.195:48794).
Oct 9 01:09:01.502089 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 48794 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:01.503480 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:01.507071 systemd-logind[1482]: New session 7 of user core.
Oct 9 01:09:01.517155 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 01:09:02.032387 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 01:09:02.032742 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:09:02.289247 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 01:09:02.290459 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 01:09:02.511589 dockerd[1824]: time="2024-10-09T01:09:02.511540148Z" level=info msg="Starting up"
Oct 9 01:09:02.601301 dockerd[1824]: time="2024-10-09T01:09:02.601158553Z" level=info msg="Loading containers: start."
Oct 9 01:09:02.760058 kernel: Initializing XFRM netlink socket
Oct 9 01:09:02.839441 systemd-networkd[1389]: docker0: Link UP
Oct 9 01:09:02.864806 dockerd[1824]: time="2024-10-09T01:09:02.864670871Z" level=info msg="Loading containers: done."
Oct 9 01:09:02.881431 dockerd[1824]: time="2024-10-09T01:09:02.881332330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 01:09:02.882012 dockerd[1824]: time="2024-10-09T01:09:02.881600121Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 01:09:02.882012 dockerd[1824]: time="2024-10-09T01:09:02.881713825Z" level=info msg="Daemon has completed initialization"
Oct 9 01:09:02.911635 dockerd[1824]: time="2024-10-09T01:09:02.910630229Z" level=info msg="API listen on /run/docker.sock"
Oct 9 01:09:02.911357 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 01:09:03.825867 containerd[1498]: time="2024-10-09T01:09:03.825742196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 9 01:09:04.398937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444049194.mount: Deactivated successfully.
Oct 9 01:09:05.400909 containerd[1498]: time="2024-10-09T01:09:05.400851810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:05.401852 containerd[1498]: time="2024-10-09T01:09:05.401791748Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213933"
Oct 9 01:09:05.402495 containerd[1498]: time="2024-10-09T01:09:05.402454092Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:05.404676 containerd[1498]: time="2024-10-09T01:09:05.404618837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:05.405743 containerd[1498]: time="2024-10-09T01:09:05.405575058Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 1.579794524s"
Oct 9 01:09:05.405743 containerd[1498]: time="2024-10-09T01:09:05.405604788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 9 01:09:05.425956 containerd[1498]: time="2024-10-09T01:09:05.425909083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 9 01:09:06.711206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Oct 9 01:09:06.721539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:09:06.819995 containerd[1498]: time="2024-10-09T01:09:06.819817849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:06.831971 containerd[1498]: time="2024-10-09T01:09:06.831929959Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208693"
Oct 9 01:09:06.833948 containerd[1498]: time="2024-10-09T01:09:06.833723467Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:06.843302 containerd[1498]: time="2024-10-09T01:09:06.843269788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:06.844752 containerd[1498]: time="2024-10-09T01:09:06.844612593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.418667468s"
Oct 9 01:09:06.844934 containerd[1498]: time="2024-10-09T01:09:06.844865284Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 9 01:09:06.871277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:06.872200 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:09:06.874512 containerd[1498]: time="2024-10-09T01:09:06.874294969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 9 01:09:06.917635 kubelet[2090]: E1009 01:09:06.917599 2090 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:09:06.921548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:09:06.921731 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:09:07.958700 containerd[1498]: time="2024-10-09T01:09:07.958642246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:07.959719 containerd[1498]: time="2024-10-09T01:09:07.959517871Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320476"
Oct 9 01:09:07.960403 containerd[1498]: time="2024-10-09T01:09:07.960362742Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:07.962500 containerd[1498]: time="2024-10-09T01:09:07.962456681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:07.963632 containerd[1498]: time="2024-10-09T01:09:07.963518580Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.08918887s"
Oct 9 01:09:07.963632 containerd[1498]: time="2024-10-09T01:09:07.963554592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 9 01:09:07.984540 containerd[1498]: time="2024-10-09T01:09:07.984504548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 9 01:09:09.153409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054330576.mount: Deactivated successfully.
Oct 9 01:09:09.463945 containerd[1498]: time="2024-10-09T01:09:09.463893243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:09.464879 containerd[1498]: time="2024-10-09T01:09:09.464746871Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601776"
Oct 9 01:09:09.465521 containerd[1498]: time="2024-10-09T01:09:09.465466392Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:09.467123 containerd[1498]: time="2024-10-09T01:09:09.467082135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:09.467985 containerd[1498]: time="2024-10-09T01:09:09.467587797Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.483047958s"
Oct 9 01:09:09.467985 containerd[1498]: time="2024-10-09T01:09:09.467625231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 9 01:09:09.489308 containerd[1498]: time="2024-10-09T01:09:09.489253062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 01:09:09.998494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657198063.mount: Deactivated successfully.
Oct 9 01:09:10.780554 containerd[1498]: time="2024-10-09T01:09:10.780493330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:10.781472 containerd[1498]: time="2024-10-09T01:09:10.781430550Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Oct 9 01:09:10.782134 containerd[1498]: time="2024-10-09T01:09:10.782073383Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:10.784738 containerd[1498]: time="2024-10-09T01:09:10.784681106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:10.785766 containerd[1498]: time="2024-10-09T01:09:10.785655008Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.296357709s"
Oct 9 01:09:10.785766 containerd[1498]: time="2024-10-09T01:09:10.785681661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 01:09:10.806949 containerd[1498]: time="2024-10-09T01:09:10.806840996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 01:09:11.314255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965704444.mount: Deactivated successfully.
Oct 9 01:09:11.318900 containerd[1498]: time="2024-10-09T01:09:11.318853164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:11.319549 containerd[1498]: time="2024-10-09T01:09:11.319510442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Oct 9 01:09:11.320284 containerd[1498]: time="2024-10-09T01:09:11.320245253Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:11.322323 containerd[1498]: time="2024-10-09T01:09:11.322288368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:11.322938 containerd[1498]: time="2024-10-09T01:09:11.322821489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 515.876626ms"
Oct 9 01:09:11.322938 containerd[1498]: time="2024-10-09T01:09:11.322844955Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 01:09:11.342665 containerd[1498]: time="2024-10-09T01:09:11.342617860Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 01:09:11.824613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808615760.mount: Deactivated successfully.
Oct 9 01:09:14.384794 containerd[1498]: time="2024-10-09T01:09:14.384591630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:14.385990 containerd[1498]: time="2024-10-09T01:09:14.385949811Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705"
Oct 9 01:09:14.386715 containerd[1498]: time="2024-10-09T01:09:14.386686426Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:14.389212 containerd[1498]: time="2024-10-09T01:09:14.389177134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:14.390790 containerd[1498]: time="2024-10-09T01:09:14.390655071Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.047997272s"
Oct 9 01:09:14.390790 containerd[1498]: time="2024-10-09T01:09:14.390689138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 9 01:09:16.383616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:16.395301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:09:16.415238 systemd[1]: Reloading requested from client PID 2294 ('systemctl') (unit session-7.scope)...
Oct 9 01:09:16.415257 systemd[1]: Reloading...
Oct 9 01:09:16.542052 zram_generator::config[2337]: No configuration found.
Oct 9 01:09:16.636225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:09:16.702164 systemd[1]: Reloading finished in 286 ms.
Oct 9 01:09:16.747191 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 01:09:16.747491 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 01:09:16.747928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:16.754426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:09:16.880833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:16.886189 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 01:09:16.939247 kubelet[2388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:09:16.939913 kubelet[2388]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 01:09:16.939975 kubelet[2388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:09:16.940134 kubelet[2388]: I1009 01:09:16.940096 2388 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 01:09:17.179661 kubelet[2388]: I1009 01:09:17.179616 2388 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 01:09:17.179661 kubelet[2388]: I1009 01:09:17.179643 2388 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 01:09:17.179860 kubelet[2388]: I1009 01:09:17.179834 2388 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 01:09:17.200993 kubelet[2388]: I1009 01:09:17.200844 2388 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 01:09:17.206309 kubelet[2388]: E1009 01:09:17.205628 2388 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.175.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.217263 kubelet[2388]: I1009 01:09:17.217200 2388 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 01:09:17.219115 kubelet[2388]: I1009 01:09:17.219079 2388 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:09:17.220237 kubelet[2388]: I1009 01:09:17.220188 2388 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:09:17.220237 kubelet[2388]: I1009 01:09:17.220223 2388 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:09:17.220237 kubelet[2388]: I1009 01:09:17.220235 2388 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:09:17.220442 kubelet[2388]: I1009 01:09:17.220362 2388 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:09:17.220483 kubelet[2388]: I1009 01:09:17.220470 2388 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 01:09:17.220520 kubelet[2388]: I1009 01:09:17.220487 2388 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:09:17.220520 kubelet[2388]: I1009 01:09:17.220517 2388 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:09:17.220594 kubelet[2388]: I1009 01:09:17.220533 2388 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:09:17.222326 kubelet[2388]: W1009 01:09:17.222097 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.175.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.222326 kubelet[2388]: E1009 01:09:17.222148 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.175.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.222326 kubelet[2388]: W1009 01:09:17.222209 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.175.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-2-50096a0261&limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.222326 kubelet[2388]: E1009 01:09:17.222228 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.175.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-2-50096a0261&limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.222733 kubelet[2388]: I1009 01:09:17.222708 2388 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:09:17.226822 kubelet[2388]: I1009 01:09:17.226782 2388 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:09:17.226901 kubelet[2388]: W1009 01:09:17.226867 2388 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 01:09:17.227897 kubelet[2388]: I1009 01:09:17.227483 2388 server.go:1256] "Started kubelet"
Oct 9 01:09:17.228951 kubelet[2388]: I1009 01:09:17.228602 2388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:09:17.236092 kubelet[2388]: E1009 01:09:17.235962 2388 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.175.223:6443/api/v1/namespaces/default/events\": dial tcp 188.245.175.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116-0-0-2-50096a0261.17fca387007becf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116-0-0-2-50096a0261,UID:ci-4116-0-0-2-50096a0261,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-2-50096a0261,},FirstTimestamp:2024-10-09 01:09:17.227461874 +0000 UTC m=+0.337015353,LastTimestamp:2024-10-09 01:09:17.227461874 +0000 UTC m=+0.337015353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-2-50096a0261,}"
Oct 9 01:09:17.238225 kubelet[2388]: I1009 01:09:17.238084 2388 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:09:17.240281 kubelet[2388]: I1009 01:09:17.238730 2388 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:09:17.240281 kubelet[2388]: I1009 01:09:17.238902 2388 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 01:09:17.240281 kubelet[2388]: I1009 01:09:17.239893 2388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:09:17.240281 kubelet[2388]: I1009 01:09:17.240077 2388 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:09:17.242213 kubelet[2388]: E1009 01:09:17.242185 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-2-50096a0261?timeout=10s\": dial tcp 188.245.175.223:6443: connect: connection refused" interval="200ms"
Oct 9 01:09:17.242365 kubelet[2388]: I1009 01:09:17.242349 2388 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 01:09:17.242867 kubelet[2388]: W1009 01:09:17.242710 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://188.245.175.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.242867 kubelet[2388]: E1009 01:09:17.242772 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.175.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.242935 kubelet[2388]: I1009 01:09:17.242922 2388 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 01:09:17.244454 kubelet[2388]: I1009 01:09:17.244429 2388 factory.go:221] Registration of the systemd container factory successfully
Oct 9 01:09:17.244504 kubelet[2388]: I1009 01:09:17.244494 2388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 01:09:17.246253 kubelet[2388]: I1009 01:09:17.246230 2388 factory.go:221] Registration of the containerd container factory successfully
Oct 9 01:09:17.251841 kubelet[2388]: I1009 01:09:17.251820 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 01:09:17.252930 kubelet[2388]: I1009 01:09:17.252917 2388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 01:09:17.252995 kubelet[2388]: I1009 01:09:17.252986 2388 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 01:09:17.253124 kubelet[2388]: I1009 01:09:17.253087 2388 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 01:09:17.253254 kubelet[2388]: E1009 01:09:17.253243 2388 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 01:09:17.261221 kubelet[2388]: W1009 01:09:17.261190 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.175.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.261306 kubelet[2388]: E1009 01:09:17.261296 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.175.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:17.277161 kubelet[2388]: E1009 01:09:17.277133 2388 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 01:09:17.282798 kubelet[2388]: I1009 01:09:17.282772 2388 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 01:09:17.282866 kubelet[2388]: I1009 01:09:17.282818 2388 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 01:09:17.282866 kubelet[2388]: I1009 01:09:17.282833 2388 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:09:17.284705 kubelet[2388]: I1009 01:09:17.284668 2388 policy_none.go:49] "None policy: Start"
Oct 9 01:09:17.285307 kubelet[2388]: I1009 01:09:17.285291 2388 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 01:09:17.285467 kubelet[2388]: I1009 01:09:17.285398 2388 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 01:09:17.292532 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 9 01:09:17.308614 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 9 01:09:17.316583 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 9 01:09:17.332922 kubelet[2388]: I1009 01:09:17.332888 2388 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 01:09:17.333391 kubelet[2388]: I1009 01:09:17.333175 2388 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 01:09:17.334655 kubelet[2388]: E1009 01:09:17.334459 2388 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:17.341269 kubelet[2388]: I1009 01:09:17.341233 2388 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.341639 kubelet[2388]: E1009 01:09:17.341597 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.223:6443/api/v1/nodes\": dial tcp 188.245.175.223:6443: connect: connection refused" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.354154 kubelet[2388]: I1009 01:09:17.354113 2388 topology_manager.go:215] "Topology Admit Handler" podUID="8593bef8f9d8e422ddeff978afb61014" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.356779 kubelet[2388]: I1009 01:09:17.356754 2388 topology_manager.go:215] "Topology Admit Handler" podUID="aad7581e5ea41e87e1279b6e1a5e679e" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.358157 kubelet[2388]: I1009 01:09:17.357958 2388 topology_manager.go:215] "Topology Admit Handler" podUID="d79132f28a65594a69a939efae1f50c7" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.364121 systemd[1]: Created slice kubepods-burstable-pod8593bef8f9d8e422ddeff978afb61014.slice - libcontainer container kubepods-burstable-pod8593bef8f9d8e422ddeff978afb61014.slice.
Oct 9 01:09:17.381084 systemd[1]: Created slice kubepods-burstable-podaad7581e5ea41e87e1279b6e1a5e679e.slice - libcontainer container kubepods-burstable-podaad7581e5ea41e87e1279b6e1a5e679e.slice.
Oct 9 01:09:17.385386 systemd[1]: Created slice kubepods-burstable-podd79132f28a65594a69a939efae1f50c7.slice - libcontainer container kubepods-burstable-podd79132f28a65594a69a939efae1f50c7.slice.
Oct 9 01:09:17.443670 kubelet[2388]: E1009 01:09:17.443624 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-2-50096a0261?timeout=10s\": dial tcp 188.245.175.223:6443: connect: connection refused" interval="400ms"
Oct 9 01:09:17.444747 kubelet[2388]: I1009 01:09:17.444688 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.444747 kubelet[2388]: I1009 01:09:17.444726 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d79132f28a65594a69a939efae1f50c7-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-2-50096a0261\" (UID: \"d79132f28a65594a69a939efae1f50c7\") " pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.444747 kubelet[2388]: I1009 01:09:17.444747 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d79132f28a65594a69a939efae1f50c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-2-50096a0261\" (UID: \"d79132f28a65594a69a939efae1f50c7\") " pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.444747 kubelet[2388]: I1009 01:09:17.444764 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d79132f28a65594a69a939efae1f50c7-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-2-50096a0261\" (UID: \"d79132f28a65594a69a939efae1f50c7\") " pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.445014 kubelet[2388]: I1009 01:09:17.444780 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.445014 kubelet[2388]: I1009 01:09:17.444797 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.445014 kubelet[2388]: I1009 01:09:17.444813 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.445014 kubelet[2388]: I1009 01:09:17.444830 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.445014 kubelet[2388]: I1009 01:09:17.444852 2388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aad7581e5ea41e87e1279b6e1a5e679e-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-2-50096a0261\" (UID: \"aad7581e5ea41e87e1279b6e1a5e679e\") " pod="kube-system/kube-scheduler-ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.544064 kubelet[2388]: I1009 01:09:17.543994 2388 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.544474 kubelet[2388]: E1009 01:09:17.544427 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.223:6443/api/v1/nodes\": dial tcp 188.245.175.223:6443: connect: connection refused" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.679325 containerd[1498]: time="2024-10-09T01:09:17.679257557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-2-50096a0261,Uid:8593bef8f9d8e422ddeff978afb61014,Namespace:kube-system,Attempt:0,}"
Oct 9 01:09:17.687050 containerd[1498]: time="2024-10-09T01:09:17.686997403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-2-50096a0261,Uid:aad7581e5ea41e87e1279b6e1a5e679e,Namespace:kube-system,Attempt:0,}"
Oct 9 01:09:17.688565 containerd[1498]: time="2024-10-09T01:09:17.688305839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-2-50096a0261,Uid:d79132f28a65594a69a939efae1f50c7,Namespace:kube-system,Attempt:0,}"
Oct 9 01:09:17.844804 kubelet[2388]: E1009 01:09:17.844692 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-2-50096a0261?timeout=10s\": dial tcp 188.245.175.223:6443: connect: connection refused" interval="800ms"
Oct 9 01:09:17.947334 kubelet[2388]: I1009 01:09:17.947290 2388 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:17.947790 kubelet[2388]: E1009 01:09:17.947653 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.223:6443/api/v1/nodes\": dial tcp 188.245.175.223:6443: connect: connection refused" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:18.184083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013918305.mount: Deactivated successfully.
Oct 9 01:09:18.191299 containerd[1498]: time="2024-10-09T01:09:18.191249547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:09:18.192015 containerd[1498]: time="2024-10-09T01:09:18.191983464Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:09:18.192734 containerd[1498]: time="2024-10-09T01:09:18.192685809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 01:09:18.193298 containerd[1498]: time="2024-10-09T01:09:18.193260014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 01:09:18.193897 containerd[1498]: time="2024-10-09T01:09:18.193857554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:09:18.194701 containerd[1498]: time="2024-10-09T01:09:18.194664324Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:09:18.195243 containerd[1498]: time="2024-10-09T01:09:18.195211364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076"
Oct 9 01:09:18.197481 containerd[1498]: time="2024-10-09T01:09:18.197405892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:09:18.199475 containerd[1498]: time="2024-10-09T01:09:18.199450296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.072463ms"
Oct 9 01:09:18.200890 containerd[1498]: time="2024-10-09T01:09:18.200790440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.690677ms"
Oct 9 01:09:18.204818 containerd[1498]: time="2024-10-09T01:09:18.204650358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.279381ms"
Oct 9 01:09:18.236543 kubelet[2388]: W1009 01:09:18.236470 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.175.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.236543 kubelet[2388]: E1009 01:09:18.236542 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.175.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.322263 containerd[1498]: time="2024-10-09T01:09:18.322067888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:09:18.322263 containerd[1498]: time="2024-10-09T01:09:18.322117104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:09:18.322263 containerd[1498]: time="2024-10-09T01:09:18.322129148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:09:18.322263 containerd[1498]: time="2024-10-09T01:09:18.322195838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:09:18.323280 containerd[1498]: time="2024-10-09T01:09:18.320341437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:09:18.323280 containerd[1498]: time="2024-10-09T01:09:18.323159795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:09:18.323280 containerd[1498]: time="2024-10-09T01:09:18.323171719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:09:18.323280 containerd[1498]: time="2024-10-09T01:09:18.323231226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:09:18.323747 containerd[1498]: time="2024-10-09T01:09:18.323695595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:09:18.324094 containerd[1498]: time="2024-10-09T01:09:18.323911427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:09:18.324094 containerd[1498]: time="2024-10-09T01:09:18.323939944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:09:18.327954 containerd[1498]: time="2024-10-09T01:09:18.327671500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:09:18.347537 systemd[1]: Started cri-containerd-81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2.scope - libcontainer container 81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2.
Oct 9 01:09:18.353066 systemd[1]: Started cri-containerd-00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b.scope - libcontainer container 00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b.
Oct 9 01:09:18.357682 systemd[1]: Started cri-containerd-d5b82cab60350d8aa4d313ffd91f5c627687e6ccaf6b6bb9a2c275949b789c1d.scope - libcontainer container d5b82cab60350d8aa4d313ffd91f5c627687e6ccaf6b6bb9a2c275949b789c1d.
Oct 9 01:09:18.410698 containerd[1498]: time="2024-10-09T01:09:18.410664840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-2-50096a0261,Uid:d79132f28a65594a69a939efae1f50c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5b82cab60350d8aa4d313ffd91f5c627687e6ccaf6b6bb9a2c275949b789c1d\""
Oct 9 01:09:18.416017 containerd[1498]: time="2024-10-09T01:09:18.415787139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-2-50096a0261,Uid:8593bef8f9d8e422ddeff978afb61014,Namespace:kube-system,Attempt:0,} returns sandbox id \"81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2\""
Oct 9 01:09:18.417504 containerd[1498]: time="2024-10-09T01:09:18.417010715Z" level=info msg="CreateContainer within sandbox \"d5b82cab60350d8aa4d313ffd91f5c627687e6ccaf6b6bb9a2c275949b789c1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 9 01:09:18.420469 containerd[1498]: time="2024-10-09T01:09:18.420271441Z" level=info msg="CreateContainer within sandbox \"81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 9 01:09:18.437628 containerd[1498]: time="2024-10-09T01:09:18.437302161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-2-50096a0261,Uid:aad7581e5ea41e87e1279b6e1a5e679e,Namespace:kube-system,Attempt:0,} returns sandbox id \"00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b\""
Oct 9 01:09:18.440875 containerd[1498]: time="2024-10-09T01:09:18.440839117Z" level=info msg="CreateContainer within sandbox \"00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 9 01:09:18.448782 containerd[1498]: time="2024-10-09T01:09:18.448740029Z" level=info msg="CreateContainer within sandbox \"d5b82cab60350d8aa4d313ffd91f5c627687e6ccaf6b6bb9a2c275949b789c1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"81f0bf2a1406ecff299ff1fd883af93c4ed482db0902f7650932767362f44b8f\""
Oct 9 01:09:18.449248 containerd[1498]: time="2024-10-09T01:09:18.449214918Z" level=info msg="StartContainer for \"81f0bf2a1406ecff299ff1fd883af93c4ed482db0902f7650932767362f44b8f\""
Oct 9 01:09:18.453724 containerd[1498]: time="2024-10-09T01:09:18.453629313Z" level=info msg="CreateContainer within sandbox \"81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5\""
Oct 9 01:09:18.454086 containerd[1498]: time="2024-10-09T01:09:18.454000860Z" level=info msg="StartContainer for \"7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5\""
Oct 9 01:09:18.460070 containerd[1498]: time="2024-10-09T01:09:18.459993575Z" level=info msg="CreateContainer within sandbox \"00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2\""
Oct 9 01:09:18.460412 containerd[1498]: time="2024-10-09T01:09:18.460391885Z" level=info msg="StartContainer for \"792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2\""
Oct 9 01:09:18.483372 systemd[1]: Started cri-containerd-81f0bf2a1406ecff299ff1fd883af93c4ed482db0902f7650932767362f44b8f.scope - libcontainer container 81f0bf2a1406ecff299ff1fd883af93c4ed482db0902f7650932767362f44b8f.
Oct 9 01:09:18.490290 systemd[1]: Started cri-containerd-7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5.scope - libcontainer container 7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5.
Oct 9 01:09:18.512717 kubelet[2388]: W1009 01:09:18.511509 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://188.245.175.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.512717 kubelet[2388]: E1009 01:09:18.511565 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.175.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.519913 systemd[1]: Started cri-containerd-792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2.scope - libcontainer container 792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2.
Oct 9 01:09:18.544808 containerd[1498]: time="2024-10-09T01:09:18.544762111Z" level=info msg="StartContainer for \"7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5\" returns successfully"
Oct 9 01:09:18.581696 containerd[1498]: time="2024-10-09T01:09:18.581396859Z" level=info msg="StartContainer for \"81f0bf2a1406ecff299ff1fd883af93c4ed482db0902f7650932767362f44b8f\" returns successfully"
Oct 9 01:09:18.590717 containerd[1498]: time="2024-10-09T01:09:18.590670659Z" level=info msg="StartContainer for \"792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2\" returns successfully"
Oct 9 01:09:18.613694 kubelet[2388]: W1009 01:09:18.613603 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.175.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-2-50096a0261&limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.613694 kubelet[2388]: E1009 01:09:18.613671 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.175.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-2-50096a0261&limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.630193 kubelet[2388]: W1009 01:09:18.629182 2388 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.175.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.630193 kubelet[2388]: E1009 01:09:18.629242 2388 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.175.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.223:6443: connect: connection refused
Oct 9 01:09:18.645871 kubelet[2388]: E1009 01:09:18.645810 2388 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-2-50096a0261?timeout=10s\": dial tcp 188.245.175.223:6443: connect: connection refused" interval="1.6s"
Oct 9 01:09:18.749737 kubelet[2388]: I1009 01:09:18.749696 2388 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:18.750083 kubelet[2388]: E1009 01:09:18.750053 2388 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.175.223:6443/api/v1/nodes\": dial tcp 188.245.175.223:6443: connect: connection refused" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:20.250099 kubelet[2388]: E1009 01:09:20.250059 2388 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4116-0-0-2-50096a0261\" not found" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:20.352835 kubelet[2388]: I1009 01:09:20.352617 2388 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:20.360256 kubelet[2388]: I1009 01:09:20.360210 2388 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-2-50096a0261"
Oct 9 01:09:20.368131 kubelet[2388]: E1009 01:09:20.368100 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:20.468935 kubelet[2388]: E1009 01:09:20.468856 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:20.569797 kubelet[2388]: E1009 01:09:20.569626 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:20.670273 kubelet[2388]: E1009 01:09:20.670229 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:20.771115 kubelet[2388]: E1009 01:09:20.771066 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:20.871969 kubelet[2388]: E1009 01:09:20.871858 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:20.972556 kubelet[2388]: E1009 01:09:20.972509 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:21.073368 kubelet[2388]: E1009 01:09:21.073325 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:21.174096 kubelet[2388]: E1009 01:09:21.173900 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:21.274400 kubelet[2388]: E1009 01:09:21.274322 2388 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-2-50096a0261\" not found"
Oct 9 01:09:22.224855 kubelet[2388]: I1009 01:09:22.224817 2388 apiserver.go:52] "Watching apiserver"
Oct 9 01:09:22.243343 kubelet[2388]: I1009 01:09:22.243290 2388 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 9 01:09:22.482886 systemd[1]: Reloading requested from client PID 2662 ('systemctl') (unit session-7.scope)...
Oct 9 01:09:22.482902 systemd[1]: Reloading...
Oct 9 01:09:22.583069 zram_generator::config[2705]: No configuration found.
Oct 9 01:09:22.686186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:09:22.764223 systemd[1]: Reloading finished in 280 ms.
Oct 9 01:09:22.806713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:09:22.821558 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 01:09:22.821850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:22.828270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:09:22.945890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:09:22.949976 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 01:09:23.007083 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:09:23.007083 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 01:09:23.007083 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:09:23.007083 kubelet[2753]: I1009 01:09:23.006170 2753 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 01:09:23.011099 kubelet[2753]: I1009 01:09:23.010871 2753 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 01:09:23.011099 kubelet[2753]: I1009 01:09:23.010889 2753 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 01:09:23.011099 kubelet[2753]: I1009 01:09:23.011076 2753 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 01:09:23.012896 kubelet[2753]: I1009 01:09:23.012862 2753 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 9 01:09:23.015347 kubelet[2753]: I1009 01:09:23.014797 2753 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 01:09:23.026420 kubelet[2753]: I1009 01:09:23.026398 2753 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 01:09:23.026886 kubelet[2753]: I1009 01:09:23.026784 2753 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:09:23.027052 kubelet[2753]: I1009 01:09:23.027007 2753 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:09:23.027285 kubelet[2753]: I1009 01:09:23.027168 2753 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:09:23.027285 kubelet[2753]: I1009 01:09:23.027183 2753 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:09:23.027285 kubelet[2753]: I1009 01:09:23.027210 2753 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:09:23.027694 kubelet[2753]: I1009 01:09:23.027541 2753 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 01:09:23.027694 kubelet[2753]: I1009 01:09:23.027561 2753 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:09:23.027779 kubelet[2753]: I1009 01:09:23.027764 2753 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:09:23.027962 kubelet[2753]: I1009 01:09:23.027882 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:09:23.029976 kubelet[2753]: I1009 01:09:23.029952 2753 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:09:23.030779 kubelet[2753]: I1009 01:09:23.030385 2753 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:09:23.030779 kubelet[2753]: I1009 01:09:23.030747 2753 server.go:1256] "Started kubelet"
Oct 9 01:09:23.035045 kubelet[2753]: I1009 01:09:23.034436 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:09:23.045887 kubelet[2753]: E1009 01:09:23.045737 2753 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 01:09:23.048434 kubelet[2753]: I1009 01:09:23.047503 2753 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:09:23.050580 kubelet[2753]: I1009 01:09:23.049826 2753 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 01:09:23.052745 kubelet[2753]: I1009 01:09:23.052350 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:09:23.052745 kubelet[2753]: I1009 01:09:23.052506 2753 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:09:23.053709 kubelet[2753]: I1009 01:09:23.047990 2753 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:09:23.053816 kubelet[2753]: I1009 01:09:23.048005 2753 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 01:09:23.054499 kubelet[2753]: I1009 01:09:23.054485 2753 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 01:09:23.056182 kubelet[2753]: I1009 01:09:23.055626 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 01:09:23.057096 kubelet[2753]: I1009 01:09:23.056679 2753 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Oct 9 01:09:23.057096 kubelet[2753]: I1009 01:09:23.056708 2753 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:09:23.057096 kubelet[2753]: I1009 01:09:23.056721 2753 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:09:23.057096 kubelet[2753]: E1009 01:09:23.056758 2753 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:09:23.067828 kubelet[2753]: I1009 01:09:23.067109 2753 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:09:23.067828 kubelet[2753]: I1009 01:09:23.067177 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:09:23.069328 kubelet[2753]: I1009 01:09:23.069200 2753 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:09:23.112433 kubelet[2753]: I1009 01:09:23.112399 2753 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:09:23.112433 kubelet[2753]: I1009 01:09:23.112423 2753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:09:23.112433 kubelet[2753]: I1009 01:09:23.112438 2753 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:09:23.112602 kubelet[2753]: I1009 01:09:23.112562 2753 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:09:23.112602 kubelet[2753]: I1009 01:09:23.112581 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:09:23.112602 kubelet[2753]: I1009 01:09:23.112587 2753 policy_none.go:49] "None policy: Start" Oct 9 01:09:23.114312 kubelet[2753]: I1009 01:09:23.113823 2753 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:09:23.114312 kubelet[2753]: I1009 01:09:23.113844 2753 state_mem.go:35] "Initializing new in-memory state store" Oct 9 
01:09:23.114312 kubelet[2753]: I1009 01:09:23.113959 2753 state_mem.go:75] "Updated machine memory state" Oct 9 01:09:23.125318 kubelet[2753]: I1009 01:09:23.125286 2753 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:09:23.125519 kubelet[2753]: I1009 01:09:23.125490 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:09:23.151379 kubelet[2753]: I1009 01:09:23.151341 2753 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.157449 kubelet[2753]: I1009 01:09:23.157426 2753 topology_manager.go:215] "Topology Admit Handler" podUID="aad7581e5ea41e87e1279b6e1a5e679e" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.157879 kubelet[2753]: I1009 01:09:23.157602 2753 topology_manager.go:215] "Topology Admit Handler" podUID="d79132f28a65594a69a939efae1f50c7" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.158084 kubelet[2753]: I1009 01:09:23.158070 2753 topology_manager.go:215] "Topology Admit Handler" podUID="8593bef8f9d8e422ddeff978afb61014" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.162665 kubelet[2753]: I1009 01:09:23.162651 2753 kubelet_node_status.go:112] "Node was previously registered" node="ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.162813 kubelet[2753]: I1009 01:09:23.162801 2753 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357270 kubelet[2753]: I1009 01:09:23.357167 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aad7581e5ea41e87e1279b6e1a5e679e-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-2-50096a0261\" (UID: \"aad7581e5ea41e87e1279b6e1a5e679e\") " pod="kube-system/kube-scheduler-ci-4116-0-0-2-50096a0261" 
Oct 9 01:09:23.357270 kubelet[2753]: I1009 01:09:23.357210 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d79132f28a65594a69a939efae1f50c7-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-2-50096a0261\" (UID: \"d79132f28a65594a69a939efae1f50c7\") " pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357270 kubelet[2753]: I1009 01:09:23.357236 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d79132f28a65594a69a939efae1f50c7-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-2-50096a0261\" (UID: \"d79132f28a65594a69a939efae1f50c7\") " pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357270 kubelet[2753]: I1009 01:09:23.357256 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d79132f28a65594a69a939efae1f50c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-2-50096a0261\" (UID: \"d79132f28a65594a69a939efae1f50c7\") " pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357270 kubelet[2753]: I1009 01:09:23.357273 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357457 kubelet[2753]: I1009 01:09:23.357290 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-k8s-certs\") pod 
\"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357457 kubelet[2753]: I1009 01:09:23.357306 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357457 kubelet[2753]: I1009 01:09:23.357322 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261" Oct 9 01:09:23.357457 kubelet[2753]: I1009 01:09:23.357339 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8593bef8f9d8e422ddeff978afb61014-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-2-50096a0261\" (UID: \"8593bef8f9d8e422ddeff978afb61014\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261" Oct 9 01:09:24.037320 kubelet[2753]: I1009 01:09:24.037271 2753 apiserver.go:52] "Watching apiserver" Oct 9 01:09:24.055252 kubelet[2753]: I1009 01:09:24.055194 2753 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:09:24.125996 kubelet[2753]: E1009 01:09:24.125768 2753 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4116-0-0-2-50096a0261\" already exists" pod="kube-system/kube-scheduler-ci-4116-0-0-2-50096a0261" Oct 9 
01:09:24.192562 kubelet[2753]: I1009 01:09:24.192511 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4116-0-0-2-50096a0261" podStartSLOduration=1.192475327 podStartE2EDuration="1.192475327s" podCreationTimestamp="2024-10-09 01:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:09:24.175976448 +0000 UTC m=+1.220639050" watchObservedRunningTime="2024-10-09 01:09:24.192475327 +0000 UTC m=+1.237137899" Oct 9 01:09:24.206805 kubelet[2753]: I1009 01:09:24.206622 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4116-0-0-2-50096a0261" podStartSLOduration=1.206576864 podStartE2EDuration="1.206576864s" podCreationTimestamp="2024-10-09 01:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:09:24.205814074 +0000 UTC m=+1.250476657" watchObservedRunningTime="2024-10-09 01:09:24.206576864 +0000 UTC m=+1.251239437" Oct 9 01:09:24.206805 kubelet[2753]: I1009 01:09:24.206713 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4116-0-0-2-50096a0261" podStartSLOduration=1.2066964470000001 podStartE2EDuration="1.206696447s" podCreationTimestamp="2024-10-09 01:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:09:24.194259439 +0000 UTC m=+1.238922012" watchObservedRunningTime="2024-10-09 01:09:24.206696447 +0000 UTC m=+1.251359019" Oct 9 01:09:27.429897 sudo[1807]: pam_unix(sudo:session): session closed for user root Oct 9 01:09:27.592383 sshd[1804]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:27.596102 systemd[1]: sshd@6-188.245.175.223:22-139.178.68.195:48794.service: 
Deactivated successfully. Oct 9 01:09:27.598373 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:09:27.598552 systemd[1]: session-7.scope: Consumed 3.575s CPU time, 184.0M memory peak, 0B memory swap peak. Oct 9 01:09:27.600602 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:09:27.601683 systemd-logind[1482]: Removed session 7. Oct 9 01:09:36.490252 kubelet[2753]: I1009 01:09:36.490134 2753 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:09:36.490780 kubelet[2753]: I1009 01:09:36.490638 2753 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:09:36.490824 containerd[1498]: time="2024-10-09T01:09:36.490470933Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 01:09:37.382352 kubelet[2753]: I1009 01:09:37.382194 2753 topology_manager.go:215] "Topology Admit Handler" podUID="65bb0ad9-0332-417f-af2d-2d0fb79ee06b" podNamespace="kube-system" podName="kube-proxy-2mq68" Oct 9 01:09:37.392321 systemd[1]: Created slice kubepods-besteffort-pod65bb0ad9_0332_417f_af2d_2d0fb79ee06b.slice - libcontainer container kubepods-besteffort-pod65bb0ad9_0332_417f_af2d_2d0fb79ee06b.slice. 
Oct 9 01:09:37.452824 kubelet[2753]: I1009 01:09:37.452765 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65bb0ad9-0332-417f-af2d-2d0fb79ee06b-kube-proxy\") pod \"kube-proxy-2mq68\" (UID: \"65bb0ad9-0332-417f-af2d-2d0fb79ee06b\") " pod="kube-system/kube-proxy-2mq68" Oct 9 01:09:37.452931 kubelet[2753]: I1009 01:09:37.452874 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65bb0ad9-0332-417f-af2d-2d0fb79ee06b-lib-modules\") pod \"kube-proxy-2mq68\" (UID: \"65bb0ad9-0332-417f-af2d-2d0fb79ee06b\") " pod="kube-system/kube-proxy-2mq68" Oct 9 01:09:37.452931 kubelet[2753]: I1009 01:09:37.452905 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pq9\" (UniqueName: \"kubernetes.io/projected/65bb0ad9-0332-417f-af2d-2d0fb79ee06b-kube-api-access-b7pq9\") pod \"kube-proxy-2mq68\" (UID: \"65bb0ad9-0332-417f-af2d-2d0fb79ee06b\") " pod="kube-system/kube-proxy-2mq68" Oct 9 01:09:37.452931 kubelet[2753]: I1009 01:09:37.452923 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65bb0ad9-0332-417f-af2d-2d0fb79ee06b-xtables-lock\") pod \"kube-proxy-2mq68\" (UID: \"65bb0ad9-0332-417f-af2d-2d0fb79ee06b\") " pod="kube-system/kube-proxy-2mq68" Oct 9 01:09:37.563412 kubelet[2753]: I1009 01:09:37.563359 2753 topology_manager.go:215] "Topology Admit Handler" podUID="e54db7b3-aea2-4728-ae8b-f3bbba238a6a" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-687zm" Oct 9 01:09:37.571146 systemd[1]: Created slice kubepods-besteffort-pode54db7b3_aea2_4728_ae8b_f3bbba238a6a.slice - libcontainer container kubepods-besteffort-pode54db7b3_aea2_4728_ae8b_f3bbba238a6a.slice. 
Oct 9 01:09:37.653722 kubelet[2753]: I1009 01:09:37.653602 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e54db7b3-aea2-4728-ae8b-f3bbba238a6a-var-lib-calico\") pod \"tigera-operator-5d56685c77-687zm\" (UID: \"e54db7b3-aea2-4728-ae8b-f3bbba238a6a\") " pod="tigera-operator/tigera-operator-5d56685c77-687zm" Oct 9 01:09:37.653722 kubelet[2753]: I1009 01:09:37.653647 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qks\" (UniqueName: \"kubernetes.io/projected/e54db7b3-aea2-4728-ae8b-f3bbba238a6a-kube-api-access-59qks\") pod \"tigera-operator-5d56685c77-687zm\" (UID: \"e54db7b3-aea2-4728-ae8b-f3bbba238a6a\") " pod="tigera-operator/tigera-operator-5d56685c77-687zm" Oct 9 01:09:37.700490 containerd[1498]: time="2024-10-09T01:09:37.700406443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mq68,Uid:65bb0ad9-0332-417f-af2d-2d0fb79ee06b,Namespace:kube-system,Attempt:0,}" Oct 9 01:09:37.721951 containerd[1498]: time="2024-10-09T01:09:37.721743443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:37.721951 containerd[1498]: time="2024-10-09T01:09:37.721801496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:37.721951 containerd[1498]: time="2024-10-09T01:09:37.721814972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:37.721951 containerd[1498]: time="2024-10-09T01:09:37.721901718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:37.748164 systemd[1]: Started cri-containerd-afc16f99fda2da7fb20cc92787340824307f6dca9dc8dbb13646e46d4765d93e.scope - libcontainer container afc16f99fda2da7fb20cc92787340824307f6dca9dc8dbb13646e46d4765d93e. Oct 9 01:09:37.774144 containerd[1498]: time="2024-10-09T01:09:37.774090895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mq68,Uid:65bb0ad9-0332-417f-af2d-2d0fb79ee06b,Namespace:kube-system,Attempt:0,} returns sandbox id \"afc16f99fda2da7fb20cc92787340824307f6dca9dc8dbb13646e46d4765d93e\"" Oct 9 01:09:37.778017 containerd[1498]: time="2024-10-09T01:09:37.777982336Z" level=info msg="CreateContainer within sandbox \"afc16f99fda2da7fb20cc92787340824307f6dca9dc8dbb13646e46d4765d93e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:09:37.789563 containerd[1498]: time="2024-10-09T01:09:37.789534272Z" level=info msg="CreateContainer within sandbox \"afc16f99fda2da7fb20cc92787340824307f6dca9dc8dbb13646e46d4765d93e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5384f9a2fd4c725ab1d71cd4cc3d96668a2badac2216a61ec208c377dd73f65f\"" Oct 9 01:09:37.790049 containerd[1498]: time="2024-10-09T01:09:37.789872811Z" level=info msg="StartContainer for \"5384f9a2fd4c725ab1d71cd4cc3d96668a2badac2216a61ec208c377dd73f65f\"" Oct 9 01:09:37.815159 systemd[1]: Started cri-containerd-5384f9a2fd4c725ab1d71cd4cc3d96668a2badac2216a61ec208c377dd73f65f.scope - libcontainer container 5384f9a2fd4c725ab1d71cd4cc3d96668a2badac2216a61ec208c377dd73f65f. 
Oct 9 01:09:37.842274 containerd[1498]: time="2024-10-09T01:09:37.842201868Z" level=info msg="StartContainer for \"5384f9a2fd4c725ab1d71cd4cc3d96668a2badac2216a61ec208c377dd73f65f\" returns successfully" Oct 9 01:09:37.876885 containerd[1498]: time="2024-10-09T01:09:37.876793062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-687zm,Uid:e54db7b3-aea2-4728-ae8b-f3bbba238a6a,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:09:37.895533 containerd[1498]: time="2024-10-09T01:09:37.895349255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:37.896038 containerd[1498]: time="2024-10-09T01:09:37.895711420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:37.896038 containerd[1498]: time="2024-10-09T01:09:37.895750634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:37.896639 containerd[1498]: time="2024-10-09T01:09:37.896584415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:37.914369 systemd[1]: Started cri-containerd-c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49.scope - libcontainer container c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49. 
Oct 9 01:09:37.955576 containerd[1498]: time="2024-10-09T01:09:37.954926114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-687zm,Uid:e54db7b3-aea2-4728-ae8b-f3bbba238a6a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49\"" Oct 9 01:09:37.957490 containerd[1498]: time="2024-10-09T01:09:37.957455149Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:09:38.118718 kubelet[2753]: I1009 01:09:38.118673 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2mq68" podStartSLOduration=1.118640458 podStartE2EDuration="1.118640458s" podCreationTimestamp="2024-10-09 01:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:09:38.118035196 +0000 UTC m=+15.162697768" watchObservedRunningTime="2024-10-09 01:09:38.118640458 +0000 UTC m=+15.163303029" Oct 9 01:09:39.409610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712800727.mount: Deactivated successfully. 
Oct 9 01:09:39.761553 containerd[1498]: time="2024-10-09T01:09:39.761511243Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:39.762450 containerd[1498]: time="2024-10-09T01:09:39.762360983Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136569" Oct 9 01:09:39.763093 containerd[1498]: time="2024-10-09T01:09:39.763049063Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:39.764819 containerd[1498]: time="2024-10-09T01:09:39.764784070Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:39.765504 containerd[1498]: time="2024-10-09T01:09:39.765403518Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.807911899s" Oct 9 01:09:39.765504 containerd[1498]: time="2024-10-09T01:09:39.765427184Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 01:09:39.768100 containerd[1498]: time="2024-10-09T01:09:39.768072176Z" level=info msg="CreateContainer within sandbox \"c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:09:39.781566 containerd[1498]: time="2024-10-09T01:09:39.781534623Z" level=info msg="CreateContainer within sandbox 
\"c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20\"" Oct 9 01:09:39.781840 containerd[1498]: time="2024-10-09T01:09:39.781806504Z" level=info msg="StartContainer for \"49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20\"" Oct 9 01:09:39.811176 systemd[1]: Started cri-containerd-49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20.scope - libcontainer container 49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20. Oct 9 01:09:39.834161 containerd[1498]: time="2024-10-09T01:09:39.834004938Z" level=info msg="StartContainer for \"49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20\" returns successfully" Oct 9 01:09:42.946251 kubelet[2753]: I1009 01:09:42.946197 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-687zm" podStartSLOduration=4.13681544 podStartE2EDuration="5.946097639s" podCreationTimestamp="2024-10-09 01:09:37 +0000 UTC" firstStartedPulling="2024-10-09 01:09:37.956441804 +0000 UTC m=+15.001104375" lastFinishedPulling="2024-10-09 01:09:39.765724002 +0000 UTC m=+16.810386574" observedRunningTime="2024-10-09 01:09:40.125181826 +0000 UTC m=+17.169844419" watchObservedRunningTime="2024-10-09 01:09:42.946097639 +0000 UTC m=+19.990760211" Oct 9 01:09:42.946683 kubelet[2753]: I1009 01:09:42.946504 2753 topology_manager.go:215] "Topology Admit Handler" podUID="7cb6c53e-fbea-48e4-b7b2-3240dc039f9a" podNamespace="calico-system" podName="calico-typha-84d578fb67-kh5b2" Oct 9 01:09:42.960684 systemd[1]: Created slice kubepods-besteffort-pod7cb6c53e_fbea_48e4_b7b2_3240dc039f9a.slice - libcontainer container kubepods-besteffort-pod7cb6c53e_fbea_48e4_b7b2_3240dc039f9a.slice. 
Oct 9 01:09:42.988002 kubelet[2753]: I1009 01:09:42.987953 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7cb6c53e-fbea-48e4-b7b2-3240dc039f9a-typha-certs\") pod \"calico-typha-84d578fb67-kh5b2\" (UID: \"7cb6c53e-fbea-48e4-b7b2-3240dc039f9a\") " pod="calico-system/calico-typha-84d578fb67-kh5b2" Oct 9 01:09:42.988002 kubelet[2753]: I1009 01:09:42.988010 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fct5n\" (UniqueName: \"kubernetes.io/projected/7cb6c53e-fbea-48e4-b7b2-3240dc039f9a-kube-api-access-fct5n\") pod \"calico-typha-84d578fb67-kh5b2\" (UID: \"7cb6c53e-fbea-48e4-b7b2-3240dc039f9a\") " pod="calico-system/calico-typha-84d578fb67-kh5b2" Oct 9 01:09:42.988189 kubelet[2753]: I1009 01:09:42.988058 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb6c53e-fbea-48e4-b7b2-3240dc039f9a-tigera-ca-bundle\") pod \"calico-typha-84d578fb67-kh5b2\" (UID: \"7cb6c53e-fbea-48e4-b7b2-3240dc039f9a\") " pod="calico-system/calico-typha-84d578fb67-kh5b2" Oct 9 01:09:43.033153 kubelet[2753]: I1009 01:09:43.032572 2753 topology_manager.go:215] "Topology Admit Handler" podUID="8379b78d-0a47-4401-9722-651102258b68" podNamespace="calico-system" podName="calico-node-6tp5q" Oct 9 01:09:43.040786 systemd[1]: Created slice kubepods-besteffort-pod8379b78d_0a47_4401_9722_651102258b68.slice - libcontainer container kubepods-besteffort-pod8379b78d_0a47_4401_9722_651102258b68.slice. 
Oct 9 01:09:43.088442 kubelet[2753]: I1009 01:09:43.088405 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-var-lib-calico\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089119 kubelet[2753]: I1009 01:09:43.088711 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8379b78d-0a47-4401-9722-651102258b68-node-certs\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089119 kubelet[2753]: I1009 01:09:43.088754 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-flexvol-driver-host\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089119 kubelet[2753]: I1009 01:09:43.088793 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-cni-log-dir\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089119 kubelet[2753]: I1009 01:09:43.088822 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqp5\" (UniqueName: \"kubernetes.io/projected/8379b78d-0a47-4401-9722-651102258b68-kube-api-access-jhqp5\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089119 kubelet[2753]: I1009 01:09:43.088849 
2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-xtables-lock\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089392 kubelet[2753]: I1009 01:09:43.088879 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-cni-bin-dir\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089392 kubelet[2753]: I1009 01:09:43.088907 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-cni-net-dir\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089392 kubelet[2753]: I1009 01:09:43.088950 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-lib-modules\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089392 kubelet[2753]: I1009 01:09:43.088978 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8379b78d-0a47-4401-9722-651102258b68-tigera-ca-bundle\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089392 kubelet[2753]: I1009 01:09:43.089005 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-var-run-calico\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.089558 kubelet[2753]: I1009 01:09:43.089074 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8379b78d-0a47-4401-9722-651102258b68-policysync\") pod \"calico-node-6tp5q\" (UID: \"8379b78d-0a47-4401-9722-651102258b68\") " pod="calico-system/calico-node-6tp5q" Oct 9 01:09:43.146067 kubelet[2753]: I1009 01:09:43.144874 2753 topology_manager.go:215] "Topology Admit Handler" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" podNamespace="calico-system" podName="csi-node-driver-9gmbj" Oct 9 01:09:43.146067 kubelet[2753]: E1009 01:09:43.145157 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gmbj" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" Oct 9 01:09:43.189508 kubelet[2753]: I1009 01:09:43.189481 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36a2e714-58d6-4153-a262-4f8ad2d40b26-varrun\") pod \"csi-node-driver-9gmbj\" (UID: \"36a2e714-58d6-4153-a262-4f8ad2d40b26\") " pod="calico-system/csi-node-driver-9gmbj" Oct 9 01:09:43.189778 kubelet[2753]: I1009 01:09:43.189766 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a2e714-58d6-4153-a262-4f8ad2d40b26-kubelet-dir\") pod \"csi-node-driver-9gmbj\" (UID: \"36a2e714-58d6-4153-a262-4f8ad2d40b26\") " pod="calico-system/csi-node-driver-9gmbj" Oct 9 
01:09:43.189934 kubelet[2753]: I1009 01:09:43.189920 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kmr\" (UniqueName: \"kubernetes.io/projected/36a2e714-58d6-4153-a262-4f8ad2d40b26-kube-api-access-56kmr\") pod \"csi-node-driver-9gmbj\" (UID: \"36a2e714-58d6-4153-a262-4f8ad2d40b26\") " pod="calico-system/csi-node-driver-9gmbj" Oct 9 01:09:43.190167 kubelet[2753]: I1009 01:09:43.190153 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36a2e714-58d6-4153-a262-4f8ad2d40b26-socket-dir\") pod \"csi-node-driver-9gmbj\" (UID: \"36a2e714-58d6-4153-a262-4f8ad2d40b26\") " pod="calico-system/csi-node-driver-9gmbj" Oct 9 01:09:43.190261 kubelet[2753]: I1009 01:09:43.190251 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36a2e714-58d6-4153-a262-4f8ad2d40b26-registration-dir\") pod \"csi-node-driver-9gmbj\" (UID: \"36a2e714-58d6-4153-a262-4f8ad2d40b26\") " pod="calico-system/csi-node-driver-9gmbj" Oct 9 01:09:43.204446 kubelet[2753]: E1009 01:09:43.204372 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.204446 kubelet[2753]: W1009 01:09:43.205125 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.204446 kubelet[2753]: E1009 01:09:43.205154 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.270691 containerd[1498]: time="2024-10-09T01:09:43.270637710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d578fb67-kh5b2,Uid:7cb6c53e-fbea-48e4-b7b2-3240dc039f9a,Namespace:calico-system,Attempt:0,}" Oct 9 01:09:43.293630 kubelet[2753]: E1009 01:09:43.293551 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.293630 kubelet[2753]: W1009 01:09:43.293570 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.293630 kubelet[2753]: E1009 01:09:43.293596 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.295623 kubelet[2753]: E1009 01:09:43.295339 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.295623 kubelet[2753]: W1009 01:09:43.295364 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.295623 kubelet[2753]: E1009 01:09:43.295380 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.296049 kubelet[2753]: E1009 01:09:43.295873 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.296049 kubelet[2753]: W1009 01:09:43.295883 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.296049 kubelet[2753]: E1009 01:09:43.295931 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.296498 kubelet[2753]: E1009 01:09:43.296298 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.296498 kubelet[2753]: W1009 01:09:43.296310 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.296498 kubelet[2753]: E1009 01:09:43.296390 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.296746 kubelet[2753]: E1009 01:09:43.296636 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.296746 kubelet[2753]: W1009 01:09:43.296648 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.296746 kubelet[2753]: E1009 01:09:43.296673 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.297167 kubelet[2753]: E1009 01:09:43.296990 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.297167 kubelet[2753]: W1009 01:09:43.297011 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.297167 kubelet[2753]: E1009 01:09:43.297041 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.297535 kubelet[2753]: E1009 01:09:43.297405 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.297535 kubelet[2753]: W1009 01:09:43.297415 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.297535 kubelet[2753]: E1009 01:09:43.297453 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.298067 kubelet[2753]: E1009 01:09:43.297835 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.298067 kubelet[2753]: W1009 01:09:43.297844 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.298067 kubelet[2753]: E1009 01:09:43.297898 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.298282 kubelet[2753]: E1009 01:09:43.298191 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.298282 kubelet[2753]: W1009 01:09:43.298201 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.298481 kubelet[2753]: E1009 01:09:43.298383 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.298637 kubelet[2753]: E1009 01:09:43.298556 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.298637 kubelet[2753]: W1009 01:09:43.298565 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.298793 kubelet[2753]: E1009 01:09:43.298717 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.299018 kubelet[2753]: E1009 01:09:43.298921 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.299018 kubelet[2753]: W1009 01:09:43.298930 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.299272 kubelet[2753]: E1009 01:09:43.299139 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.299455 kubelet[2753]: E1009 01:09:43.299369 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.299455 kubelet[2753]: W1009 01:09:43.299380 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.299619 kubelet[2753]: E1009 01:09:43.299531 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.299767 kubelet[2753]: E1009 01:09:43.299721 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.299767 kubelet[2753]: W1009 01:09:43.299754 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.300061 kubelet[2753]: E1009 01:09:43.299929 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.300259 kubelet[2753]: E1009 01:09:43.300169 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.300259 kubelet[2753]: W1009 01:09:43.300179 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.300443 kubelet[2753]: E1009 01:09:43.300350 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.300581 kubelet[2753]: E1009 01:09:43.300525 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.300581 kubelet[2753]: W1009 01:09:43.300534 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.300743 kubelet[2753]: E1009 01:09:43.300664 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.301044 kubelet[2753]: E1009 01:09:43.300956 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.301044 kubelet[2753]: W1009 01:09:43.300965 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.301279 kubelet[2753]: E1009 01:09:43.301127 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.301382 kubelet[2753]: E1009 01:09:43.301371 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.301498 kubelet[2753]: W1009 01:09:43.301442 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.301588 kubelet[2753]: E1009 01:09:43.301534 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.302052 kubelet[2753]: E1009 01:09:43.301903 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.302052 kubelet[2753]: W1009 01:09:43.301913 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.302052 kubelet[2753]: E1009 01:09:43.302007 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.302497 kubelet[2753]: E1009 01:09:43.302286 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.302497 kubelet[2753]: W1009 01:09:43.302296 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.302497 kubelet[2753]: E1009 01:09:43.302335 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.302886 kubelet[2753]: E1009 01:09:43.302674 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.302886 kubelet[2753]: W1009 01:09:43.302723 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.302886 kubelet[2753]: E1009 01:09:43.302768 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.302958 containerd[1498]: time="2024-10-09T01:09:43.301239920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:43.302958 containerd[1498]: time="2024-10-09T01:09:43.301294634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:43.302958 containerd[1498]: time="2024-10-09T01:09:43.301310385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:43.302958 containerd[1498]: time="2024-10-09T01:09:43.301404805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:43.303420 kubelet[2753]: E1009 01:09:43.303122 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.303420 kubelet[2753]: W1009 01:09:43.303132 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.303420 kubelet[2753]: E1009 01:09:43.303211 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.303420 kubelet[2753]: E1009 01:09:43.303359 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.303420 kubelet[2753]: W1009 01:09:43.303366 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.303891 kubelet[2753]: E1009 01:09:43.303832 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.303891 kubelet[2753]: W1009 01:09:43.303842 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.303891 kubelet[2753]: E1009 01:09:43.303852 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.305749 kubelet[2753]: E1009 01:09:43.305323 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.305749 kubelet[2753]: E1009 01:09:43.305518 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.305749 kubelet[2753]: W1009 01:09:43.305525 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.305749 kubelet[2753]: E1009 01:09:43.305538 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.305749 kubelet[2753]: E1009 01:09:43.305713 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.305749 kubelet[2753]: W1009 01:09:43.305720 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.305749 kubelet[2753]: E1009 01:09:43.305731 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:09:43.313510 kubelet[2753]: E1009 01:09:43.313489 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:09:43.313620 kubelet[2753]: W1009 01:09:43.313607 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:09:43.313746 kubelet[2753]: E1009 01:09:43.313733 2753 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:09:43.341329 systemd[1]: Started cri-containerd-2bfbea28b1f4079b63391241779627672e349d729188a301cb7e23cf4a652cdb.scope - libcontainer container 2bfbea28b1f4079b63391241779627672e349d729188a301cb7e23cf4a652cdb. Oct 9 01:09:43.346523 containerd[1498]: time="2024-10-09T01:09:43.346237419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6tp5q,Uid:8379b78d-0a47-4401-9722-651102258b68,Namespace:calico-system,Attempt:0,}" Oct 9 01:09:43.383413 containerd[1498]: time="2024-10-09T01:09:43.383307944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:43.383528 containerd[1498]: time="2024-10-09T01:09:43.383441078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:43.383528 containerd[1498]: time="2024-10-09T01:09:43.383470705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:43.384039 containerd[1498]: time="2024-10-09T01:09:43.383677111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:43.412882 systemd[1]: Started cri-containerd-d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7.scope - libcontainer container d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7. Oct 9 01:09:43.461942 containerd[1498]: time="2024-10-09T01:09:43.461744725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6tp5q,Uid:8379b78d-0a47-4401-9722-651102258b68,Namespace:calico-system,Attempt:0,} returns sandbox id \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\"" Oct 9 01:09:43.482951 containerd[1498]: time="2024-10-09T01:09:43.482804185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:09:43.483486 containerd[1498]: time="2024-10-09T01:09:43.483444851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d578fb67-kh5b2,Uid:7cb6c53e-fbea-48e4-b7b2-3240dc039f9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bfbea28b1f4079b63391241779627672e349d729188a301cb7e23cf4a652cdb\"" Oct 9 01:09:44.753330 containerd[1498]: time="2024-10-09T01:09:44.753268403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:44.754130 containerd[1498]: time="2024-10-09T01:09:44.754096468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 01:09:44.754923 containerd[1498]: time="2024-10-09T01:09:44.754885348Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:44.756480 containerd[1498]: time="2024-10-09T01:09:44.756445043Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:44.756921 containerd[1498]: time="2024-10-09T01:09:44.756893201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.273946202s" Oct 9 01:09:44.756967 containerd[1498]: time="2024-10-09T01:09:44.756920434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 01:09:44.757513 containerd[1498]: time="2024-10-09T01:09:44.757490324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:09:44.761892 containerd[1498]: time="2024-10-09T01:09:44.761863964Z" level=info msg="CreateContainer within sandbox \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:09:44.786072 containerd[1498]: time="2024-10-09T01:09:44.786039213Z" level=info msg="CreateContainer within sandbox \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71\"" Oct 9 01:09:44.789107 containerd[1498]: time="2024-10-09T01:09:44.789057099Z" level=info msg="StartContainer for \"b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71\"" Oct 9 01:09:44.815122 systemd[1]: run-containerd-runc-k8s.io-b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71-runc.tc8gXk.mount: 
Deactivated successfully. Oct 9 01:09:44.825151 systemd[1]: Started cri-containerd-b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71.scope - libcontainer container b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71. Oct 9 01:09:44.864400 containerd[1498]: time="2024-10-09T01:09:44.863997185Z" level=info msg="StartContainer for \"b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71\" returns successfully" Oct 9 01:09:44.874134 systemd[1]: cri-containerd-b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71.scope: Deactivated successfully. Oct 9 01:09:44.916766 containerd[1498]: time="2024-10-09T01:09:44.916699030Z" level=info msg="shim disconnected" id=b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71 namespace=k8s.io Oct 9 01:09:44.916766 containerd[1498]: time="2024-10-09T01:09:44.916746711Z" level=warning msg="cleaning up after shim disconnected" id=b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71 namespace=k8s.io Oct 9 01:09:44.916766 containerd[1498]: time="2024-10-09T01:09:44.916754306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:09:45.058763 kubelet[2753]: E1009 01:09:45.058504 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gmbj" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" Oct 9 01:09:45.095862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9cdff40309efc3d4e860ef4470bf67d98f7e263a7ae40b768e7c97ec9e8dd71-rootfs.mount: Deactivated successfully. 
Oct 9 01:09:46.487822 containerd[1498]: time="2024-10-09T01:09:46.487753814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:46.489069 containerd[1498]: time="2024-10-09T01:09:46.489012882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 01:09:46.489880 containerd[1498]: time="2024-10-09T01:09:46.489860034Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:46.492507 containerd[1498]: time="2024-10-09T01:09:46.492482419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:46.493096 containerd[1498]: time="2024-10-09T01:09:46.493070584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 1.735557878s" Oct 9 01:09:46.493143 containerd[1498]: time="2024-10-09T01:09:46.493095792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 01:09:46.494171 containerd[1498]: time="2024-10-09T01:09:46.494146171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:09:46.507051 containerd[1498]: time="2024-10-09T01:09:46.506196889Z" level=info msg="CreateContainer within sandbox \"2bfbea28b1f4079b63391241779627672e349d729188a301cb7e23cf4a652cdb\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:09:46.525727 containerd[1498]: time="2024-10-09T01:09:46.525685435Z" level=info msg="CreateContainer within sandbox \"2bfbea28b1f4079b63391241779627672e349d729188a301cb7e23cf4a652cdb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8e87a6ca2f5462e04470f6ee5fc814bcbe324f2fb4c95acbf4c08c3e9b060bd6\"" Oct 9 01:09:46.526699 containerd[1498]: time="2024-10-09T01:09:46.526156996Z" level=info msg="StartContainer for \"8e87a6ca2f5462e04470f6ee5fc814bcbe324f2fb4c95acbf4c08c3e9b060bd6\"" Oct 9 01:09:46.556884 systemd[1]: Started cri-containerd-8e87a6ca2f5462e04470f6ee5fc814bcbe324f2fb4c95acbf4c08c3e9b060bd6.scope - libcontainer container 8e87a6ca2f5462e04470f6ee5fc814bcbe324f2fb4c95acbf4c08c3e9b060bd6. Oct 9 01:09:46.605230 containerd[1498]: time="2024-10-09T01:09:46.605190590Z" level=info msg="StartContainer for \"8e87a6ca2f5462e04470f6ee5fc814bcbe324f2fb4c95acbf4c08c3e9b060bd6\" returns successfully" Oct 9 01:09:47.057941 kubelet[2753]: E1009 01:09:47.057584 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gmbj" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" Oct 9 01:09:47.141297 kubelet[2753]: I1009 01:09:47.141106 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-84d578fb67-kh5b2" podStartSLOduration=2.133163791 podStartE2EDuration="5.141062637s" podCreationTimestamp="2024-10-09 01:09:42 +0000 UTC" firstStartedPulling="2024-10-09 01:09:43.485699708 +0000 UTC m=+20.530362280" lastFinishedPulling="2024-10-09 01:09:46.493598554 +0000 UTC m=+23.538261126" observedRunningTime="2024-10-09 01:09:47.140927397 +0000 UTC m=+24.185589970" watchObservedRunningTime="2024-10-09 01:09:47.141062637 +0000 UTC m=+24.185725219" Oct 9 
01:09:48.134298 kubelet[2753]: I1009 01:09:48.134267 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:09:49.058553 kubelet[2753]: E1009 01:09:49.057571 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gmbj" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" Oct 9 01:09:49.288040 containerd[1498]: time="2024-10-09T01:09:49.287962642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:49.289106 containerd[1498]: time="2024-10-09T01:09:49.289058186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 01:09:49.290010 containerd[1498]: time="2024-10-09T01:09:49.289972814Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:49.291800 containerd[1498]: time="2024-10-09T01:09:49.291763406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:49.292393 containerd[1498]: time="2024-10-09T01:09:49.292348733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 2.798149021s" Oct 9 01:09:49.292393 containerd[1498]: time="2024-10-09T01:09:49.292391316Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 01:09:49.294714 containerd[1498]: time="2024-10-09T01:09:49.294685270Z" level=info msg="CreateContainer within sandbox \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:09:49.338972 containerd[1498]: time="2024-10-09T01:09:49.338823500Z" level=info msg="CreateContainer within sandbox \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914\"" Oct 9 01:09:49.339887 containerd[1498]: time="2024-10-09T01:09:49.339758017Z" level=info msg="StartContainer for \"3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914\"" Oct 9 01:09:49.409248 systemd[1]: run-containerd-runc-k8s.io-3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914-runc.TkQIjF.mount: Deactivated successfully. Oct 9 01:09:49.418161 systemd[1]: Started cri-containerd-3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914.scope - libcontainer container 3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914. Oct 9 01:09:49.456593 containerd[1498]: time="2024-10-09T01:09:49.456545747Z" level=info msg="StartContainer for \"3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914\" returns successfully" Oct 9 01:09:49.841043 systemd[1]: cri-containerd-3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914.scope: Deactivated successfully. Oct 9 01:09:49.868798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914-rootfs.mount: Deactivated successfully. 
Oct 9 01:09:49.897757 kubelet[2753]: I1009 01:09:49.897711 2753 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:09:49.903508 containerd[1498]: time="2024-10-09T01:09:49.903452218Z" level=info msg="shim disconnected" id=3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914 namespace=k8s.io Oct 9 01:09:49.903508 containerd[1498]: time="2024-10-09T01:09:49.903504667Z" level=warning msg="cleaning up after shim disconnected" id=3bd06fab970c6da34db170c8adbc42a8c4316e756c93a3657e814e45cc4a5914 namespace=k8s.io Oct 9 01:09:49.903998 containerd[1498]: time="2024-10-09T01:09:49.903512823Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:09:49.925256 kubelet[2753]: I1009 01:09:49.923700 2753 topology_manager.go:215] "Topology Admit Handler" podUID="8fd8d9a3-75d9-4254-a06e-cd07f555d058" podNamespace="kube-system" podName="coredns-76f75df574-sm4xg" Oct 9 01:09:49.925365 containerd[1498]: time="2024-10-09T01:09:49.924320658Z" level=warning msg="cleanup warnings time=\"2024-10-09T01:09:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 01:09:49.929211 kubelet[2753]: I1009 01:09:49.929098 2753 topology_manager.go:215] "Topology Admit Handler" podUID="f4b14037-32ba-4566-b2be-c2127ceb8d7c" podNamespace="kube-system" podName="coredns-76f75df574-dvlf5" Oct 9 01:09:49.930889 kubelet[2753]: I1009 01:09:49.930873 2753 topology_manager.go:215] "Topology Admit Handler" podUID="f0511c8d-9ec1-40e7-adb4-51ba165ea4f7" podNamespace="calico-system" podName="calico-kube-controllers-587766768d-6csts" Oct 9 01:09:49.932335 kubelet[2753]: W1009 01:09:49.932307 2753 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4116-0-0-2-50096a0261" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no 
relationship found between node 'ci-4116-0-0-2-50096a0261' and this object Oct 9 01:09:49.932730 kubelet[2753]: E1009 01:09:49.932721 2753 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4116-0-0-2-50096a0261" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4116-0-0-2-50096a0261' and this object Oct 9 01:09:49.936604 systemd[1]: Created slice kubepods-burstable-pod8fd8d9a3_75d9_4254_a06e_cd07f555d058.slice - libcontainer container kubepods-burstable-pod8fd8d9a3_75d9_4254_a06e_cd07f555d058.slice. Oct 9 01:09:49.945199 systemd[1]: Created slice kubepods-burstable-podf4b14037_32ba_4566_b2be_c2127ceb8d7c.slice - libcontainer container kubepods-burstable-podf4b14037_32ba_4566_b2be_c2127ceb8d7c.slice. Oct 9 01:09:49.954195 systemd[1]: Created slice kubepods-besteffort-podf0511c8d_9ec1_40e7_adb4_51ba165ea4f7.slice - libcontainer container kubepods-besteffort-podf0511c8d_9ec1_40e7_adb4_51ba165ea4f7.slice. 
Oct 9 01:09:50.040325 kubelet[2753]: I1009 01:09:50.040208 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqzdr\" (UniqueName: \"kubernetes.io/projected/f0511c8d-9ec1-40e7-adb4-51ba165ea4f7-kube-api-access-wqzdr\") pod \"calico-kube-controllers-587766768d-6csts\" (UID: \"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7\") " pod="calico-system/calico-kube-controllers-587766768d-6csts" Oct 9 01:09:50.040325 kubelet[2753]: I1009 01:09:50.040269 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fd8d9a3-75d9-4254-a06e-cd07f555d058-config-volume\") pod \"coredns-76f75df574-sm4xg\" (UID: \"8fd8d9a3-75d9-4254-a06e-cd07f555d058\") " pod="kube-system/coredns-76f75df574-sm4xg" Oct 9 01:09:50.040516 kubelet[2753]: I1009 01:09:50.040460 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zph6s\" (UniqueName: \"kubernetes.io/projected/8fd8d9a3-75d9-4254-a06e-cd07f555d058-kube-api-access-zph6s\") pod \"coredns-76f75df574-sm4xg\" (UID: \"8fd8d9a3-75d9-4254-a06e-cd07f555d058\") " pod="kube-system/coredns-76f75df574-sm4xg" Oct 9 01:09:50.040552 kubelet[2753]: I1009 01:09:50.040517 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4b14037-32ba-4566-b2be-c2127ceb8d7c-config-volume\") pod \"coredns-76f75df574-dvlf5\" (UID: \"f4b14037-32ba-4566-b2be-c2127ceb8d7c\") " pod="kube-system/coredns-76f75df574-dvlf5" Oct 9 01:09:50.040552 kubelet[2753]: I1009 01:09:50.040548 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzffb\" (UniqueName: \"kubernetes.io/projected/f4b14037-32ba-4566-b2be-c2127ceb8d7c-kube-api-access-pzffb\") pod \"coredns-76f75df574-dvlf5\" (UID: 
\"f4b14037-32ba-4566-b2be-c2127ceb8d7c\") " pod="kube-system/coredns-76f75df574-dvlf5" Oct 9 01:09:50.040597 kubelet[2753]: I1009 01:09:50.040577 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0511c8d-9ec1-40e7-adb4-51ba165ea4f7-tigera-ca-bundle\") pod \"calico-kube-controllers-587766768d-6csts\" (UID: \"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7\") " pod="calico-system/calico-kube-controllers-587766768d-6csts" Oct 9 01:09:50.149610 containerd[1498]: time="2024-10-09T01:09:50.149509442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:09:50.259998 containerd[1498]: time="2024-10-09T01:09:50.259843989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587766768d-6csts,Uid:f0511c8d-9ec1-40e7-adb4-51ba165ea4f7,Namespace:calico-system,Attempt:0,}" Oct 9 01:09:50.429860 containerd[1498]: time="2024-10-09T01:09:50.429689076Z" level=error msg="Failed to destroy network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:50.433057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636-shm.mount: Deactivated successfully. 
Oct 9 01:09:50.435076 containerd[1498]: time="2024-10-09T01:09:50.434832033Z" level=error msg="encountered an error cleaning up failed sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:50.435076 containerd[1498]: time="2024-10-09T01:09:50.434888160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587766768d-6csts,Uid:f0511c8d-9ec1-40e7-adb4-51ba165ea4f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:50.435838 kubelet[2753]: E1009 01:09:50.435566 2753 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:50.435838 kubelet[2753]: E1009 01:09:50.435806 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-587766768d-6csts" Oct 9 01:09:50.435838 kubelet[2753]: 
E1009 01:09:50.435825 2753 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-587766768d-6csts" Oct 9 01:09:50.436823 kubelet[2753]: E1009 01:09:50.435879 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-587766768d-6csts_calico-system(f0511c8d-9ec1-40e7-adb4-51ba165ea4f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-587766768d-6csts_calico-system(f0511c8d-9ec1-40e7-adb4-51ba165ea4f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-587766768d-6csts" podUID="f0511c8d-9ec1-40e7-adb4-51ba165ea4f7" Oct 9 01:09:51.064129 systemd[1]: Created slice kubepods-besteffort-pod36a2e714_58d6_4153_a262_4f8ad2d40b26.slice - libcontainer container kubepods-besteffort-pod36a2e714_58d6_4153_a262_4f8ad2d40b26.slice. 
Oct 9 01:09:51.070811 containerd[1498]: time="2024-10-09T01:09:51.070066420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gmbj,Uid:36a2e714-58d6-4153-a262-4f8ad2d40b26,Namespace:calico-system,Attempt:0,}" Oct 9 01:09:51.136763 containerd[1498]: time="2024-10-09T01:09:51.136700362Z" level=error msg="Failed to destroy network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.137329 containerd[1498]: time="2024-10-09T01:09:51.137119864Z" level=error msg="encountered an error cleaning up failed sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.137329 containerd[1498]: time="2024-10-09T01:09:51.137168607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gmbj,Uid:36a2e714-58d6-4153-a262-4f8ad2d40b26,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.139485 kubelet[2753]: E1009 01:09:51.139145 2753 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.139485 kubelet[2753]: E1009 01:09:51.139196 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gmbj" Oct 9 01:09:51.139485 kubelet[2753]: E1009 01:09:51.139215 2753 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gmbj" Oct 9 01:09:51.139322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e-shm.mount: Deactivated successfully. 
Oct 9 01:09:51.140480 kubelet[2753]: E1009 01:09:51.139268 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gmbj_calico-system(36a2e714-58d6-4153-a262-4f8ad2d40b26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gmbj_calico-system(36a2e714-58d6-4153-a262-4f8ad2d40b26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gmbj" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" Oct 9 01:09:51.141503 kubelet[2753]: E1009 01:09:51.141415 2753 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 9 01:09:51.141503 kubelet[2753]: E1009 01:09:51.141485 2753 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b14037-32ba-4566-b2be-c2127ceb8d7c-config-volume podName:f4b14037-32ba-4566-b2be-c2127ceb8d7c nodeName:}" failed. No retries permitted until 2024-10-09 01:09:51.641467809 +0000 UTC m=+28.686130381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f4b14037-32ba-4566-b2be-c2127ceb8d7c-config-volume") pod "coredns-76f75df574-dvlf5" (UID: "f4b14037-32ba-4566-b2be-c2127ceb8d7c") : failed to sync configmap cache: timed out waiting for the condition Oct 9 01:09:51.142746 kubelet[2753]: E1009 01:09:51.142713 2753 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 9 01:09:51.142883 kubelet[2753]: E1009 01:09:51.142762 2753 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8fd8d9a3-75d9-4254-a06e-cd07f555d058-config-volume podName:8fd8d9a3-75d9-4254-a06e-cd07f555d058 nodeName:}" failed. No retries permitted until 2024-10-09 01:09:51.642749929 +0000 UTC m=+28.687412500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8fd8d9a3-75d9-4254-a06e-cd07f555d058-config-volume") pod "coredns-76f75df574-sm4xg" (UID: "8fd8d9a3-75d9-4254-a06e-cd07f555d058") : failed to sync configmap cache: timed out waiting for the condition Oct 9 01:09:51.150168 kubelet[2753]: I1009 01:09:51.150112 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:09:51.150810 containerd[1498]: time="2024-10-09T01:09:51.150764678Z" level=info msg="StopPodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\"" Oct 9 01:09:51.152473 containerd[1498]: time="2024-10-09T01:09:51.152421243Z" level=info msg="StopPodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\"" Oct 9 01:09:51.153142 kubelet[2753]: I1009 01:09:51.151974 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:09:51.157158 containerd[1498]: 
time="2024-10-09T01:09:51.156876041Z" level=info msg="Ensure that sandbox 006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e in task-service has been cleanup successfully" Oct 9 01:09:51.157835 containerd[1498]: time="2024-10-09T01:09:51.157722188Z" level=info msg="Ensure that sandbox 3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636 in task-service has been cleanup successfully" Oct 9 01:09:51.192516 containerd[1498]: time="2024-10-09T01:09:51.192429695Z" level=error msg="StopPodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" failed" error="failed to destroy network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.193507 kubelet[2753]: E1009 01:09:51.193192 2753 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:09:51.193507 kubelet[2753]: E1009 01:09:51.193282 2753 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e"} Oct 9 01:09:51.193507 kubelet[2753]: E1009 01:09:51.193457 2753 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36a2e714-58d6-4153-a262-4f8ad2d40b26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:09:51.193680 containerd[1498]: time="2024-10-09T01:09:51.193590974Z" level=error msg="StopPodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" failed" error="failed to destroy network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.194092 kubelet[2753]: E1009 01:09:51.193772 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36a2e714-58d6-4153-a262-4f8ad2d40b26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gmbj" podUID="36a2e714-58d6-4153-a262-4f8ad2d40b26" Oct 9 01:09:51.194092 kubelet[2753]: E1009 01:09:51.193841 2753 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:09:51.194092 kubelet[2753]: E1009 01:09:51.193893 2753 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636"} Oct 9 01:09:51.194092 kubelet[2753]: E1009 01:09:51.193946 2753 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:09:51.194280 kubelet[2753]: E1009 01:09:51.193989 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-587766768d-6csts" podUID="f0511c8d-9ec1-40e7-adb4-51ba165ea4f7" Oct 9 01:09:51.741371 containerd[1498]: time="2024-10-09T01:09:51.741165500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sm4xg,Uid:8fd8d9a3-75d9-4254-a06e-cd07f555d058,Namespace:kube-system,Attempt:0,}" Oct 9 01:09:51.751565 containerd[1498]: time="2024-10-09T01:09:51.751309306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dvlf5,Uid:f4b14037-32ba-4566-b2be-c2127ceb8d7c,Namespace:kube-system,Attempt:0,}" Oct 9 01:09:51.855071 containerd[1498]: time="2024-10-09T01:09:51.854748454Z" level=error msg="Failed to destroy network for sandbox 
\"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.856253 containerd[1498]: time="2024-10-09T01:09:51.855875586Z" level=error msg="encountered an error cleaning up failed sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.856253 containerd[1498]: time="2024-10-09T01:09:51.855921304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sm4xg,Uid:8fd8d9a3-75d9-4254-a06e-cd07f555d058,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.857706 kubelet[2753]: E1009 01:09:51.857387 2753 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:09:51.857706 kubelet[2753]: E1009 01:09:51.857440 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sm4xg" Oct 9 01:09:51.857706 kubelet[2753]: E1009 01:09:51.857459 2753 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sm4xg" Oct 9 01:09:51.859082 kubelet[2753]: E1009 01:09:51.857514 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sm4xg_kube-system(8fd8d9a3-75d9-4254-a06e-cd07f555d058)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sm4xg_kube-system(8fd8d9a3-75d9-4254-a06e-cd07f555d058)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sm4xg" podUID="8fd8d9a3-75d9-4254-a06e-cd07f555d058" Oct 9 01:09:51.859920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7-shm.mount: Deactivated successfully. 
Oct 9 01:09:51.890570 containerd[1498]: time="2024-10-09T01:09:51.890527568Z" level=error msg="Failed to destroy network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:09:51.891614 containerd[1498]: time="2024-10-09T01:09:51.891003397Z" level=error msg="encountered an error cleaning up failed sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:09:51.891614 containerd[1498]: time="2024-10-09T01:09:51.891067801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dvlf5,Uid:f4b14037-32ba-4566-b2be-c2127ceb8d7c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:09:51.892185 kubelet[2753]: E1009 01:09:51.891305 2753 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:09:51.892185 kubelet[2753]: E1009 01:09:51.891347 2753 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dvlf5"
Oct 9 01:09:51.892185 kubelet[2753]: E1009 01:09:51.891365 2753 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dvlf5"
Oct 9 01:09:51.892269 kubelet[2753]: E1009 01:09:51.891415 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dvlf5_kube-system(f4b14037-32ba-4566-b2be-c2127ceb8d7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dvlf5_kube-system(f4b14037-32ba-4566-b2be-c2127ceb8d7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dvlf5" podUID="f4b14037-32ba-4566-b2be-c2127ceb8d7c"
Oct 9 01:09:52.158038 kubelet[2753]: I1009 01:09:52.157814 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"
Oct 9 01:09:52.159870 containerd[1498]: time="2024-10-09T01:09:52.159159310Z" level=info msg="StopPodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\""
Oct 9 01:09:52.159870 containerd[1498]: time="2024-10-09T01:09:52.159352297Z" level=info msg="Ensure that sandbox ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7 in task-service has been cleanup successfully"
Oct 9 01:09:52.162883 kubelet[2753]: I1009 01:09:52.162815 2753 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15"
Oct 9 01:09:52.164342 containerd[1498]: time="2024-10-09T01:09:52.164186840Z" level=info msg="StopPodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\""
Oct 9 01:09:52.165058 containerd[1498]: time="2024-10-09T01:09:52.164837202Z" level=info msg="Ensure that sandbox 289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15 in task-service has been cleanup successfully"
Oct 9 01:09:52.231200 containerd[1498]: time="2024-10-09T01:09:52.231137090Z" level=error msg="StopPodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" failed" error="failed to destroy network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:09:52.232162 kubelet[2753]: E1009 01:09:52.231994 2753 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"
Oct 9 01:09:52.232342 kubelet[2753]: E1009 01:09:52.232255 2753 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"}
Oct 9 01:09:52.232342 kubelet[2753]: E1009 01:09:52.232321 2753 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fd8d9a3-75d9-4254-a06e-cd07f555d058\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 01:09:52.232456 kubelet[2753]: E1009 01:09:52.232361 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fd8d9a3-75d9-4254-a06e-cd07f555d058\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sm4xg" podUID="8fd8d9a3-75d9-4254-a06e-cd07f555d058"
Oct 9 01:09:52.262610 containerd[1498]: time="2024-10-09T01:09:52.262344991Z" level=error msg="StopPodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" failed" error="failed to destroy network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 01:09:52.263218 kubelet[2753]: E1009 01:09:52.263188 2753 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15"
Oct 9 01:09:52.263296 kubelet[2753]: E1009 01:09:52.263233 2753 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15"}
Oct 9 01:09:52.263296 kubelet[2753]: E1009 01:09:52.263266 2753 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4b14037-32ba-4566-b2be-c2127ceb8d7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 01:09:52.263296 kubelet[2753]: E1009 01:09:52.263293 2753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4b14037-32ba-4566-b2be-c2127ceb8d7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dvlf5" podUID="f4b14037-32ba-4566-b2be-c2127ceb8d7c"
Oct 9 01:09:52.686078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15-shm.mount: Deactivated successfully.
Oct 9 01:09:54.353686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095004591.mount: Deactivated successfully.
Oct 9 01:09:54.412069 containerd[1498]: time="2024-10-09T01:09:54.411284940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564"
Oct 9 01:09:54.412069 containerd[1498]: time="2024-10-09T01:09:54.409065694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:54.412994 containerd[1498]: time="2024-10-09T01:09:54.412972011Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:54.413622 containerd[1498]: time="2024-10-09T01:09:54.413593979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.263802987s"
Oct 9 01:09:54.413672 containerd[1498]: time="2024-10-09T01:09:54.413622624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\""
Oct 9 01:09:54.414382 containerd[1498]: time="2024-10-09T01:09:54.414357526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:09:54.472822 containerd[1498]: time="2024-10-09T01:09:54.472780057Z" level=info msg="CreateContainer within sandbox \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Oct 9 01:09:54.513753 containerd[1498]: time="2024-10-09T01:09:54.513695095Z" level=info msg="CreateContainer within sandbox \"d20ef7227fa2b52d55575fdb410aa5c593b0e77f8c627ad056cf91d644472ec7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66\""
Oct 9 01:09:54.514418 containerd[1498]: time="2024-10-09T01:09:54.514390934Z" level=info msg="StartContainer for \"e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66\""
Oct 9 01:09:54.567189 systemd[1]: Started cri-containerd-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66.scope - libcontainer container e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66.
Oct 9 01:09:54.603710 containerd[1498]: time="2024-10-09T01:09:54.603648159Z" level=info msg="StartContainer for \"e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66\" returns successfully"
Oct 9 01:09:54.679501 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Oct 9 01:09:54.681245 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Oct 9 01:09:56.189105 kubelet[2753]: I1009 01:09:56.188628 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 01:10:02.058686 containerd[1498]: time="2024-10-09T01:10:02.058494193Z" level=info msg="StopPodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\""
Oct 9 01:10:02.128229 kubelet[2753]: I1009 01:10:02.128055 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-6tp5q" podStartSLOduration=8.187625289 podStartE2EDuration="19.120364152s" podCreationTimestamp="2024-10-09 01:09:43 +0000 UTC" firstStartedPulling="2024-10-09 01:09:43.481205464 +0000 UTC m=+20.525868036" lastFinishedPulling="2024-10-09 01:09:54.413944287 +0000 UTC m=+31.458606899" observedRunningTime="2024-10-09 01:09:55.211652523 +0000 UTC m=+32.256315096" watchObservedRunningTime="2024-10-09 01:10:02.120364152 +0000 UTC m=+39.165026725"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.119 [INFO][3913] k8s.go 608: Cleaning up netns ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.120 [INFO][3913] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" iface="eth0" netns="/var/run/netns/cni-db4ecd6c-bccd-0057-5405-ddeee8bc8473"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.121 [INFO][3913] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" iface="eth0" netns="/var/run/netns/cni-db4ecd6c-bccd-0057-5405-ddeee8bc8473"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.121 [INFO][3913] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" iface="eth0" netns="/var/run/netns/cni-db4ecd6c-bccd-0057-5405-ddeee8bc8473"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.121 [INFO][3913] k8s.go 615: Releasing IP address(es) ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.121 [INFO][3913] utils.go 188: Calico CNI releasing IP address ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.245 [INFO][3919] ipam_plugin.go 417: Releasing address using handleID ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.245 [INFO][3919] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.246 [INFO][3919] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.254 [WARNING][3919] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.254 [INFO][3919] ipam_plugin.go 445: Releasing address using workloadID ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.256 [INFO][3919] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:10:02.260293 containerd[1498]: 2024-10-09 01:10:02.257 [INFO][3913] k8s.go 621: Teardown processing complete. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e"
Oct 9 01:10:02.263452 containerd[1498]: time="2024-10-09T01:10:02.262097135Z" level=info msg="TearDown network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" successfully"
Oct 9 01:10:02.263452 containerd[1498]: time="2024-10-09T01:10:02.262120910Z" level=info msg="StopPodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" returns successfully"
Oct 9 01:10:02.263711 systemd[1]: run-netns-cni\x2ddb4ecd6c\x2dbccd\x2d0057\x2d5405\x2dddeee8bc8473.mount: Deactivated successfully.
Oct 9 01:10:02.297849 containerd[1498]: time="2024-10-09T01:10:02.297797913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gmbj,Uid:36a2e714-58d6-4153-a262-4f8ad2d40b26,Namespace:calico-system,Attempt:1,}"
Oct 9 01:10:02.453332 systemd-networkd[1389]: cali0480d91811b: Link UP
Oct 9 01:10:02.453520 systemd-networkd[1389]: cali0480d91811b: Gained carrier
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.334 [INFO][3927] utils.go 100: File /var/lib/calico/mtu does not exist
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.344 [INFO][3927] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0 csi-node-driver- calico-system 36a2e714-58d6-4153-a262-4f8ad2d40b26 689 0 2024-10-09 01:09:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4116-0-0-2-50096a0261 csi-node-driver-9gmbj eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0480d91811b [] []}} ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.344 [INFO][3927] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.374 [INFO][3938] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" HandleID="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.383 [INFO][3938] ipam_plugin.go 270: Auto assigning IP ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" HandleID="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003182f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-2-50096a0261", "pod":"csi-node-driver-9gmbj", "timestamp":"2024-10-09 01:10:02.374715572 +0000 UTC"}, Hostname:"ci-4116-0-0-2-50096a0261", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.383 [INFO][3938] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.383 [INFO][3938] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.383 [INFO][3938] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-2-50096a0261'
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.385 [INFO][3938] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.391 [INFO][3938] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.396 [INFO][3938] ipam.go 489: Trying affinity for 192.168.44.0/26 host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.399 [INFO][3938] ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.404 [INFO][3938] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.404 [INFO][3938] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.406 [INFO][3938] ipam.go 1685: Creating new handle: k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.416 [INFO][3938] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.431 [INFO][3938] ipam.go 1216: Successfully claimed IPs: [192.168.44.1/26] block=192.168.44.0/26 handle="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.431 [INFO][3938] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.1/26] handle="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" host="ci-4116-0-0-2-50096a0261"
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.431 [INFO][3938] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:10:02.493184 containerd[1498]: 2024-10-09 01:10:02.431 [INFO][3938] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.1/26] IPv6=[] ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" HandleID="k8s-pod-network.78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.493959 containerd[1498]: 2024-10-09 01:10:02.437 [INFO][3927] k8s.go 386: Populated endpoint ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a2e714-58d6-4153-a262-4f8ad2d40b26", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"", Pod:"csi-node-driver-9gmbj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0480d91811b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:10:02.493959 containerd[1498]: 2024-10-09 01:10:02.438 [INFO][3927] k8s.go 387: Calico CNI using IPs: [192.168.44.1/32] ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.493959 containerd[1498]: 2024-10-09 01:10:02.438 [INFO][3927] dataplane_linux.go 68: Setting the host side veth name to cali0480d91811b ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.493959 containerd[1498]: 2024-10-09 01:10:02.454 [INFO][3927] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.493959 containerd[1498]: 2024-10-09 01:10:02.456 [INFO][3927] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a2e714-58d6-4153-a262-4f8ad2d40b26", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d", Pod:"csi-node-driver-9gmbj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0480d91811b", MAC:"1a:3a:29:54:8f:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:10:02.493959 containerd[1498]: 2024-10-09 01:10:02.490 [INFO][3927] k8s.go 500: Wrote updated endpoint to datastore ContainerID="78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d" Namespace="calico-system" Pod="csi-node-driver-9gmbj" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0"
Oct 9 01:10:02.565319 containerd[1498]: time="2024-10-09T01:10:02.561599134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:10:02.565319 containerd[1498]: time="2024-10-09T01:10:02.561661581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:10:02.565319 containerd[1498]: time="2024-10-09T01:10:02.561674495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:02.565319 containerd[1498]: time="2024-10-09T01:10:02.562068846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:10:02.617437 systemd[1]: Started cri-containerd-78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d.scope - libcontainer container 78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d.
Oct 9 01:10:02.671221 containerd[1498]: time="2024-10-09T01:10:02.671175668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gmbj,Uid:36a2e714-58d6-4153-a262-4f8ad2d40b26,Namespace:calico-system,Attempt:1,} returns sandbox id \"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d\""
Oct 9 01:10:02.692249 containerd[1498]: time="2024-10-09T01:10:02.692214992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\""
Oct 9 01:10:04.361161 systemd-networkd[1389]: cali0480d91811b: Gained IPv6LL
Oct 9 01:10:04.508705 containerd[1498]: time="2024-10-09T01:10:04.508643561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:10:04.509711 containerd[1498]: time="2024-10-09T01:10:04.509609456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Oct 9 01:10:04.510452 containerd[1498]: time="2024-10-09T01:10:04.510406807Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:10:04.512198 containerd[1498]: time="2024-10-09T01:10:04.512160516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:10:04.513047 containerd[1498]: time="2024-10-09T01:10:04.512576919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.820171242s"
Oct 9 01:10:04.513047 containerd[1498]: time="2024-10-09T01:10:04.512602486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 9 01:10:04.514552 containerd[1498]: time="2024-10-09T01:10:04.514257221Z" level=info msg="CreateContainer within sandbox \"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 9 01:10:04.547405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396000859.mount: Deactivated successfully.
Oct 9 01:10:04.548667 containerd[1498]: time="2024-10-09T01:10:04.548624734Z" level=info msg="CreateContainer within sandbox \"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"559ce8bc1bf9a0f743b7f34022a7b187d0ff2bf2e87fe84fc9693059fbbc6785\""
Oct 9 01:10:04.550913 containerd[1498]: time="2024-10-09T01:10:04.550299606Z" level=info msg="StartContainer for \"559ce8bc1bf9a0f743b7f34022a7b187d0ff2bf2e87fe84fc9693059fbbc6785\""
Oct 9 01:10:04.590298 systemd[1]: Started cri-containerd-559ce8bc1bf9a0f743b7f34022a7b187d0ff2bf2e87fe84fc9693059fbbc6785.scope - libcontainer container 559ce8bc1bf9a0f743b7f34022a7b187d0ff2bf2e87fe84fc9693059fbbc6785.
Oct 9 01:10:04.622731 containerd[1498]: time="2024-10-09T01:10:04.622468485Z" level=info msg="StartContainer for \"559ce8bc1bf9a0f743b7f34022a7b187d0ff2bf2e87fe84fc9693059fbbc6785\" returns successfully"
Oct 9 01:10:04.624305 containerd[1498]: time="2024-10-09T01:10:04.624281995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 9 01:10:04.962253 kubelet[2753]: I1009 01:10:04.962200 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 01:10:05.061983 containerd[1498]: time="2024-10-09T01:10:05.060585262Z" level=info msg="StopPodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\""
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.108 [INFO][4136] k8s.go 608: Cleaning up netns ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.108 [INFO][4136] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" iface="eth0" netns="/var/run/netns/cni-0e997602-29fa-b30a-ac75-2f4179ee2cb1"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.108 [INFO][4136] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" iface="eth0" netns="/var/run/netns/cni-0e997602-29fa-b30a-ac75-2f4179ee2cb1"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.109 [INFO][4136] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" iface="eth0" netns="/var/run/netns/cni-0e997602-29fa-b30a-ac75-2f4179ee2cb1"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.109 [INFO][4136] k8s.go 615: Releasing IP address(es) ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.109 [INFO][4136] utils.go 188: Calico CNI releasing IP address ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.138 [INFO][4147] ipam_plugin.go 417: Releasing address using handleID ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.138 [INFO][4147] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.138 [INFO][4147] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.144 [WARNING][4147] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.144 [INFO][4147] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0"
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.145 [INFO][4147] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:10:05.150888 containerd[1498]: 2024-10-09 01:10:05.148 [INFO][4136] k8s.go 621: Teardown processing complete. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7"
Oct 9 01:10:05.151353 containerd[1498]: time="2024-10-09T01:10:05.151087397Z" level=info msg="TearDown network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" successfully"
Oct 9 01:10:05.151353 containerd[1498]: time="2024-10-09T01:10:05.151109969Z" level=info msg="StopPodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" returns successfully"
Oct 9 01:10:05.152133 containerd[1498]: time="2024-10-09T01:10:05.151625016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sm4xg,Uid:8fd8d9a3-75d9-4254-a06e-cd07f555d058,Namespace:kube-system,Attempt:1,}"
Oct 9 01:10:05.264161 systemd-networkd[1389]: cali65453058549: Link UP
Oct 9 01:10:05.264432 systemd-networkd[1389]: cali65453058549: Gained carrier
Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.183 [INFO][4175] utils.go 100: File /var/lib/calico/mtu does not exist
Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.193 [INFO][4175] plugin.go 326: Calico CNI found
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0 coredns-76f75df574- kube-system 8fd8d9a3-75d9-4254-a06e-cd07f555d058 705 0 2024-10-09 01:09:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-2-50096a0261 coredns-76f75df574-sm4xg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali65453058549 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.193 [INFO][4175] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.219 [INFO][4187] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" HandleID="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.226 [INFO][4187] ipam_plugin.go 270: Auto assigning IP ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" HandleID="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003183f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-2-50096a0261", "pod":"coredns-76f75df574-sm4xg", "timestamp":"2024-10-09 01:10:05.219420394 +0000 UTC"}, Hostname:"ci-4116-0-0-2-50096a0261", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.226 [INFO][4187] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.226 [INFO][4187] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.226 [INFO][4187] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-2-50096a0261' Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.227 [INFO][4187] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.231 [INFO][4187] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.235 [INFO][4187] ipam.go 489: Trying affinity for 192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.236 [INFO][4187] ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.239 [INFO][4187] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.239 [INFO][4187] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 
handle="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.242 [INFO][4187] ipam.go 1685: Creating new handle: k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.248 [INFO][4187] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.253 [INFO][4187] ipam.go 1216: Successfully claimed IPs: [192.168.44.2/26] block=192.168.44.0/26 handle="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.253 [INFO][4187] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.2/26] handle="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.253 [INFO][4187] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:10:05.274097 containerd[1498]: 2024-10-09 01:10:05.253 [INFO][4187] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.2/26] IPv6=[] ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" HandleID="k8s-pod-network.47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.276997 containerd[1498]: 2024-10-09 01:10:05.256 [INFO][4175] k8s.go 386: Populated endpoint ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fd8d9a3-75d9-4254-a06e-cd07f555d058", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"", Pod:"coredns-76f75df574-sm4xg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65453058549", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:05.276997 containerd[1498]: 2024-10-09 01:10:05.257 [INFO][4175] k8s.go 387: Calico CNI using IPs: [192.168.44.2/32] ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.276997 containerd[1498]: 2024-10-09 01:10:05.257 [INFO][4175] dataplane_linux.go 68: Setting the host side veth name to cali65453058549 ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.276997 containerd[1498]: 2024-10-09 01:10:05.261 [INFO][4175] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.276997 containerd[1498]: 2024-10-09 01:10:05.261 [INFO][4175] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fd8d9a3-75d9-4254-a06e-cd07f555d058", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd", Pod:"coredns-76f75df574-sm4xg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65453058549", MAC:"9a:13:6f:38:f8:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:05.276997 containerd[1498]: 2024-10-09 01:10:05.269 [INFO][4175] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd" Namespace="kube-system" Pod="coredns-76f75df574-sm4xg" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:05.294567 containerd[1498]: time="2024-10-09T01:10:05.294254519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:05.294567 containerd[1498]: time="2024-10-09T01:10:05.294300975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:05.294567 containerd[1498]: time="2024-10-09T01:10:05.294365344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:05.294567 containerd[1498]: time="2024-10-09T01:10:05.294452367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:05.315269 systemd[1]: Started cri-containerd-47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd.scope - libcontainer container 47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd. 
Oct 9 01:10:05.360927 containerd[1498]: time="2024-10-09T01:10:05.360881182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sm4xg,Uid:8fd8d9a3-75d9-4254-a06e-cd07f555d058,Namespace:kube-system,Attempt:1,} returns sandbox id \"47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd\"" Oct 9 01:10:05.366912 containerd[1498]: time="2024-10-09T01:10:05.365759603Z" level=info msg="CreateContainer within sandbox \"47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:10:05.379069 containerd[1498]: time="2024-10-09T01:10:05.379006843Z" level=info msg="CreateContainer within sandbox \"47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9bdb4db6a224fa8b0e64255f8eb891ff7a55ea1cfd75c9bdbb1fa169d4fbed9\"" Oct 9 01:10:05.379603 containerd[1498]: time="2024-10-09T01:10:05.379449607Z" level=info msg="StartContainer for \"d9bdb4db6a224fa8b0e64255f8eb891ff7a55ea1cfd75c9bdbb1fa169d4fbed9\"" Oct 9 01:10:05.409147 systemd[1]: Started cri-containerd-d9bdb4db6a224fa8b0e64255f8eb891ff7a55ea1cfd75c9bdbb1fa169d4fbed9.scope - libcontainer container d9bdb4db6a224fa8b0e64255f8eb891ff7a55ea1cfd75c9bdbb1fa169d4fbed9. Oct 9 01:10:05.437917 containerd[1498]: time="2024-10-09T01:10:05.437869933Z" level=info msg="StartContainer for \"d9bdb4db6a224fa8b0e64255f8eb891ff7a55ea1cfd75c9bdbb1fa169d4fbed9\" returns successfully" Oct 9 01:10:05.545439 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.YeHC0s.mount: Deactivated successfully. Oct 9 01:10:05.546198 systemd[1]: run-netns-cni\x2d0e997602\x2d29fa\x2db30a\x2dac75\x2d2f4179ee2cb1.mount: Deactivated successfully. 
Oct 9 01:10:06.058075 containerd[1498]: time="2024-10-09T01:10:06.057811260Z" level=info msg="StopPodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\"" Oct 9 01:10:06.069479 containerd[1498]: time="2024-10-09T01:10:06.069224683Z" level=info msg="StopPodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\"" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.113 [INFO][4323] k8s.go 608: Cleaning up netns ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.114 [INFO][4323] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" iface="eth0" netns="/var/run/netns/cni-b2800c53-fc49-08a6-6d62-9d903c30e8da" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.114 [INFO][4323] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" iface="eth0" netns="/var/run/netns/cni-b2800c53-fc49-08a6-6d62-9d903c30e8da" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.115 [INFO][4323] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" iface="eth0" netns="/var/run/netns/cni-b2800c53-fc49-08a6-6d62-9d903c30e8da" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.115 [INFO][4323] k8s.go 615: Releasing IP address(es) ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.115 [INFO][4323] utils.go 188: Calico CNI releasing IP address ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.147 [INFO][4345] ipam_plugin.go 417: Releasing address using handleID ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.147 [INFO][4345] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.147 [INFO][4345] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.152 [WARNING][4345] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.152 [INFO][4345] ipam_plugin.go 445: Releasing address using workloadID ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.153 [INFO][4345] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:06.160078 containerd[1498]: 2024-10-09 01:10:06.156 [INFO][4323] k8s.go 621: Teardown processing complete. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:06.163793 containerd[1498]: time="2024-10-09T01:10:06.162761924Z" level=info msg="TearDown network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" successfully" Oct 9 01:10:06.163793 containerd[1498]: time="2024-10-09T01:10:06.162789546Z" level=info msg="StopPodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" returns successfully" Oct 9 01:10:06.163793 containerd[1498]: time="2024-10-09T01:10:06.163780950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dvlf5,Uid:f4b14037-32ba-4566-b2be-c2127ceb8d7c,Namespace:kube-system,Attempt:1,}" Oct 9 01:10:06.166954 systemd[1]: run-netns-cni\x2db2800c53\x2dfc49\x2d08a6\x2d6d62\x2d9d903c30e8da.mount: Deactivated successfully. 
Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.126 [INFO][4336] k8s.go 608: Cleaning up netns ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.127 [INFO][4336] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" iface="eth0" netns="/var/run/netns/cni-f52acf91-7d79-a331-5319-c22dbc6c3e78" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.127 [INFO][4336] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" iface="eth0" netns="/var/run/netns/cni-f52acf91-7d79-a331-5319-c22dbc6c3e78" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.127 [INFO][4336] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" iface="eth0" netns="/var/run/netns/cni-f52acf91-7d79-a331-5319-c22dbc6c3e78" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.127 [INFO][4336] k8s.go 615: Releasing IP address(es) ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.127 [INFO][4336] utils.go 188: Calico CNI releasing IP address ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.149 [INFO][4349] ipam_plugin.go 417: Releasing address using handleID ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.149 [INFO][4349] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.153 [INFO][4349] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.161 [WARNING][4349] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.161 [INFO][4349] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.170 [INFO][4349] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:06.186073 containerd[1498]: 2024-10-09 01:10:06.176 [INFO][4336] k8s.go 621: Teardown processing complete. 
ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:06.188373 containerd[1498]: time="2024-10-09T01:10:06.186572142Z" level=info msg="TearDown network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" successfully" Oct 9 01:10:06.188373 containerd[1498]: time="2024-10-09T01:10:06.188078034Z" level=info msg="StopPodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" returns successfully" Oct 9 01:10:06.189791 containerd[1498]: time="2024-10-09T01:10:06.189188280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587766768d-6csts,Uid:f0511c8d-9ec1-40e7-adb4-51ba165ea4f7,Namespace:calico-system,Attempt:1,}" Oct 9 01:10:06.192386 systemd[1]: run-netns-cni\x2df52acf91\x2d7d79\x2da331\x2d5319\x2dc22dbc6c3e78.mount: Deactivated successfully. Oct 9 01:10:06.270914 kubelet[2753]: I1009 01:10:06.270884 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-sm4xg" podStartSLOduration=29.270847523 podStartE2EDuration="29.270847523s" podCreationTimestamp="2024-10-09 01:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:06.240991076 +0000 UTC m=+43.285653658" watchObservedRunningTime="2024-10-09 01:10:06.270847523 +0000 UTC m=+43.315510115" Oct 9 01:10:06.371593 systemd-networkd[1389]: calic750293b019: Link UP Oct 9 01:10:06.374871 systemd-networkd[1389]: calic750293b019: Gained carrier Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.220 [INFO][4359] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.243 [INFO][4359] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0 coredns-76f75df574- kube-system 
f4b14037-32ba-4566-b2be-c2127ceb8d7c 717 0 2024-10-09 01:09:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-2-50096a0261 coredns-76f75df574-dvlf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic750293b019 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.243 [INFO][4359] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.291 [INFO][4381] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" HandleID="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.307 [INFO][4381] ipam_plugin.go 270: Auto assigning IP ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" HandleID="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd380), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-2-50096a0261", "pod":"coredns-76f75df574-dvlf5", "timestamp":"2024-10-09 
01:10:06.291504554 +0000 UTC"}, Hostname:"ci-4116-0-0-2-50096a0261", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.308 [INFO][4381] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.308 [INFO][4381] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.308 [INFO][4381] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-2-50096a0261' Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.311 [INFO][4381] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.324 [INFO][4381] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.333 [INFO][4381] ipam.go 489: Trying affinity for 192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.335 [INFO][4381] ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.337 [INFO][4381] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.337 [INFO][4381] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.338 [INFO][4381] ipam.go 1685: Creating 
new handle: k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13 Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.341 [INFO][4381] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.348 [INFO][4381] ipam.go 1216: Successfully claimed IPs: [192.168.44.3/26] block=192.168.44.0/26 handle="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.348 [INFO][4381] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.3/26] handle="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.348 [INFO][4381] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:10:06.405483 containerd[1498]: 2024-10-09 01:10:06.361 [INFO][4381] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.3/26] IPv6=[] ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" HandleID="k8s-pod-network.5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.406058 containerd[1498]: 2024-10-09 01:10:06.366 [INFO][4359] k8s.go 386: Populated endpoint ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f4b14037-32ba-4566-b2be-c2127ceb8d7c", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"", Pod:"coredns-76f75df574-dvlf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic750293b019", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:06.406058 containerd[1498]: 2024-10-09 01:10:06.366 [INFO][4359] k8s.go 387: Calico CNI using IPs: [192.168.44.3/32] ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.406058 containerd[1498]: 2024-10-09 01:10:06.366 [INFO][4359] dataplane_linux.go 68: Setting the host side veth name to calic750293b019 ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.406058 containerd[1498]: 2024-10-09 01:10:06.376 [INFO][4359] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.406058 containerd[1498]: 2024-10-09 01:10:06.377 [INFO][4359] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f4b14037-32ba-4566-b2be-c2127ceb8d7c", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13", Pod:"coredns-76f75df574-dvlf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic750293b019", MAC:"32:7b:b8:2d:f5:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:06.406058 containerd[1498]: 2024-10-09 01:10:06.394 [INFO][4359] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13" Namespace="kube-system" Pod="coredns-76f75df574-dvlf5" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:06.459856 systemd-networkd[1389]: calib9819ff6c77: Link UP Oct 9 01:10:06.460408 systemd-networkd[1389]: calib9819ff6c77: Gained carrier Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.253 [INFO][4368] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.287 [INFO][4368] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0 calico-kube-controllers-587766768d- calico-system f0511c8d-9ec1-40e7-adb4-51ba165ea4f7 718 0 2024-10-09 01:09:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:587766768d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4116-0-0-2-50096a0261 calico-kube-controllers-587766768d-6csts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib9819ff6c77 [] []}} ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.287 [INFO][4368] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.479340 
containerd[1498]: 2024-10-09 01:10:06.407 [INFO][4394] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" HandleID="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.417 [INFO][4394] ipam_plugin.go 270: Auto assigning IP ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" HandleID="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004832e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-2-50096a0261", "pod":"calico-kube-controllers-587766768d-6csts", "timestamp":"2024-10-09 01:10:06.407799446 +0000 UTC"}, Hostname:"ci-4116-0-0-2-50096a0261", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.417 [INFO][4394] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.417 [INFO][4394] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.417 [INFO][4394] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-2-50096a0261' Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.419 [INFO][4394] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.424 [INFO][4394] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.429 [INFO][4394] ipam.go 489: Trying affinity for 192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.431 [INFO][4394] ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.435 [INFO][4394] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.435 [INFO][4394] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.437 [INFO][4394] ipam.go 1685: Creating new handle: k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.441 [INFO][4394] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.447 [INFO][4394] ipam.go 1216: Successfully claimed IPs: [192.168.44.4/26] block=192.168.44.0/26 
handle="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.448 [INFO][4394] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.4/26] handle="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.448 [INFO][4394] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:06.479340 containerd[1498]: 2024-10-09 01:10:06.448 [INFO][4394] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.4/26] IPv6=[] ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" HandleID="k8s-pod-network.554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.480765 containerd[1498]: 2024-10-09 01:10:06.451 [INFO][4368] k8s.go 386: Populated endpoint ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0", GenerateName:"calico-kube-controllers-587766768d-", Namespace:"calico-system", SelfLink:"", UID:"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587766768d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"", Pod:"calico-kube-controllers-587766768d-6csts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9819ff6c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:06.480765 containerd[1498]: 2024-10-09 01:10:06.452 [INFO][4368] k8s.go 387: Calico CNI using IPs: [192.168.44.4/32] ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.480765 containerd[1498]: 2024-10-09 01:10:06.453 [INFO][4368] dataplane_linux.go 68: Setting the host side veth name to calib9819ff6c77 ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.480765 containerd[1498]: 2024-10-09 01:10:06.460 [INFO][4368] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 
9 01:10:06.480765 containerd[1498]: 2024-10-09 01:10:06.461 [INFO][4368] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0", GenerateName:"calico-kube-controllers-587766768d-", Namespace:"calico-system", SelfLink:"", UID:"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587766768d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e", Pod:"calico-kube-controllers-587766768d-6csts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9819ff6c77", MAC:"46:bf:52:37:ed:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} 
Oct 9 01:10:06.480765 containerd[1498]: 2024-10-09 01:10:06.470 [INFO][4368] k8s.go 500: Wrote updated endpoint to datastore ContainerID="554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e" Namespace="calico-system" Pod="calico-kube-controllers-587766768d-6csts" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:06.487452 containerd[1498]: time="2024-10-09T01:10:06.485057450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:06.487452 containerd[1498]: time="2024-10-09T01:10:06.485105621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:06.487452 containerd[1498]: time="2024-10-09T01:10:06.485114907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:06.487452 containerd[1498]: time="2024-10-09T01:10:06.485191950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:06.515765 containerd[1498]: time="2024-10-09T01:10:06.515477435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:06.515765 containerd[1498]: time="2024-10-09T01:10:06.515519684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:06.515765 containerd[1498]: time="2024-10-09T01:10:06.515530855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:06.515765 containerd[1498]: time="2024-10-09T01:10:06.515613728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:06.538729 systemd[1]: Started cri-containerd-5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13.scope - libcontainer container 5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13. Oct 9 01:10:06.570862 systemd[1]: Started cri-containerd-554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e.scope - libcontainer container 554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e. Oct 9 01:10:06.612096 containerd[1498]: time="2024-10-09T01:10:06.611809937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dvlf5,Uid:f4b14037-32ba-4566-b2be-c2127ceb8d7c,Namespace:kube-system,Attempt:1,} returns sandbox id \"5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13\"" Oct 9 01:10:06.616762 containerd[1498]: time="2024-10-09T01:10:06.616597484Z" level=info msg="CreateContainer within sandbox \"5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:10:06.634493 containerd[1498]: time="2024-10-09T01:10:06.633870058Z" level=info msg="CreateContainer within sandbox \"5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3590d845e29c90536466fc8f872f697cc275adaa1666e636e7d005add112c765\"" Oct 9 01:10:06.635052 containerd[1498]: time="2024-10-09T01:10:06.634828462Z" level=info msg="StartContainer for \"3590d845e29c90536466fc8f872f697cc275adaa1666e636e7d005add112c765\"" Oct 9 01:10:06.657393 containerd[1498]: time="2024-10-09T01:10:06.657361434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587766768d-6csts,Uid:f0511c8d-9ec1-40e7-adb4-51ba165ea4f7,Namespace:calico-system,Attempt:1,} returns sandbox id \"554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e\"" Oct 9 01:10:06.665210 systemd-networkd[1389]: cali65453058549: 
Gained IPv6LL Oct 9 01:10:06.682483 containerd[1498]: time="2024-10-09T01:10:06.682444630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:06.683201 containerd[1498]: time="2024-10-09T01:10:06.683052200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 01:10:06.684194 containerd[1498]: time="2024-10-09T01:10:06.684159139Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:06.685609 systemd[1]: Started cri-containerd-3590d845e29c90536466fc8f872f697cc275adaa1666e636e7d005add112c765.scope - libcontainer container 3590d845e29c90536466fc8f872f697cc275adaa1666e636e7d005add112c765. Oct 9 01:10:06.687632 containerd[1498]: time="2024-10-09T01:10:06.687604730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:06.688210 containerd[1498]: time="2024-10-09T01:10:06.688149634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.063741024s" Oct 9 01:10:06.688210 containerd[1498]: time="2024-10-09T01:10:06.688184268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 01:10:06.690117 
containerd[1498]: time="2024-10-09T01:10:06.689739181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:10:06.693946 containerd[1498]: time="2024-10-09T01:10:06.693860578Z" level=info msg="CreateContainer within sandbox \"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:10:06.706232 containerd[1498]: time="2024-10-09T01:10:06.706189205Z" level=info msg="CreateContainer within sandbox \"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0f5b352ee47c8cfbf6835d4ee5b2fbf019edd611c70aaa84f690dcd2dedecef9\"" Oct 9 01:10:06.706920 containerd[1498]: time="2024-10-09T01:10:06.706803348Z" level=info msg="StartContainer for \"0f5b352ee47c8cfbf6835d4ee5b2fbf019edd611c70aaa84f690dcd2dedecef9\"" Oct 9 01:10:06.725276 containerd[1498]: time="2024-10-09T01:10:06.722954196Z" level=info msg="StartContainer for \"3590d845e29c90536466fc8f872f697cc275adaa1666e636e7d005add112c765\" returns successfully" Oct 9 01:10:06.743179 systemd[1]: Started cri-containerd-0f5b352ee47c8cfbf6835d4ee5b2fbf019edd611c70aaa84f690dcd2dedecef9.scope - libcontainer container 0f5b352ee47c8cfbf6835d4ee5b2fbf019edd611c70aaa84f690dcd2dedecef9. 
Oct 9 01:10:06.797831 containerd[1498]: time="2024-10-09T01:10:06.797790816Z" level=info msg="StartContainer for \"0f5b352ee47c8cfbf6835d4ee5b2fbf019edd611c70aaa84f690dcd2dedecef9\" returns successfully" Oct 9 01:10:07.229576 kubelet[2753]: I1009 01:10:07.228837 2753 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:10:07.242556 kubelet[2753]: I1009 01:10:07.242518 2753 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:10:07.255004 kubelet[2753]: I1009 01:10:07.254778 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dvlf5" podStartSLOduration=30.254534155 podStartE2EDuration="30.254534155s" podCreationTimestamp="2024-10-09 01:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:10:07.239494397 +0000 UTC m=+44.284156969" watchObservedRunningTime="2024-10-09 01:10:07.254534155 +0000 UTC m=+44.299196727" Oct 9 01:10:07.497264 systemd-networkd[1389]: calic750293b019: Gained IPv6LL Oct 9 01:10:08.330830 systemd-networkd[1389]: calib9819ff6c77: Gained IPv6LL Oct 9 01:10:09.409139 containerd[1498]: time="2024-10-09T01:10:09.408877187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:09.410304 containerd[1498]: time="2024-10-09T01:10:09.410216274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 01:10:09.412288 containerd[1498]: time="2024-10-09T01:10:09.412231519Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:09.413520 containerd[1498]: time="2024-10-09T01:10:09.413404797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:09.414243 containerd[1498]: time="2024-10-09T01:10:09.413956765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.724193209s" Oct 9 01:10:09.414243 containerd[1498]: time="2024-10-09T01:10:09.413998633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 01:10:09.432879 containerd[1498]: time="2024-10-09T01:10:09.432624320Z" level=info msg="CreateContainer within sandbox \"554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:10:09.444325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003400006.mount: Deactivated successfully. 
Oct 9 01:10:09.445839 containerd[1498]: time="2024-10-09T01:10:09.445817730Z" level=info msg="CreateContainer within sandbox \"554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d\"" Oct 9 01:10:09.446946 containerd[1498]: time="2024-10-09T01:10:09.446883716Z" level=info msg="StartContainer for \"c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d\"" Oct 9 01:10:09.476151 systemd[1]: Started cri-containerd-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d.scope - libcontainer container c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d. Oct 9 01:10:09.513295 containerd[1498]: time="2024-10-09T01:10:09.513253452Z" level=info msg="StartContainer for \"c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d\" returns successfully" Oct 9 01:10:10.259088 kubelet[2753]: I1009 01:10:10.258451 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9gmbj" podStartSLOduration=23.261762045 podStartE2EDuration="27.258398379s" podCreationTimestamp="2024-10-09 01:09:43 +0000 UTC" firstStartedPulling="2024-10-09 01:10:02.691807476 +0000 UTC m=+39.736470049" lastFinishedPulling="2024-10-09 01:10:06.688443811 +0000 UTC m=+43.733106383" observedRunningTime="2024-10-09 01:10:07.268424023 +0000 UTC m=+44.313086594" watchObservedRunningTime="2024-10-09 01:10:10.258398379 +0000 UTC m=+47.303060990" Oct 9 01:10:11.245537 kubelet[2753]: I1009 01:10:11.245495 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:10:11.770591 kubelet[2753]: I1009 01:10:11.770409 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:10:11.790558 kubelet[2753]: I1009 01:10:11.790511 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-587766768d-6csts" podStartSLOduration=26.040314618 podStartE2EDuration="28.790476531s" podCreationTimestamp="2024-10-09 01:09:43 +0000 UTC" firstStartedPulling="2024-10-09 01:10:06.664087868 +0000 UTC m=+43.708750440" lastFinishedPulling="2024-10-09 01:10:09.414249781 +0000 UTC m=+46.458912353" observedRunningTime="2024-10-09 01:10:10.262171025 +0000 UTC m=+47.306833627" watchObservedRunningTime="2024-10-09 01:10:11.790476531 +0000 UTC m=+48.835139113" Oct 9 01:10:12.567336 kernel: bpftool[4792]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:10:12.803790 systemd-networkd[1389]: vxlan.calico: Link UP Oct 9 01:10:12.803803 systemd-networkd[1389]: vxlan.calico: Gained carrier Oct 9 01:10:14.345209 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Oct 9 01:10:14.647569 kubelet[2753]: I1009 01:10:14.646953 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:10:14.668652 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.mLCPS0.mount: Deactivated successfully. Oct 9 01:10:23.051192 containerd[1498]: time="2024-10-09T01:10:23.051119816Z" level=info msg="StopPodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\"" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.126 [WARNING][4932] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fd8d9a3-75d9-4254-a06e-cd07f555d058", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd", Pod:"coredns-76f75df574-sm4xg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65453058549", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.126 [INFO][4932] k8s.go 608: 
Cleaning up netns ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.127 [INFO][4932] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" iface="eth0" netns="" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.127 [INFO][4932] k8s.go 615: Releasing IP address(es) ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.127 [INFO][4932] utils.go 188: Calico CNI releasing IP address ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.153 [INFO][4946] ipam_plugin.go 417: Releasing address using handleID ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.154 [INFO][4946] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.154 [INFO][4946] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.160 [WARNING][4946] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.160 [INFO][4946] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.161 [INFO][4946] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.166708 containerd[1498]: 2024-10-09 01:10:23.164 [INFO][4932] k8s.go 621: Teardown processing complete. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.166708 containerd[1498]: time="2024-10-09T01:10:23.166728008Z" level=info msg="TearDown network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" successfully" Oct 9 01:10:23.168001 containerd[1498]: time="2024-10-09T01:10:23.166746211Z" level=info msg="StopPodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" returns successfully" Oct 9 01:10:23.168001 containerd[1498]: time="2024-10-09T01:10:23.167311613Z" level=info msg="RemovePodSandbox for \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\"" Oct 9 01:10:23.172425 containerd[1498]: time="2024-10-09T01:10:23.172399368Z" level=info msg="Forcibly stopping sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\"" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.201 [WARNING][4966] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fd8d9a3-75d9-4254-a06e-cd07f555d058", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"47968c1eb61736c4c1da98b8ecd41a6198d013d19b0e90f31755ea630a3ed4fd", Pod:"coredns-76f75df574-sm4xg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65453058549", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.201 [INFO][4966] k8s.go 608: 
Cleaning up netns ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.201 [INFO][4966] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" iface="eth0" netns="" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.201 [INFO][4966] k8s.go 615: Releasing IP address(es) ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.201 [INFO][4966] utils.go 188: Calico CNI releasing IP address ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.220 [INFO][4977] ipam_plugin.go 417: Releasing address using handleID ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.221 [INFO][4977] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.221 [INFO][4977] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.225 [WARNING][4977] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.225 [INFO][4977] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" HandleID="k8s-pod-network.ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--sm4xg-eth0" Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.226 [INFO][4977] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.230146 containerd[1498]: 2024-10-09 01:10:23.228 [INFO][4966] k8s.go 621: Teardown processing complete. ContainerID="ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7" Oct 9 01:10:23.230673 containerd[1498]: time="2024-10-09T01:10:23.230640129Z" level=info msg="TearDown network for sandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" successfully" Oct 9 01:10:23.236001 containerd[1498]: time="2024-10-09T01:10:23.235967634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:10:23.236078 containerd[1498]: time="2024-10-09T01:10:23.236018800Z" level=info msg="RemovePodSandbox \"ea568378d6c2f8a6219cbca591a5d04939e1fd2bc2054dd87d8088f6d4e4def7\" returns successfully" Oct 9 01:10:23.236804 containerd[1498]: time="2024-10-09T01:10:23.236529789Z" level=info msg="StopPodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\"" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.263 [WARNING][4995] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a2e714-58d6-4153-a262-4f8ad2d40b26", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d", Pod:"csi-node-driver-9gmbj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0480d91811b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.264 [INFO][4995] k8s.go 608: Cleaning up netns ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.264 [INFO][4995] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" iface="eth0" netns="" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.264 [INFO][4995] k8s.go 615: Releasing IP address(es) ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.264 [INFO][4995] utils.go 188: Calico CNI releasing IP address ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.281 [INFO][5001] ipam_plugin.go 417: Releasing address using handleID ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.281 [INFO][5001] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.281 [INFO][5001] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.288 [WARNING][5001] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.288 [INFO][5001] ipam_plugin.go 445: Releasing address using workloadID ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.289 [INFO][5001] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.292898 containerd[1498]: 2024-10-09 01:10:23.291 [INFO][4995] k8s.go 621: Teardown processing complete. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.293326 containerd[1498]: time="2024-10-09T01:10:23.292931279Z" level=info msg="TearDown network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" successfully" Oct 9 01:10:23.293326 containerd[1498]: time="2024-10-09T01:10:23.292951016Z" level=info msg="StopPodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" returns successfully" Oct 9 01:10:23.293326 containerd[1498]: time="2024-10-09T01:10:23.293296785Z" level=info msg="RemovePodSandbox for \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\"" Oct 9 01:10:23.293326 containerd[1498]: time="2024-10-09T01:10:23.293316862Z" level=info msg="Forcibly stopping sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\"" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.324 [WARNING][5019] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a2e714-58d6-4153-a262-4f8ad2d40b26", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"78217c923aa722bd5ef6519c080dea1edafeba828f63b9ea74005145088b947d", Pod:"csi-node-driver-9gmbj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0480d91811b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.324 [INFO][5019] k8s.go 608: Cleaning up netns ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.324 [INFO][5019] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" iface="eth0" netns="" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.324 [INFO][5019] k8s.go 615: Releasing IP address(es) ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.324 [INFO][5019] utils.go 188: Calico CNI releasing IP address ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.341 [INFO][5025] ipam_plugin.go 417: Releasing address using handleID ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.342 [INFO][5025] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.343 [INFO][5025] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.349 [WARNING][5025] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.350 [INFO][5025] ipam_plugin.go 445: Releasing address using workloadID ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" HandleID="k8s-pod-network.006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Workload="ci--4116--0--0--2--50096a0261-k8s-csi--node--driver--9gmbj-eth0" Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.351 [INFO][5025] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.359239 containerd[1498]: 2024-10-09 01:10:23.356 [INFO][5019] k8s.go 621: Teardown processing complete. ContainerID="006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e" Oct 9 01:10:23.359239 containerd[1498]: time="2024-10-09T01:10:23.358353663Z" level=info msg="TearDown network for sandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" successfully" Oct 9 01:10:23.362622 containerd[1498]: time="2024-10-09T01:10:23.362589591Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:10:23.362773 containerd[1498]: time="2024-10-09T01:10:23.362703865Z" level=info msg="RemovePodSandbox \"006ab6cfda125b7e332114a6883081b61d82a695aa6ca7ecf6b0c5b59620fb3e\" returns successfully" Oct 9 01:10:23.363255 containerd[1498]: time="2024-10-09T01:10:23.363054091Z" level=info msg="StopPodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\"" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.389 [WARNING][5044] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0", GenerateName:"calico-kube-controllers-587766768d-", Namespace:"calico-system", SelfLink:"", UID:"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587766768d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e", Pod:"calico-kube-controllers-587766768d-6csts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9819ff6c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.389 [INFO][5044] k8s.go 608: Cleaning up netns ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.389 [INFO][5044] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" iface="eth0" netns="" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.389 [INFO][5044] k8s.go 615: Releasing IP address(es) ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.389 [INFO][5044] utils.go 188: Calico CNI releasing IP address ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.405 [INFO][5050] ipam_plugin.go 417: Releasing address using handleID ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.405 [INFO][5050] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.405 [INFO][5050] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.410 [WARNING][5050] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.410 [INFO][5050] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.411 [INFO][5050] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.414348 containerd[1498]: 2024-10-09 01:10:23.412 [INFO][5044] k8s.go 621: Teardown processing complete. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.415754 containerd[1498]: time="2024-10-09T01:10:23.414375059Z" level=info msg="TearDown network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" successfully" Oct 9 01:10:23.415754 containerd[1498]: time="2024-10-09T01:10:23.414397371Z" level=info msg="StopPodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" returns successfully" Oct 9 01:10:23.415754 containerd[1498]: time="2024-10-09T01:10:23.414855862Z" level=info msg="RemovePodSandbox for \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\"" Oct 9 01:10:23.415754 containerd[1498]: time="2024-10-09T01:10:23.414896128Z" level=info msg="Forcibly stopping sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\"" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.441 [WARNING][5068] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0", GenerateName:"calico-kube-controllers-587766768d-", Namespace:"calico-system", SelfLink:"", UID:"f0511c8d-9ec1-40e7-adb4-51ba165ea4f7", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587766768d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"554efc1e4bc58046b9ddce9fb08e4444281324812f04a5ead73527ee9eade58e", Pod:"calico-kube-controllers-587766768d-6csts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9819ff6c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.441 [INFO][5068] k8s.go 608: Cleaning up netns ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.441 [INFO][5068] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" iface="eth0" netns="" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.441 [INFO][5068] k8s.go 615: Releasing IP address(es) ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.441 [INFO][5068] utils.go 188: Calico CNI releasing IP address ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.462 [INFO][5074] ipam_plugin.go 417: Releasing address using handleID ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.463 [INFO][5074] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.463 [INFO][5074] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.467 [WARNING][5074] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.467 [INFO][5074] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" HandleID="k8s-pod-network.3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--kube--controllers--587766768d--6csts-eth0" Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.468 [INFO][5074] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.471879 containerd[1498]: 2024-10-09 01:10:23.470 [INFO][5068] k8s.go 621: Teardown processing complete. ContainerID="3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636" Oct 9 01:10:23.472301 containerd[1498]: time="2024-10-09T01:10:23.471885981Z" level=info msg="TearDown network for sandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" successfully" Oct 9 01:10:23.475649 containerd[1498]: time="2024-10-09T01:10:23.475603977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:10:23.475649 containerd[1498]: time="2024-10-09T01:10:23.475644052Z" level=info msg="RemovePodSandbox \"3746bd03329fcfc3ec6a2c760039f6bb0119ecf3f11af6afa5e6050634720636\" returns successfully" Oct 9 01:10:23.476101 containerd[1498]: time="2024-10-09T01:10:23.476053851Z" level=info msg="StopPodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\"" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.503 [WARNING][5092] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f4b14037-32ba-4566-b2be-c2127ceb8d7c", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13", Pod:"coredns-76f75df574-dvlf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic750293b019", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.503 [INFO][5092] k8s.go 608: Cleaning up netns ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.503 [INFO][5092] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" iface="eth0" netns="" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.503 [INFO][5092] k8s.go 615: Releasing IP address(es) ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.503 [INFO][5092] utils.go 188: Calico CNI releasing IP address ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.520 [INFO][5100] ipam_plugin.go 417: Releasing address using handleID ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.520 [INFO][5100] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.520 [INFO][5100] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.525 [WARNING][5100] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.526 [INFO][5100] ipam_plugin.go 445: Releasing address using workloadID ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.527 [INFO][5100] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.530687 containerd[1498]: 2024-10-09 01:10:23.529 [INFO][5092] k8s.go 621: Teardown processing complete. 
ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.531913 containerd[1498]: time="2024-10-09T01:10:23.530721297Z" level=info msg="TearDown network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" successfully" Oct 9 01:10:23.531913 containerd[1498]: time="2024-10-09T01:10:23.530742667Z" level=info msg="StopPodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" returns successfully" Oct 9 01:10:23.531913 containerd[1498]: time="2024-10-09T01:10:23.531583776Z" level=info msg="RemovePodSandbox for \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\"" Oct 9 01:10:23.531913 containerd[1498]: time="2024-10-09T01:10:23.531616517Z" level=info msg="Forcibly stopping sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\"" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.561 [WARNING][5118] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f4b14037-32ba-4566-b2be-c2127ceb8d7c", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"5ad1833a4348b95a478032e65d9f21fb4cfae611657562e4cf05bd2c725ffe13", Pod:"coredns-76f75df574-dvlf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic750293b019", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.562 [INFO][5118] k8s.go 608: 
Cleaning up netns ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.562 [INFO][5118] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" iface="eth0" netns="" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.562 [INFO][5118] k8s.go 615: Releasing IP address(es) ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.562 [INFO][5118] utils.go 188: Calico CNI releasing IP address ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.577 [INFO][5125] ipam_plugin.go 417: Releasing address using handleID ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.577 [INFO][5125] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.577 [INFO][5125] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.581 [WARNING][5125] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.581 [INFO][5125] ipam_plugin.go 445: Releasing address using workloadID ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" HandleID="k8s-pod-network.289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Workload="ci--4116--0--0--2--50096a0261-k8s-coredns--76f75df574--dvlf5-eth0" Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.582 [INFO][5125] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:23.586688 containerd[1498]: 2024-10-09 01:10:23.584 [INFO][5118] k8s.go 621: Teardown processing complete. ContainerID="289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15" Oct 9 01:10:23.587081 containerd[1498]: time="2024-10-09T01:10:23.586711114Z" level=info msg="TearDown network for sandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" successfully" Oct 9 01:10:23.589764 containerd[1498]: time="2024-10-09T01:10:23.589738104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:10:23.589832 containerd[1498]: time="2024-10-09T01:10:23.589780393Z" level=info msg="RemovePodSandbox \"289156fa90180d571adc150f8150c2cc72a0e0de5700769ba380527a68d01f15\" returns successfully" Oct 9 01:10:34.983706 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.8cqn9s.mount: Deactivated successfully. 
Oct 9 01:10:37.616326 kubelet[2753]: I1009 01:10:37.616275 2753 topology_manager.go:215] "Topology Admit Handler" podUID="08938756-dea5-4145-8dff-972182c372a2" podNamespace="calico-apiserver" podName="calico-apiserver-6cff5b65f-qhfzw" Oct 9 01:10:37.618149 kubelet[2753]: I1009 01:10:37.618120 2753 topology_manager.go:215] "Topology Admit Handler" podUID="1346b2c9-772a-4a79-8b67-1543453cc252" podNamespace="calico-apiserver" podName="calico-apiserver-6cff5b65f-25jkk" Oct 9 01:10:37.639510 systemd[1]: Created slice kubepods-besteffort-pod1346b2c9_772a_4a79_8b67_1543453cc252.slice - libcontainer container kubepods-besteffort-pod1346b2c9_772a_4a79_8b67_1543453cc252.slice. Oct 9 01:10:37.656904 systemd[1]: Created slice kubepods-besteffort-pod08938756_dea5_4145_8dff_972182c372a2.slice - libcontainer container kubepods-besteffort-pod08938756_dea5_4145_8dff_972182c372a2.slice. Oct 9 01:10:37.764581 kubelet[2753]: I1009 01:10:37.764475 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxjp\" (UniqueName: \"kubernetes.io/projected/1346b2c9-772a-4a79-8b67-1543453cc252-kube-api-access-xlxjp\") pod \"calico-apiserver-6cff5b65f-25jkk\" (UID: \"1346b2c9-772a-4a79-8b67-1543453cc252\") " pod="calico-apiserver/calico-apiserver-6cff5b65f-25jkk" Oct 9 01:10:37.766018 kubelet[2753]: I1009 01:10:37.765973 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgtj\" (UniqueName: \"kubernetes.io/projected/08938756-dea5-4145-8dff-972182c372a2-kube-api-access-8qgtj\") pod \"calico-apiserver-6cff5b65f-qhfzw\" (UID: \"08938756-dea5-4145-8dff-972182c372a2\") " pod="calico-apiserver/calico-apiserver-6cff5b65f-qhfzw" Oct 9 01:10:37.766206 kubelet[2753]: I1009 01:10:37.766098 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/1346b2c9-772a-4a79-8b67-1543453cc252-calico-apiserver-certs\") pod \"calico-apiserver-6cff5b65f-25jkk\" (UID: \"1346b2c9-772a-4a79-8b67-1543453cc252\") " pod="calico-apiserver/calico-apiserver-6cff5b65f-25jkk" Oct 9 01:10:37.766206 kubelet[2753]: I1009 01:10:37.766131 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/08938756-dea5-4145-8dff-972182c372a2-calico-apiserver-certs\") pod \"calico-apiserver-6cff5b65f-qhfzw\" (UID: \"08938756-dea5-4145-8dff-972182c372a2\") " pod="calico-apiserver/calico-apiserver-6cff5b65f-qhfzw" Oct 9 01:10:37.957433 containerd[1498]: time="2024-10-09T01:10:37.957296595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cff5b65f-25jkk,Uid:1346b2c9-772a-4a79-8b67-1543453cc252,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:10:37.964083 containerd[1498]: time="2024-10-09T01:10:37.963994204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cff5b65f-qhfzw,Uid:08938756-dea5-4145-8dff-972182c372a2,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:10:38.109032 systemd-networkd[1389]: cali33a5ea0e1e9: Link UP Oct 9 01:10:38.110781 systemd-networkd[1389]: cali33a5ea0e1e9: Gained carrier Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.021 [INFO][5200] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0 calico-apiserver-6cff5b65f- calico-apiserver 08938756-dea5-4145-8dff-972182c372a2 880 0 2024-10-09 01:10:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cff5b65f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-2-50096a0261 
calico-apiserver-6cff5b65f-qhfzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33a5ea0e1e9 [] []}} ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.021 [INFO][5200] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.058 [INFO][5219] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" HandleID="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.076 [INFO][5219] ipam_plugin.go 270: Auto assigning IP ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" HandleID="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-2-50096a0261", "pod":"calico-apiserver-6cff5b65f-qhfzw", "timestamp":"2024-10-09 01:10:38.054950436 +0000 UTC"}, Hostname:"ci-4116-0-0-2-50096a0261", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.076 [INFO][5219] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.076 [INFO][5219] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.076 [INFO][5219] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-2-50096a0261' Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.078 [INFO][5219] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.082 [INFO][5219] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.086 [INFO][5219] ipam.go 489: Trying affinity for 192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.088 [INFO][5219] ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.090 [INFO][5219] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.090 [INFO][5219] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.091 [INFO][5219] ipam.go 1685: Creating new handle: k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2 Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.094 [INFO][5219] ipam.go 1203: Writing 
block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.099 [INFO][5219] ipam.go 1216: Successfully claimed IPs: [192.168.44.5/26] block=192.168.44.0/26 handle="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.099 [INFO][5219] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.5/26] handle="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.099 [INFO][5219] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:38.134601 containerd[1498]: 2024-10-09 01:10:38.099 [INFO][5219] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.5/26] IPv6=[] ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" HandleID="k8s-pod-network.463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.137675 containerd[1498]: 2024-10-09 01:10:38.102 [INFO][5200] k8s.go 386: Populated endpoint ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0", GenerateName:"calico-apiserver-6cff5b65f-", Namespace:"calico-apiserver", SelfLink:"", UID:"08938756-dea5-4145-8dff-972182c372a2", ResourceVersion:"880", Generation:0, 
CreationTimestamp:time.Date(2024, time.October, 9, 1, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cff5b65f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"", Pod:"calico-apiserver-6cff5b65f-qhfzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a5ea0e1e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:38.137675 containerd[1498]: 2024-10-09 01:10:38.102 [INFO][5200] k8s.go 387: Calico CNI using IPs: [192.168.44.5/32] ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.137675 containerd[1498]: 2024-10-09 01:10:38.102 [INFO][5200] dataplane_linux.go 68: Setting the host side veth name to cali33a5ea0e1e9 ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.137675 containerd[1498]: 2024-10-09 01:10:38.110 [INFO][5200] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.137675 containerd[1498]: 2024-10-09 01:10:38.112 [INFO][5200] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0", GenerateName:"calico-apiserver-6cff5b65f-", Namespace:"calico-apiserver", SelfLink:"", UID:"08938756-dea5-4145-8dff-972182c372a2", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cff5b65f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2", Pod:"calico-apiserver-6cff5b65f-qhfzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a5ea0e1e9", MAC:"96:7f:5b:84:9f:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:38.137675 containerd[1498]: 2024-10-09 01:10:38.124 [INFO][5200] k8s.go 500: Wrote updated endpoint to datastore ContainerID="463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-qhfzw" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--qhfzw-eth0" Oct 9 01:10:38.164393 systemd-networkd[1389]: cali2f0b3e3ee48: Link UP Oct 9 01:10:38.165503 systemd-networkd[1389]: cali2f0b3e3ee48: Gained carrier Oct 9 01:10:38.184693 containerd[1498]: time="2024-10-09T01:10:38.179279597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:38.184693 containerd[1498]: time="2024-10-09T01:10:38.180150107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:38.184693 containerd[1498]: time="2024-10-09T01:10:38.180168071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:38.184693 containerd[1498]: time="2024-10-09T01:10:38.181586985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.026 [INFO][5194] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0 calico-apiserver-6cff5b65f- calico-apiserver 1346b2c9-772a-4a79-8b67-1543453cc252 881 0 2024-10-09 01:10:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cff5b65f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-2-50096a0261 calico-apiserver-6cff5b65f-25jkk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f0b3e3ee48 [] []}} ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.027 [INFO][5194] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.077 [INFO][5223] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" HandleID="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.084 [INFO][5223] ipam_plugin.go 270: Auto assigning IP 
ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" HandleID="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290d80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-2-50096a0261", "pod":"calico-apiserver-6cff5b65f-25jkk", "timestamp":"2024-10-09 01:10:38.077209594 +0000 UTC"}, Hostname:"ci-4116-0-0-2-50096a0261", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.084 [INFO][5223] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.099 [INFO][5223] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.099 [INFO][5223] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-2-50096a0261' Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.103 [INFO][5223] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.113 [INFO][5223] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.131 [INFO][5223] ipam.go 489: Trying affinity for 192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.137 [INFO][5223] ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.142 [INFO][5223] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.142 [INFO][5223] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.144 [INFO][5223] ipam.go 1685: Creating new handle: k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00 Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.149 [INFO][5223] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.157 [INFO][5223] ipam.go 1216: Successfully claimed IPs: [192.168.44.6/26] block=192.168.44.0/26 
handle="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.157 [INFO][5223] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.6/26] handle="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" host="ci-4116-0-0-2-50096a0261" Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.158 [INFO][5223] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:10:38.192379 containerd[1498]: 2024-10-09 01:10:38.158 [INFO][5223] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.6/26] IPv6=[] ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" HandleID="k8s-pod-network.a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Workload="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.193313 containerd[1498]: 2024-10-09 01:10:38.161 [INFO][5194] k8s.go 386: Populated endpoint ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0", GenerateName:"calico-apiserver-6cff5b65f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1346b2c9-772a-4a79-8b67-1543453cc252", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cff5b65f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"", Pod:"calico-apiserver-6cff5b65f-25jkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f0b3e3ee48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:38.193313 containerd[1498]: 2024-10-09 01:10:38.161 [INFO][5194] k8s.go 387: Calico CNI using IPs: [192.168.44.6/32] ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.193313 containerd[1498]: 2024-10-09 01:10:38.161 [INFO][5194] dataplane_linux.go 68: Setting the host side veth name to cali2f0b3e3ee48 ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.193313 containerd[1498]: 2024-10-09 01:10:38.165 [INFO][5194] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.193313 containerd[1498]: 2024-10-09 01:10:38.166 
[INFO][5194] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0", GenerateName:"calico-apiserver-6cff5b65f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1346b2c9-772a-4a79-8b67-1543453cc252", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cff5b65f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-2-50096a0261", ContainerID:"a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00", Pod:"calico-apiserver-6cff5b65f-25jkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f0b3e3ee48", MAC:"72:5a:4a:7a:f3:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:10:38.193313 containerd[1498]: 2024-10-09 01:10:38.186 [INFO][5194] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00" Namespace="calico-apiserver" Pod="calico-apiserver-6cff5b65f-25jkk" WorkloadEndpoint="ci--4116--0--0--2--50096a0261-k8s-calico--apiserver--6cff5b65f--25jkk-eth0" Oct 9 01:10:38.215931 systemd[1]: Started cri-containerd-463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2.scope - libcontainer container 463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2. Oct 9 01:10:38.226463 containerd[1498]: time="2024-10-09T01:10:38.226120269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:10:38.226463 containerd[1498]: time="2024-10-09T01:10:38.226179219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:10:38.226463 containerd[1498]: time="2024-10-09T01:10:38.226189820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:38.226463 containerd[1498]: time="2024-10-09T01:10:38.226302402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:10:38.260188 systemd[1]: Started cri-containerd-a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00.scope - libcontainer container a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00. 
Oct 9 01:10:38.282569 containerd[1498]: time="2024-10-09T01:10:38.282369902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cff5b65f-qhfzw,Uid:08938756-dea5-4145-8dff-972182c372a2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2\"" Oct 9 01:10:38.285390 containerd[1498]: time="2024-10-09T01:10:38.285285887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:10:38.330412 containerd[1498]: time="2024-10-09T01:10:38.330367514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cff5b65f-25jkk,Uid:1346b2c9-772a-4a79-8b67-1543453cc252,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00\"" Oct 9 01:10:39.305395 systemd-networkd[1389]: cali2f0b3e3ee48: Gained IPv6LL Oct 9 01:10:39.689180 systemd-networkd[1389]: cali33a5ea0e1e9: Gained IPv6LL Oct 9 01:10:41.044753 containerd[1498]: time="2024-10-09T01:10:41.044707414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:41.045764 containerd[1498]: time="2024-10-09T01:10:41.045636987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 01:10:41.046665 containerd[1498]: time="2024-10-09T01:10:41.046626933Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:41.048397 containerd[1498]: time="2024-10-09T01:10:41.048376984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:41.048977 containerd[1498]: 
time="2024-10-09T01:10:41.048820701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.763505417s" Oct 9 01:10:41.048977 containerd[1498]: time="2024-10-09T01:10:41.048846229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 01:10:41.050933 containerd[1498]: time="2024-10-09T01:10:41.049405053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:10:41.050933 containerd[1498]: time="2024-10-09T01:10:41.050624843Z" level=info msg="CreateContainer within sandbox \"463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:10:41.069669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864557195.mount: Deactivated successfully. Oct 9 01:10:41.073514 containerd[1498]: time="2024-10-09T01:10:41.073486264Z" level=info msg="CreateContainer within sandbox \"463009739b59ac6808008cbdf97122f3aad69a138a7643eba144c94154ae97a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3ac0b22259494dcdc0e79bcfaa5883d431187fd6aaee305a780052cca595a84\"" Oct 9 01:10:41.073814 containerd[1498]: time="2024-10-09T01:10:41.073798303Z" level=info msg="StartContainer for \"f3ac0b22259494dcdc0e79bcfaa5883d431187fd6aaee305a780052cca595a84\"" Oct 9 01:10:41.103178 systemd[1]: Started cri-containerd-f3ac0b22259494dcdc0e79bcfaa5883d431187fd6aaee305a780052cca595a84.scope - libcontainer container f3ac0b22259494dcdc0e79bcfaa5883d431187fd6aaee305a780052cca595a84. 
Oct 9 01:10:41.138062 containerd[1498]: time="2024-10-09T01:10:41.137959345Z" level=info msg="StartContainer for \"f3ac0b22259494dcdc0e79bcfaa5883d431187fd6aaee305a780052cca595a84\" returns successfully" Oct 9 01:10:41.343110 kubelet[2753]: I1009 01:10:41.342434 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cff5b65f-qhfzw" podStartSLOduration=1.577512738 podStartE2EDuration="4.342400952s" podCreationTimestamp="2024-10-09 01:10:37 +0000 UTC" firstStartedPulling="2024-10-09 01:10:38.284408243 +0000 UTC m=+75.329070816" lastFinishedPulling="2024-10-09 01:10:41.049296459 +0000 UTC m=+78.093959030" observedRunningTime="2024-10-09 01:10:41.340955486 +0000 UTC m=+78.385618058" watchObservedRunningTime="2024-10-09 01:10:41.342400952 +0000 UTC m=+78.387063524" Oct 9 01:10:41.431667 containerd[1498]: time="2024-10-09T01:10:41.431623685Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:10:41.432806 containerd[1498]: time="2024-10-09T01:10:41.432750138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 01:10:41.434468 containerd[1498]: time="2024-10-09T01:10:41.434444574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 385.018221ms" Oct 9 01:10:41.434508 containerd[1498]: time="2024-10-09T01:10:41.434471314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 01:10:41.435926 containerd[1498]: 
time="2024-10-09T01:10:41.435716983Z" level=info msg="CreateContainer within sandbox \"a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:10:41.453273 containerd[1498]: time="2024-10-09T01:10:41.453236351Z" level=info msg="CreateContainer within sandbox \"a4ce0cb17e3924a7c02bf70facb0e6ab16d62dbcb8938d26d20922f580031c00\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"91340328b09d61bcc266700e5a21a554afd1cc59e5b4a496d31ba4da308dc4ef\"" Oct 9 01:10:41.454646 containerd[1498]: time="2024-10-09T01:10:41.454589963Z" level=info msg="StartContainer for \"91340328b09d61bcc266700e5a21a554afd1cc59e5b4a496d31ba4da308dc4ef\"" Oct 9 01:10:41.484138 systemd[1]: Started cri-containerd-91340328b09d61bcc266700e5a21a554afd1cc59e5b4a496d31ba4da308dc4ef.scope - libcontainer container 91340328b09d61bcc266700e5a21a554afd1cc59e5b4a496d31ba4da308dc4ef. Oct 9 01:10:41.541192 containerd[1498]: time="2024-10-09T01:10:41.541017274Z" level=info msg="StartContainer for \"91340328b09d61bcc266700e5a21a554afd1cc59e5b4a496d31ba4da308dc4ef\" returns successfully" Oct 9 01:10:42.345056 kubelet[2753]: I1009 01:10:42.344988 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cff5b65f-25jkk" podStartSLOduration=2.242082539 podStartE2EDuration="5.344949399s" podCreationTimestamp="2024-10-09 01:10:37 +0000 UTC" firstStartedPulling="2024-10-09 01:10:38.331778003 +0000 UTC m=+75.376440574" lastFinishedPulling="2024-10-09 01:10:41.434644862 +0000 UTC m=+78.479307434" observedRunningTime="2024-10-09 01:10:42.344554313 +0000 UTC m=+79.389216895" watchObservedRunningTime="2024-10-09 01:10:42.344949399 +0000 UTC m=+79.389611970" Oct 9 01:11:14.670365 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.A7e5fF.mount: Deactivated successfully. 
Oct 9 01:11:35.030465 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.0VeUZy.mount: Deactivated successfully. Oct 9 01:12:04.983320 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.SJQupf.mount: Deactivated successfully. Oct 9 01:12:35.019406 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.ObCGnA.mount: Deactivated successfully. Oct 9 01:12:44.670959 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.I99Vfs.mount: Deactivated successfully. Oct 9 01:13:14.559701 update_engine[1486]: I20241009 01:13:14.559595 1486 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 9 01:13:14.563719 update_engine[1486]: I20241009 01:13:14.559794 1486 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 9 01:13:14.564105 update_engine[1486]: I20241009 01:13:14.564074 1486 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 9 01:13:14.564883 update_engine[1486]: I20241009 01:13:14.564862 1486 omaha_request_params.cc:62] Current group set to alpha Oct 9 01:13:14.565057 update_engine[1486]: I20241009 01:13:14.564987 1486 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 9 01:13:14.565057 update_engine[1486]: I20241009 01:13:14.565001 1486 update_attempter.cc:643] Scheduling an action processor start. 
Oct 9 01:13:14.565057 update_engine[1486]: I20241009 01:13:14.565042 1486 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 01:13:14.565151 update_engine[1486]: I20241009 01:13:14.565084 1486 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 9 01:13:14.565175 update_engine[1486]: I20241009 01:13:14.565150 1486 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 01:13:14.565175 update_engine[1486]: I20241009 01:13:14.565160 1486 omaha_request_action.cc:272] Request: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: Oct 9 01:13:14.565175 update_engine[1486]: I20241009 01:13:14.565168 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:13:14.582166 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 9 01:13:14.583453 update_engine[1486]: I20241009 01:13:14.583418 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:13:14.583744 update_engine[1486]: I20241009 01:13:14.583694 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 01:13:14.586336 update_engine[1486]: E20241009 01:13:14.586299 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:13:14.586373 update_engine[1486]: I20241009 01:13:14.586362 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 9 01:13:24.447232 update_engine[1486]: I20241009 01:13:24.447140 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:13:24.447650 update_engine[1486]: I20241009 01:13:24.447463 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:13:24.447863 update_engine[1486]: I20241009 01:13:24.447824 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 01:13:24.448449 update_engine[1486]: E20241009 01:13:24.448417 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:13:24.448500 update_engine[1486]: I20241009 01:13:24.448462 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 9 01:13:34.446901 update_engine[1486]: I20241009 01:13:34.446823 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:13:34.448223 update_engine[1486]: I20241009 01:13:34.447105 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:13:34.448223 update_engine[1486]: I20241009 01:13:34.447335 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 01:13:34.448223 update_engine[1486]: E20241009 01:13:34.447908 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:13:34.448223 update_engine[1486]: I20241009 01:13:34.447952 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 9 01:13:44.442875 update_engine[1486]: I20241009 01:13:44.442790 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:13:44.443424 update_engine[1486]: I20241009 01:13:44.443094 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:13:44.443424 update_engine[1486]: I20241009 01:13:44.443359 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 01:13:44.444126 update_engine[1486]: E20241009 01:13:44.444088 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:13:44.444187 update_engine[1486]: I20241009 01:13:44.444140 1486 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 01:13:44.444187 update_engine[1486]: I20241009 01:13:44.444151 1486 omaha_request_action.cc:617] Omaha request response: Oct 9 01:13:44.444257 update_engine[1486]: E20241009 01:13:44.444242 1486 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 9 01:13:44.444286 update_engine[1486]: I20241009 01:13:44.444263 1486 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 9 01:13:44.444286 update_engine[1486]: I20241009 01:13:44.444271 1486 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 01:13:44.444286 update_engine[1486]: I20241009 01:13:44.444278 1486 update_attempter.cc:306] Processing Done. Oct 9 01:13:44.444358 update_engine[1486]: E20241009 01:13:44.444296 1486 update_attempter.cc:619] Update failed. 
Oct 9 01:13:44.444358 update_engine[1486]: I20241009 01:13:44.444304 1486 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 9 01:13:44.444358 update_engine[1486]: I20241009 01:13:44.444311 1486 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 9 01:13:44.444358 update_engine[1486]: I20241009 01:13:44.444319 1486 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 9 01:13:44.444455 update_engine[1486]: I20241009 01:13:44.444391 1486 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 01:13:44.444455 update_engine[1486]: I20241009 01:13:44.444413 1486 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 01:13:44.444455 update_engine[1486]: I20241009 01:13:44.444420 1486 omaha_request_action.cc:272] Request: Oct 9 01:13:44.444455 update_engine[1486]: Oct 9 01:13:44.444455 update_engine[1486]: Oct 9 01:13:44.444455 update_engine[1486]: Oct 9 01:13:44.444455 update_engine[1486]: Oct 9 01:13:44.444455 update_engine[1486]: Oct 9 01:13:44.444455 update_engine[1486]: Oct 9 01:13:44.444455 update_engine[1486]: I20241009 01:13:44.444429 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:13:44.444665 update_engine[1486]: I20241009 01:13:44.444571 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:13:44.444850 update_engine[1486]: I20241009 01:13:44.444715 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 01:13:44.445217 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 9 01:13:44.445496 update_engine[1486]: E20241009 01:13:44.445439 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:13:44.445496 update_engine[1486]: I20241009 01:13:44.445485 1486 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 01:13:44.445554 update_engine[1486]: I20241009 01:13:44.445494 1486 omaha_request_action.cc:617] Omaha request response: Oct 9 01:13:44.445554 update_engine[1486]: I20241009 01:13:44.445504 1486 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 01:13:44.445554 update_engine[1486]: I20241009 01:13:44.445513 1486 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 01:13:44.445554 update_engine[1486]: I20241009 01:13:44.445519 1486 update_attempter.cc:306] Processing Done. Oct 9 01:13:44.445554 update_engine[1486]: I20241009 01:13:44.445526 1486 update_attempter.cc:310] Error event sent. Oct 9 01:13:44.445554 update_engine[1486]: I20241009 01:13:44.445537 1486 update_check_scheduler.cc:74] Next update check in 47m14s Oct 9 01:13:44.445836 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 9 01:14:13.629266 systemd[1]: Started sshd@7-188.245.175.223:22-139.178.68.195:56310.service - OpenSSH per-connection server daemon (139.178.68.195:56310). Oct 9 01:14:14.640440 sshd[5971]: Accepted publickey for core from 139.178.68.195 port 56310 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:14.644615 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:14.651331 systemd-logind[1482]: New session 8 of user core. 
Oct 9 01:14:14.657383 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:14:15.727960 sshd[5971]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:15.732133 systemd[1]: sshd@7-188.245.175.223:22-139.178.68.195:56310.service: Deactivated successfully. Oct 9 01:14:15.734712 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:14:15.737786 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:14:15.739138 systemd-logind[1482]: Removed session 8. Oct 9 01:14:20.902301 systemd[1]: Started sshd@8-188.245.175.223:22-139.178.68.195:47180.service - OpenSSH per-connection server daemon (139.178.68.195:47180). Oct 9 01:14:21.888345 sshd[6008]: Accepted publickey for core from 139.178.68.195 port 47180 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:21.890115 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:21.894534 systemd-logind[1482]: New session 9 of user core. Oct 9 01:14:21.899155 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:14:22.644526 sshd[6008]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:22.648598 systemd[1]: sshd@8-188.245.175.223:22-139.178.68.195:47180.service: Deactivated successfully. Oct 9 01:14:22.650668 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:14:22.652077 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:14:22.653494 systemd-logind[1482]: Removed session 9. Oct 9 01:14:27.813889 systemd[1]: Started sshd@9-188.245.175.223:22-139.178.68.195:47196.service - OpenSSH per-connection server daemon (139.178.68.195:47196). 
Oct 9 01:14:28.817232 sshd[6029]: Accepted publickey for core from 139.178.68.195 port 47196 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:28.818825 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:28.823927 systemd-logind[1482]: New session 10 of user core. Oct 9 01:14:28.827141 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:14:29.563966 sshd[6029]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:29.566693 systemd[1]: sshd@9-188.245.175.223:22-139.178.68.195:47196.service: Deactivated successfully. Oct 9 01:14:29.568591 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:14:29.569767 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:14:29.570838 systemd-logind[1482]: Removed session 10. Oct 9 01:14:34.733680 systemd[1]: Started sshd@10-188.245.175.223:22-139.178.68.195:38376.service - OpenSSH per-connection server daemon (139.178.68.195:38376). Oct 9 01:14:35.024247 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.lshMBf.mount: Deactivated successfully. Oct 9 01:14:35.727118 sshd[6044]: Accepted publickey for core from 139.178.68.195 port 38376 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:35.731968 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:35.737087 systemd-logind[1482]: New session 11 of user core. Oct 9 01:14:35.739280 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:14:36.488553 sshd[6044]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:36.492983 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:14:36.494481 systemd[1]: sshd@10-188.245.175.223:22-139.178.68.195:38376.service: Deactivated successfully. 
Oct 9 01:14:36.499134 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:14:36.500892 systemd-logind[1482]: Removed session 11. Oct 9 01:14:41.659333 systemd[1]: Started sshd@11-188.245.175.223:22-139.178.68.195:39310.service - OpenSSH per-connection server daemon (139.178.68.195:39310). Oct 9 01:14:42.654747 sshd[6107]: Accepted publickey for core from 139.178.68.195 port 39310 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:42.656403 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:42.661354 systemd-logind[1482]: New session 12 of user core. Oct 9 01:14:42.665190 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:14:43.404838 sshd[6107]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:43.407959 systemd[1]: sshd@11-188.245.175.223:22-139.178.68.195:39310.service: Deactivated successfully. Oct 9 01:14:43.411063 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:14:43.411923 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:14:43.412906 systemd-logind[1482]: Removed session 12. Oct 9 01:14:48.577305 systemd[1]: Started sshd@12-188.245.175.223:22-139.178.68.195:39314.service - OpenSSH per-connection server daemon (139.178.68.195:39314). Oct 9 01:14:49.580736 sshd[6146]: Accepted publickey for core from 139.178.68.195 port 39314 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:49.582255 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:49.587217 systemd-logind[1482]: New session 13 of user core. Oct 9 01:14:49.594269 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:14:50.350801 sshd[6146]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:50.353755 systemd[1]: sshd@12-188.245.175.223:22-139.178.68.195:39314.service: Deactivated successfully. 
Oct 9 01:14:50.355984 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:14:50.358109 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:14:50.359307 systemd-logind[1482]: Removed session 13. Oct 9 01:14:55.537355 systemd[1]: Started sshd@13-188.245.175.223:22-139.178.68.195:52518.service - OpenSSH per-connection server daemon (139.178.68.195:52518). Oct 9 01:14:56.609775 sshd[6172]: Accepted publickey for core from 139.178.68.195 port 52518 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:14:56.611818 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:14:56.617466 systemd-logind[1482]: New session 14 of user core. Oct 9 01:14:56.622191 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:14:57.408550 sshd[6172]: pam_unix(sshd:session): session closed for user core Oct 9 01:14:57.411899 systemd[1]: sshd@13-188.245.175.223:22-139.178.68.195:52518.service: Deactivated successfully. Oct 9 01:14:57.414321 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:14:57.416242 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:14:57.418248 systemd-logind[1482]: Removed session 14. Oct 9 01:15:02.586301 systemd[1]: Started sshd@14-188.245.175.223:22-139.178.68.195:53814.service - OpenSSH per-connection server daemon (139.178.68.195:53814). Oct 9 01:15:03.640230 sshd[6191]: Accepted publickey for core from 139.178.68.195 port 53814 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:15:03.641910 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:15:03.646123 systemd-logind[1482]: New session 15 of user core. Oct 9 01:15:03.651140 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 9 01:15:04.441631 sshd[6191]: pam_unix(sshd:session): session closed for user core Oct 9 01:15:04.445676 systemd[1]: sshd@14-188.245.175.223:22-139.178.68.195:53814.service: Deactivated successfully. Oct 9 01:15:04.447563 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:15:04.448386 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:15:04.449864 systemd-logind[1482]: Removed session 15. Oct 9 01:15:09.629698 systemd[1]: Started sshd@15-188.245.175.223:22-139.178.68.195:53820.service - OpenSSH per-connection server daemon (139.178.68.195:53820). Oct 9 01:15:10.738148 sshd[6233]: Accepted publickey for core from 139.178.68.195 port 53820 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:15:10.740985 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:15:10.749383 systemd-logind[1482]: New session 16 of user core. Oct 9 01:15:10.755286 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:15:11.577000 sshd[6233]: pam_unix(sshd:session): session closed for user core Oct 9 01:15:11.579819 systemd[1]: sshd@15-188.245.175.223:22-139.178.68.195:53820.service: Deactivated successfully. Oct 9 01:15:11.582396 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:15:11.584258 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:15:11.585732 systemd-logind[1482]: Removed session 16. Oct 9 01:15:14.690319 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.AnA0ol.mount: Deactivated successfully. Oct 9 01:15:16.771294 systemd[1]: Started sshd@16-188.245.175.223:22-139.178.68.195:51414.service - OpenSSH per-connection server daemon (139.178.68.195:51414). 
Oct 9 01:15:17.858193 sshd[6266]: Accepted publickey for core from 139.178.68.195 port 51414 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:15:17.862013 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:15:17.866957 systemd-logind[1482]: New session 17 of user core. Oct 9 01:15:17.872199 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:15:18.662816 sshd[6266]: pam_unix(sshd:session): session closed for user core Oct 9 01:15:18.667519 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:15:18.668624 systemd[1]: sshd@16-188.245.175.223:22-139.178.68.195:51414.service: Deactivated successfully. Oct 9 01:15:18.672964 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:15:18.677405 systemd-logind[1482]: Removed session 17. Oct 9 01:15:23.851277 systemd[1]: Started sshd@17-188.245.175.223:22-139.178.68.195:35794.service - OpenSSH per-connection server daemon (139.178.68.195:35794). Oct 9 01:15:24.913203 sshd[6289]: Accepted publickey for core from 139.178.68.195 port 35794 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:15:24.914857 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:15:24.920240 systemd-logind[1482]: New session 18 of user core. Oct 9 01:15:24.924174 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:15:25.693320 sshd[6289]: pam_unix(sshd:session): session closed for user core Oct 9 01:15:25.697757 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:15:25.698555 systemd[1]: sshd@17-188.245.175.223:22-139.178.68.195:35794.service: Deactivated successfully. Oct 9 01:15:25.701863 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:15:25.702947 systemd-logind[1482]: Removed session 18. 
Oct 9 01:15:30.882226 systemd[1]: Started sshd@18-188.245.175.223:22-139.178.68.195:55014.service - OpenSSH per-connection server daemon (139.178.68.195:55014). Oct 9 01:15:31.997837 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 55014 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:15:31.999548 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:15:32.004955 systemd-logind[1482]: New session 19 of user core. Oct 9 01:15:32.012191 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:15:32.812062 sshd[6308]: pam_unix(sshd:session): session closed for user core Oct 9 01:15:32.815301 systemd[1]: sshd@18-188.245.175.223:22-139.178.68.195:55014.service: Deactivated successfully. Oct 9 01:15:32.817817 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:15:32.820189 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:15:32.821819 systemd-logind[1482]: Removed session 19. Oct 9 01:15:34.987924 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.RZochM.mount: Deactivated successfully. Oct 9 01:15:37.999325 systemd[1]: Started sshd@19-188.245.175.223:22-139.178.68.195:55020.service - OpenSSH per-connection server daemon (139.178.68.195:55020). Oct 9 01:15:39.101229 sshd[6368]: Accepted publickey for core from 139.178.68.195 port 55020 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M Oct 9 01:15:39.104140 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:15:39.113171 systemd-logind[1482]: New session 20 of user core. Oct 9 01:15:39.118268 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:15:39.914239 sshd[6368]: pam_unix(sshd:session): session closed for user core Oct 9 01:15:39.919046 systemd[1]: sshd@19-188.245.175.223:22-139.178.68.195:55020.service: Deactivated successfully. 
Oct 9 01:15:39.921247 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 01:15:39.922168 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit.
Oct 9 01:15:39.923493 systemd-logind[1482]: Removed session 20.
Oct 9 01:15:45.087653 systemd[1]: Started sshd@20-188.245.175.223:22-139.178.68.195:37688.service - OpenSSH per-connection server daemon (139.178.68.195:37688).
Oct 9 01:15:46.099837 sshd[6408]: Accepted publickey for core from 139.178.68.195 port 37688 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:15:46.101370 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:15:46.105190 systemd-logind[1482]: New session 21 of user core.
Oct 9 01:15:46.109188 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 01:15:46.881973 sshd[6408]: pam_unix(sshd:session): session closed for user core
Oct 9 01:15:46.885784 systemd[1]: sshd@20-188.245.175.223:22-139.178.68.195:37688.service: Deactivated successfully.
Oct 9 01:15:46.887999 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 01:15:46.888911 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit.
Oct 9 01:15:46.890086 systemd-logind[1482]: Removed session 21.
Oct 9 01:15:52.062301 systemd[1]: Started sshd@21-188.245.175.223:22-139.178.68.195:47576.service - OpenSSH per-connection server daemon (139.178.68.195:47576).
Oct 9 01:15:53.057339 sshd[6426]: Accepted publickey for core from 139.178.68.195 port 47576 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:15:53.060732 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:15:53.065657 systemd-logind[1482]: New session 22 of user core.
Oct 9 01:15:53.069190 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 01:15:53.797117 sshd[6426]: pam_unix(sshd:session): session closed for user core
Oct 9 01:15:53.799980 systemd[1]: sshd@21-188.245.175.223:22-139.178.68.195:47576.service: Deactivated successfully.
Oct 9 01:15:53.802359 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 01:15:53.804075 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit.
Oct 9 01:15:53.805437 systemd-logind[1482]: Removed session 22.
Oct 9 01:15:58.987523 systemd[1]: Started sshd@22-188.245.175.223:22-139.178.68.195:47588.service - OpenSSH per-connection server daemon (139.178.68.195:47588).
Oct 9 01:16:00.041647 sshd[6443]: Accepted publickey for core from 139.178.68.195 port 47588 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:00.043661 sshd[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:00.049782 systemd-logind[1482]: New session 23 of user core.
Oct 9 01:16:00.057296 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 01:16:00.832006 sshd[6443]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:00.835360 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit.
Oct 9 01:16:00.836292 systemd[1]: sshd@22-188.245.175.223:22-139.178.68.195:47588.service: Deactivated successfully.
Oct 9 01:16:00.838667 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 01:16:00.839508 systemd-logind[1482]: Removed session 23.
Oct 9 01:16:06.030270 systemd[1]: Started sshd@23-188.245.175.223:22-139.178.68.195:45310.service - OpenSSH per-connection server daemon (139.178.68.195:45310).
Oct 9 01:16:07.157228 sshd[6483]: Accepted publickey for core from 139.178.68.195 port 45310 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:07.159430 sshd[6483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:07.164599 systemd-logind[1482]: New session 24 of user core.
Oct 9 01:16:07.168170 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 01:16:07.948858 sshd[6483]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:07.951826 systemd[1]: sshd@23-188.245.175.223:22-139.178.68.195:45310.service: Deactivated successfully.
Oct 9 01:16:07.953705 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 01:16:07.955917 systemd-logind[1482]: Session 24 logged out. Waiting for processes to exit.
Oct 9 01:16:07.957236 systemd-logind[1482]: Removed session 24.
Oct 9 01:16:13.127992 systemd[1]: Started sshd@24-188.245.175.223:22-139.178.68.195:52220.service - OpenSSH per-connection server daemon (139.178.68.195:52220).
Oct 9 01:16:14.132689 sshd[6505]: Accepted publickey for core from 139.178.68.195 port 52220 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:14.134431 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:14.138846 systemd-logind[1482]: New session 25 of user core.
Oct 9 01:16:14.145177 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 01:16:14.666795 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.pMbDge.mount: Deactivated successfully.
Oct 9 01:16:14.918069 sshd[6505]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:14.922847 systemd[1]: sshd@24-188.245.175.223:22-139.178.68.195:52220.service: Deactivated successfully.
Oct 9 01:16:14.925520 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 01:16:14.926746 systemd-logind[1482]: Session 25 logged out. Waiting for processes to exit.
Oct 9 01:16:14.927916 systemd-logind[1482]: Removed session 25.
Oct 9 01:16:20.112194 systemd[1]: Started sshd@25-188.245.175.223:22-139.178.68.195:52236.service - OpenSSH per-connection server daemon (139.178.68.195:52236).
Oct 9 01:16:21.195307 sshd[6538]: Accepted publickey for core from 139.178.68.195 port 52236 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:21.197312 sshd[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:21.203053 systemd-logind[1482]: New session 26 of user core.
Oct 9 01:16:21.207153 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 01:16:22.001007 sshd[6538]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:22.005737 systemd[1]: sshd@25-188.245.175.223:22-139.178.68.195:52236.service: Deactivated successfully.
Oct 9 01:16:22.008232 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 01:16:22.009069 systemd-logind[1482]: Session 26 logged out. Waiting for processes to exit.
Oct 9 01:16:22.010196 systemd-logind[1482]: Removed session 26.
Oct 9 01:16:27.171201 systemd[1]: Started sshd@26-188.245.175.223:22-139.178.68.195:52240.service - OpenSSH per-connection server daemon (139.178.68.195:52240).
Oct 9 01:16:28.161888 sshd[6560]: Accepted publickey for core from 139.178.68.195 port 52240 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:28.163644 sshd[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:28.168810 systemd-logind[1482]: New session 27 of user core.
Oct 9 01:16:28.172238 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 9 01:16:28.912767 sshd[6560]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:28.917508 systemd[1]: sshd@26-188.245.175.223:22-139.178.68.195:52240.service: Deactivated successfully.
Oct 9 01:16:28.920274 systemd[1]: session-27.scope: Deactivated successfully.
Oct 9 01:16:28.921458 systemd-logind[1482]: Session 27 logged out. Waiting for processes to exit.
Oct 9 01:16:28.922890 systemd-logind[1482]: Removed session 27.
Oct 9 01:16:34.098359 systemd[1]: Started sshd@27-188.245.175.223:22-139.178.68.195:49064.service - OpenSSH per-connection server daemon (139.178.68.195:49064).
Oct 9 01:16:34.984557 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.AyDv0R.mount: Deactivated successfully.
Oct 9 01:16:35.146824 sshd[6591]: Accepted publickey for core from 139.178.68.195 port 49064 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:35.149440 sshd[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:35.154802 systemd-logind[1482]: New session 28 of user core.
Oct 9 01:16:35.161259 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 9 01:16:35.953785 sshd[6591]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:35.957581 systemd[1]: sshd@27-188.245.175.223:22-139.178.68.195:49064.service: Deactivated successfully.
Oct 9 01:16:35.960681 systemd[1]: session-28.scope: Deactivated successfully.
Oct 9 01:16:35.962377 systemd-logind[1482]: Session 28 logged out. Waiting for processes to exit.
Oct 9 01:16:35.963563 systemd-logind[1482]: Removed session 28.
Oct 9 01:16:41.144868 systemd[1]: Started sshd@28-188.245.175.223:22-139.178.68.195:47036.service - OpenSSH per-connection server daemon (139.178.68.195:47036).
Oct 9 01:16:42.277953 sshd[6655]: Accepted publickey for core from 139.178.68.195 port 47036 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:42.279720 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:42.284147 systemd-logind[1482]: New session 29 of user core.
Oct 9 01:16:42.290170 systemd[1]: Started session-29.scope - Session 29 of User core.
Oct 9 01:16:43.126250 sshd[6655]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:43.129905 systemd-logind[1482]: Session 29 logged out. Waiting for processes to exit.
Oct 9 01:16:43.130427 systemd[1]: sshd@28-188.245.175.223:22-139.178.68.195:47036.service: Deactivated successfully.
Oct 9 01:16:43.132693 systemd[1]: session-29.scope: Deactivated successfully.
Oct 9 01:16:43.133481 systemd-logind[1482]: Removed session 29.
Oct 9 01:16:48.314380 systemd[1]: Started sshd@29-188.245.175.223:22-139.178.68.195:47042.service - OpenSSH per-connection server daemon (139.178.68.195:47042).
Oct 9 01:16:49.382771 sshd[6690]: Accepted publickey for core from 139.178.68.195 port 47042 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:49.385513 sshd[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:49.393850 systemd-logind[1482]: New session 30 of user core.
Oct 9 01:16:49.401239 systemd[1]: Started session-30.scope - Session 30 of User core.
Oct 9 01:16:50.203722 sshd[6690]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:50.209591 systemd[1]: sshd@29-188.245.175.223:22-139.178.68.195:47042.service: Deactivated successfully.
Oct 9 01:16:50.212003 systemd[1]: session-30.scope: Deactivated successfully.
Oct 9 01:16:50.212780 systemd-logind[1482]: Session 30 logged out. Waiting for processes to exit.
Oct 9 01:16:50.213789 systemd-logind[1482]: Removed session 30.
Oct 9 01:16:55.403284 systemd[1]: Started sshd@30-188.245.175.223:22-139.178.68.195:55160.service - OpenSSH per-connection server daemon (139.178.68.195:55160).
Oct 9 01:16:56.495135 sshd[6710]: Accepted publickey for core from 139.178.68.195 port 55160 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:16:56.498657 sshd[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:16:56.506395 systemd-logind[1482]: New session 31 of user core.
Oct 9 01:16:56.510163 systemd[1]: Started session-31.scope - Session 31 of User core.
Oct 9 01:16:57.284609 sshd[6710]: pam_unix(sshd:session): session closed for user core
Oct 9 01:16:57.287538 systemd[1]: sshd@30-188.245.175.223:22-139.178.68.195:55160.service: Deactivated successfully.
Oct 9 01:16:57.289515 systemd[1]: session-31.scope: Deactivated successfully.
Oct 9 01:16:57.290758 systemd-logind[1482]: Session 31 logged out. Waiting for processes to exit.
Oct 9 01:16:57.291904 systemd-logind[1482]: Removed session 31.
Oct 9 01:17:02.480412 systemd[1]: Started sshd@31-188.245.175.223:22-139.178.68.195:59762.service - OpenSSH per-connection server daemon (139.178.68.195:59762).
Oct 9 01:17:03.522019 sshd[6729]: Accepted publickey for core from 139.178.68.195 port 59762 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:03.523559 sshd[6729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:03.527674 systemd-logind[1482]: New session 32 of user core.
Oct 9 01:17:03.534138 systemd[1]: Started session-32.scope - Session 32 of User core.
Oct 9 01:17:04.306745 sshd[6729]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:04.310626 systemd-logind[1482]: Session 32 logged out. Waiting for processes to exit.
Oct 9 01:17:04.311410 systemd[1]: sshd@31-188.245.175.223:22-139.178.68.195:59762.service: Deactivated successfully.
Oct 9 01:17:04.313684 systemd[1]: session-32.scope: Deactivated successfully.
Oct 9 01:17:04.315357 systemd-logind[1482]: Removed session 32.
Oct 9 01:17:09.493947 systemd[1]: Started sshd@32-188.245.175.223:22-139.178.68.195:59768.service - OpenSSH per-connection server daemon (139.178.68.195:59768).
Oct 9 01:17:10.594583 sshd[6766]: Accepted publickey for core from 139.178.68.195 port 59768 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:10.596259 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:10.600717 systemd-logind[1482]: New session 33 of user core.
Oct 9 01:17:10.606183 systemd[1]: Started session-33.scope - Session 33 of User core.
Oct 9 01:17:11.421409 sshd[6766]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:11.425112 systemd-logind[1482]: Session 33 logged out. Waiting for processes to exit.
Oct 9 01:17:11.425772 systemd[1]: sshd@32-188.245.175.223:22-139.178.68.195:59768.service: Deactivated successfully.
Oct 9 01:17:11.427632 systemd[1]: session-33.scope: Deactivated successfully.
Oct 9 01:17:11.428975 systemd-logind[1482]: Removed session 33.
Oct 9 01:17:16.614424 systemd[1]: Started sshd@33-188.245.175.223:22-139.178.68.195:56174.service - OpenSSH per-connection server daemon (139.178.68.195:56174).
Oct 9 01:17:17.709755 sshd[6803]: Accepted publickey for core from 139.178.68.195 port 56174 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:17.711752 sshd[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:17.715946 systemd-logind[1482]: New session 34 of user core.
Oct 9 01:17:17.721180 systemd[1]: Started session-34.scope - Session 34 of User core.
Oct 9 01:17:18.535281 sshd[6803]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:18.539865 systemd[1]: sshd@33-188.245.175.223:22-139.178.68.195:56174.service: Deactivated successfully.
Oct 9 01:17:18.542556 systemd[1]: session-34.scope: Deactivated successfully.
Oct 9 01:17:18.543439 systemd-logind[1482]: Session 34 logged out. Waiting for processes to exit.
Oct 9 01:17:18.544915 systemd-logind[1482]: Removed session 34.
Oct 9 01:17:23.729431 systemd[1]: Started sshd@34-188.245.175.223:22-139.178.68.195:41372.service - OpenSSH per-connection server daemon (139.178.68.195:41372).
Oct 9 01:17:24.802118 sshd[6824]: Accepted publickey for core from 139.178.68.195 port 41372 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:24.804076 sshd[6824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:24.808727 systemd-logind[1482]: New session 35 of user core.
Oct 9 01:17:24.814160 systemd[1]: Started session-35.scope - Session 35 of User core.
Oct 9 01:17:25.615409 sshd[6824]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:25.619364 systemd[1]: sshd@34-188.245.175.223:22-139.178.68.195:41372.service: Deactivated successfully.
Oct 9 01:17:25.621373 systemd[1]: session-35.scope: Deactivated successfully.
Oct 9 01:17:25.622017 systemd-logind[1482]: Session 35 logged out. Waiting for processes to exit.
Oct 9 01:17:25.623218 systemd-logind[1482]: Removed session 35.
Oct 9 01:17:30.787275 systemd[1]: Started sshd@35-188.245.175.223:22-139.178.68.195:37102.service - OpenSSH per-connection server daemon (139.178.68.195:37102).
Oct 9 01:17:31.822579 sshd[6838]: Accepted publickey for core from 139.178.68.195 port 37102 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:31.824487 sshd[6838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:31.829283 systemd-logind[1482]: New session 36 of user core.
Oct 9 01:17:31.835198 systemd[1]: Started session-36.scope - Session 36 of User core.
Oct 9 01:17:32.667834 sshd[6838]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:32.675812 systemd[1]: sshd@35-188.245.175.223:22-139.178.68.195:37102.service: Deactivated successfully.
Oct 9 01:17:32.678104 systemd[1]: session-36.scope: Deactivated successfully.
Oct 9 01:17:32.678974 systemd-logind[1482]: Session 36 logged out. Waiting for processes to exit.
Oct 9 01:17:32.680750 systemd-logind[1482]: Removed session 36.
Oct 9 01:17:34.980152 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.UKUaBo.mount: Deactivated successfully.
Oct 9 01:17:37.858269 systemd[1]: Started sshd@36-188.245.175.223:22-139.178.68.195:37112.service - OpenSSH per-connection server daemon (139.178.68.195:37112).
Oct 9 01:17:38.958376 sshd[6897]: Accepted publickey for core from 139.178.68.195 port 37112 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:38.961320 sshd[6897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:38.965854 systemd-logind[1482]: New session 37 of user core.
Oct 9 01:17:38.969211 systemd[1]: Started session-37.scope - Session 37 of User core.
Oct 9 01:17:39.772017 sshd[6897]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:39.775076 systemd[1]: sshd@36-188.245.175.223:22-139.178.68.195:37112.service: Deactivated successfully.
Oct 9 01:17:39.777251 systemd[1]: session-37.scope: Deactivated successfully.
Oct 9 01:17:39.779232 systemd-logind[1482]: Session 37 logged out. Waiting for processes to exit.
Oct 9 01:17:39.780401 systemd-logind[1482]: Removed session 37.
Oct 9 01:17:44.951051 systemd[1]: Started sshd@37-188.245.175.223:22-139.178.68.195:55332.service - OpenSSH per-connection server daemon (139.178.68.195:55332).
Oct 9 01:17:46.020132 sshd[6937]: Accepted publickey for core from 139.178.68.195 port 55332 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:46.022121 sshd[6937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:46.027460 systemd-logind[1482]: New session 38 of user core.
Oct 9 01:17:46.034196 systemd[1]: Started session-38.scope - Session 38 of User core.
Oct 9 01:17:46.818762 sshd[6937]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:46.821464 systemd[1]: sshd@37-188.245.175.223:22-139.178.68.195:55332.service: Deactivated successfully.
Oct 9 01:17:46.823442 systemd[1]: session-38.scope: Deactivated successfully.
Oct 9 01:17:46.824670 systemd-logind[1482]: Session 38 logged out. Waiting for processes to exit.
Oct 9 01:17:46.826048 systemd-logind[1482]: Removed session 38.
Oct 9 01:17:52.017273 systemd[1]: Started sshd@38-188.245.175.223:22-139.178.68.195:57274.service - OpenSSH per-connection server daemon (139.178.68.195:57274).
Oct 9 01:17:53.153255 sshd[6952]: Accepted publickey for core from 139.178.68.195 port 57274 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:17:53.155296 sshd[6952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:17:53.163310 systemd-logind[1482]: New session 39 of user core.
Oct 9 01:17:53.170188 systemd[1]: Started session-39.scope - Session 39 of User core.
Oct 9 01:17:53.990016 sshd[6952]: pam_unix(sshd:session): session closed for user core
Oct 9 01:17:53.993843 systemd[1]: sshd@38-188.245.175.223:22-139.178.68.195:57274.service: Deactivated successfully.
Oct 9 01:17:53.996432 systemd[1]: session-39.scope: Deactivated successfully.
Oct 9 01:17:53.997218 systemd-logind[1482]: Session 39 logged out. Waiting for processes to exit.
Oct 9 01:17:53.998321 systemd-logind[1482]: Removed session 39.
Oct 9 01:17:59.178291 systemd[1]: Started sshd@39-188.245.175.223:22-139.178.68.195:57290.service - OpenSSH per-connection server daemon (139.178.68.195:57290).
Oct 9 01:18:00.687155 sshd[6975]: Accepted publickey for core from 139.178.68.195 port 57290 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:00.689082 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:00.695374 systemd-logind[1482]: New session 40 of user core.
Oct 9 01:18:00.701193 systemd[1]: Started session-40.scope - Session 40 of User core.
Oct 9 01:18:01.444705 sshd[6975]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:01.449536 systemd[1]: sshd@39-188.245.175.223:22-139.178.68.195:57290.service: Deactivated successfully.
Oct 9 01:18:01.451447 systemd[1]: session-40.scope: Deactivated successfully.
Oct 9 01:18:01.452058 systemd-logind[1482]: Session 40 logged out. Waiting for processes to exit.
Oct 9 01:18:01.452949 systemd-logind[1482]: Removed session 40.
Oct 9 01:18:06.635732 systemd[1]: Started sshd@40-188.245.175.223:22-139.178.68.195:58534.service - OpenSSH per-connection server daemon (139.178.68.195:58534).
Oct 9 01:18:07.695408 sshd[7031]: Accepted publickey for core from 139.178.68.195 port 58534 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:07.698155 sshd[7031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:07.702944 systemd-logind[1482]: New session 41 of user core.
Oct 9 01:18:07.707173 systemd[1]: Started session-41.scope - Session 41 of User core.
Oct 9 01:18:08.504203 sshd[7031]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:08.507492 systemd[1]: sshd@40-188.245.175.223:22-139.178.68.195:58534.service: Deactivated successfully.
Oct 9 01:18:08.509839 systemd[1]: session-41.scope: Deactivated successfully.
Oct 9 01:18:08.511368 systemd-logind[1482]: Session 41 logged out. Waiting for processes to exit.
Oct 9 01:18:08.512708 systemd-logind[1482]: Removed session 41.
Oct 9 01:18:13.690731 systemd[1]: Started sshd@41-188.245.175.223:22-139.178.68.195:48616.service - OpenSSH per-connection server daemon (139.178.68.195:48616).
Oct 9 01:18:14.670531 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.bIatWr.mount: Deactivated successfully.
Oct 9 01:18:14.738488 sshd[7047]: Accepted publickey for core from 139.178.68.195 port 48616 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:14.739917 sshd[7047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:14.743920 systemd-logind[1482]: New session 42 of user core.
Oct 9 01:18:14.748149 systemd[1]: Started session-42.scope - Session 42 of User core.
Oct 9 01:18:15.582117 sshd[7047]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:15.588616 systemd-logind[1482]: Session 42 logged out. Waiting for processes to exit.
Oct 9 01:18:15.589334 systemd[1]: sshd@41-188.245.175.223:22-139.178.68.195:48616.service: Deactivated successfully.
Oct 9 01:18:15.591568 systemd[1]: session-42.scope: Deactivated successfully.
Oct 9 01:18:15.592543 systemd-logind[1482]: Removed session 42.
Oct 9 01:18:20.768737 systemd[1]: Started sshd@42-188.245.175.223:22-139.178.68.195:48622.service - OpenSSH per-connection server daemon (139.178.68.195:48622).
Oct 9 01:18:21.867760 sshd[7086]: Accepted publickey for core from 139.178.68.195 port 48622 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:21.870512 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:21.876417 systemd-logind[1482]: New session 43 of user core.
Oct 9 01:18:21.880169 systemd[1]: Started session-43.scope - Session 43 of User core.
Oct 9 01:18:22.675593 sshd[7086]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:22.680144 systemd[1]: sshd@42-188.245.175.223:22-139.178.68.195:48622.service: Deactivated successfully.
Oct 9 01:18:22.683518 systemd[1]: session-43.scope: Deactivated successfully.
Oct 9 01:18:22.685290 systemd-logind[1482]: Session 43 logged out. Waiting for processes to exit.
Oct 9 01:18:22.686603 systemd-logind[1482]: Removed session 43.
Oct 9 01:18:22.872311 systemd[1]: Started sshd@43-188.245.175.223:22-139.178.68.195:50356.service - OpenSSH per-connection server daemon (139.178.68.195:50356).
Oct 9 01:18:23.913467 sshd[7100]: Accepted publickey for core from 139.178.68.195 port 50356 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:23.915902 sshd[7100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:23.921206 systemd-logind[1482]: New session 44 of user core.
Oct 9 01:18:23.927226 systemd[1]: Started session-44.scope - Session 44 of User core.
Oct 9 01:18:24.717289 sshd[7100]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:24.724853 systemd[1]: sshd@43-188.245.175.223:22-139.178.68.195:50356.service: Deactivated successfully.
Oct 9 01:18:24.728612 systemd[1]: session-44.scope: Deactivated successfully.
Oct 9 01:18:24.731121 systemd-logind[1482]: Session 44 logged out. Waiting for processes to exit.
Oct 9 01:18:24.732619 systemd-logind[1482]: Removed session 44.
Oct 9 01:18:24.901953 systemd[1]: Started sshd@44-188.245.175.223:22-139.178.68.195:50368.service - OpenSSH per-connection server daemon (139.178.68.195:50368).
Oct 9 01:18:26.009495 sshd[7113]: Accepted publickey for core from 139.178.68.195 port 50368 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:26.012235 sshd[7113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:26.017988 systemd-logind[1482]: New session 45 of user core.
Oct 9 01:18:26.023201 systemd[1]: Started session-45.scope - Session 45 of User core.
Oct 9 01:18:26.807732 sshd[7113]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:26.811895 systemd[1]: sshd@44-188.245.175.223:22-139.178.68.195:50368.service: Deactivated successfully.
Oct 9 01:18:26.815423 systemd[1]: session-45.scope: Deactivated successfully.
Oct 9 01:18:26.817101 systemd-logind[1482]: Session 45 logged out. Waiting for processes to exit.
Oct 9 01:18:26.818405 systemd-logind[1482]: Removed session 45.
Oct 9 01:18:32.007787 systemd[1]: Started sshd@45-188.245.175.223:22-139.178.68.195:51314.service - OpenSSH per-connection server daemon (139.178.68.195:51314).
Oct 9 01:18:33.045932 sshd[7130]: Accepted publickey for core from 139.178.68.195 port 51314 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:33.048700 sshd[7130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:33.055985 systemd-logind[1482]: New session 46 of user core.
Oct 9 01:18:33.062273 systemd[1]: Started session-46.scope - Session 46 of User core.
Oct 9 01:18:33.842523 sshd[7130]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:33.848425 systemd[1]: sshd@45-188.245.175.223:22-139.178.68.195:51314.service: Deactivated successfully.
Oct 9 01:18:33.852457 systemd[1]: session-46.scope: Deactivated successfully.
Oct 9 01:18:33.853713 systemd-logind[1482]: Session 46 logged out. Waiting for processes to exit.
Oct 9 01:18:33.855549 systemd-logind[1482]: Removed session 46.
Oct 9 01:18:35.024010 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.rWNdhO.mount: Deactivated successfully.
Oct 9 01:18:39.018293 systemd[1]: Started sshd@46-188.245.175.223:22-139.178.68.195:51322.service - OpenSSH per-connection server daemon (139.178.68.195:51322).
Oct 9 01:18:40.021636 sshd[7197]: Accepted publickey for core from 139.178.68.195 port 51322 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:40.023430 sshd[7197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:40.028364 systemd-logind[1482]: New session 47 of user core.
Oct 9 01:18:40.033171 systemd[1]: Started session-47.scope - Session 47 of User core.
Oct 9 01:18:40.778648 sshd[7197]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:40.782313 systemd-logind[1482]: Session 47 logged out. Waiting for processes to exit.
Oct 9 01:18:40.782566 systemd[1]: sshd@46-188.245.175.223:22-139.178.68.195:51322.service: Deactivated successfully.
Oct 9 01:18:40.784581 systemd[1]: session-47.scope: Deactivated successfully.
Oct 9 01:18:40.785550 systemd-logind[1482]: Removed session 47.
Oct 9 01:18:45.956351 systemd[1]: Started sshd@47-188.245.175.223:22-139.178.68.195:53414.service - OpenSSH per-connection server daemon (139.178.68.195:53414).
Oct 9 01:18:46.952046 sshd[7230]: Accepted publickey for core from 139.178.68.195 port 53414 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:46.953762 sshd[7230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:46.959084 systemd-logind[1482]: New session 48 of user core.
Oct 9 01:18:46.965148 systemd[1]: Started session-48.scope - Session 48 of User core.
Oct 9 01:18:47.740623 sshd[7230]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:47.744797 systemd[1]: sshd@47-188.245.175.223:22-139.178.68.195:53414.service: Deactivated successfully.
Oct 9 01:18:47.746653 systemd[1]: session-48.scope: Deactivated successfully.
Oct 9 01:18:47.747582 systemd-logind[1482]: Session 48 logged out. Waiting for processes to exit.
Oct 9 01:18:47.748826 systemd-logind[1482]: Removed session 48.
Oct 9 01:18:52.938672 systemd[1]: Started sshd@48-188.245.175.223:22-139.178.68.195:43494.service - OpenSSH per-connection server daemon (139.178.68.195:43494).
Oct 9 01:18:54.002801 sshd[7248]: Accepted publickey for core from 139.178.68.195 port 43494 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:18:54.004549 sshd[7248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:18:54.009428 systemd-logind[1482]: New session 49 of user core.
Oct 9 01:18:54.018235 systemd[1]: Started session-49.scope - Session 49 of User core.
Oct 9 01:18:54.818636 sshd[7248]: pam_unix(sshd:session): session closed for user core
Oct 9 01:18:54.823187 systemd-logind[1482]: Session 49 logged out. Waiting for processes to exit.
Oct 9 01:18:54.825715 systemd[1]: sshd@48-188.245.175.223:22-139.178.68.195:43494.service: Deactivated successfully.
Oct 9 01:18:54.829015 systemd[1]: session-49.scope: Deactivated successfully.
Oct 9 01:18:54.830425 systemd-logind[1482]: Removed session 49.
Oct 9 01:19:00.008247 systemd[1]: Started sshd@49-188.245.175.223:22-139.178.68.195:43506.service - OpenSSH per-connection server daemon (139.178.68.195:43506).
Oct 9 01:19:01.104742 sshd[7268]: Accepted publickey for core from 139.178.68.195 port 43506 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:01.106472 sshd[7268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:01.111095 systemd-logind[1482]: New session 50 of user core.
Oct 9 01:19:01.116197 systemd[1]: Started session-50.scope - Session 50 of User core.
Oct 9 01:19:01.957881 sshd[7268]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:01.962350 systemd-logind[1482]: Session 50 logged out. Waiting for processes to exit.
Oct 9 01:19:01.963207 systemd[1]: sshd@49-188.245.175.223:22-139.178.68.195:43506.service: Deactivated successfully.
Oct 9 01:19:01.965204 systemd[1]: session-50.scope: Deactivated successfully.
Oct 9 01:19:01.966412 systemd-logind[1482]: Removed session 50.
Oct 9 01:19:07.131926 systemd[1]: Started sshd@50-188.245.175.223:22-139.178.68.195:40060.service - OpenSSH per-connection server daemon (139.178.68.195:40060).
Oct 9 01:19:08.122554 sshd[7307]: Accepted publickey for core from 139.178.68.195 port 40060 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:08.125588 sshd[7307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:08.130877 systemd-logind[1482]: New session 51 of user core.
Oct 9 01:19:08.137183 systemd[1]: Started session-51.scope - Session 51 of User core.
Oct 9 01:19:08.904249 sshd[7307]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:08.908639 systemd[1]: sshd@50-188.245.175.223:22-139.178.68.195:40060.service: Deactivated successfully.
Oct 9 01:19:08.911047 systemd[1]: session-51.scope: Deactivated successfully.
Oct 9 01:19:08.911787 systemd-logind[1482]: Session 51 logged out. Waiting for processes to exit.
Oct 9 01:19:08.912910 systemd-logind[1482]: Removed session 51.
Oct 9 01:19:14.090550 systemd[1]: Started sshd@51-188.245.175.223:22-139.178.68.195:58440.service - OpenSSH per-connection server daemon (139.178.68.195:58440).
Oct 9 01:19:15.226365 sshd[7325]: Accepted publickey for core from 139.178.68.195 port 58440 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:15.230395 sshd[7325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:15.235379 systemd-logind[1482]: New session 52 of user core.
Oct 9 01:19:15.240204 systemd[1]: Started session-52.scope - Session 52 of User core.
Oct 9 01:19:16.083904 sshd[7325]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:16.087007 systemd[1]: sshd@51-188.245.175.223:22-139.178.68.195:58440.service: Deactivated successfully.
Oct 9 01:19:16.089523 systemd[1]: session-52.scope: Deactivated successfully.
Oct 9 01:19:16.090987 systemd-logind[1482]: Session 52 logged out. Waiting for processes to exit.
Oct 9 01:19:16.093113 systemd-logind[1482]: Removed session 52.
Oct 9 01:19:21.267573 systemd[1]: Started sshd@52-188.245.175.223:22-139.178.68.195:33490.service - OpenSSH per-connection server daemon (139.178.68.195:33490).
Oct 9 01:19:22.296598 sshd[7362]: Accepted publickey for core from 139.178.68.195 port 33490 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:22.298136 sshd[7362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:22.302082 systemd-logind[1482]: New session 53 of user core.
Oct 9 01:19:22.306148 systemd[1]: Started session-53.scope - Session 53 of User core.
Oct 9 01:19:23.060872 sshd[7362]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:23.069411 systemd-logind[1482]: Session 53 logged out. Waiting for processes to exit.
Oct 9 01:19:23.070096 systemd[1]: sshd@52-188.245.175.223:22-139.178.68.195:33490.service: Deactivated successfully.
Oct 9 01:19:23.072612 systemd[1]: session-53.scope: Deactivated successfully.
Oct 9 01:19:23.073927 systemd-logind[1482]: Removed session 53.
Oct 9 01:19:28.241808 systemd[1]: Started sshd@53-188.245.175.223:22-139.178.68.195:33498.service - OpenSSH per-connection server daemon (139.178.68.195:33498).
Oct 9 01:19:29.312586 sshd[7381]: Accepted publickey for core from 139.178.68.195 port 33498 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:29.314340 sshd[7381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:29.321622 systemd-logind[1482]: New session 54 of user core.
Oct 9 01:19:29.327227 systemd[1]: Started session-54.scope - Session 54 of User core.
Oct 9 01:19:30.114624 sshd[7381]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:30.118702 systemd[1]: sshd@53-188.245.175.223:22-139.178.68.195:33498.service: Deactivated successfully.
Oct 9 01:19:30.121319 systemd[1]: session-54.scope: Deactivated successfully.
Oct 9 01:19:30.122850 systemd-logind[1482]: Session 54 logged out. Waiting for processes to exit.
Oct 9 01:19:30.123901 systemd-logind[1482]: Removed session 54.
Oct 9 01:19:35.300286 systemd[1]: Started sshd@54-188.245.175.223:22-139.178.68.195:34994.service - OpenSSH per-connection server daemon (139.178.68.195:34994).
Oct 9 01:19:36.338405 sshd[7434]: Accepted publickey for core from 139.178.68.195 port 34994 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:36.340373 sshd[7434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:36.345227 systemd-logind[1482]: New session 55 of user core.
Oct 9 01:19:36.353160 systemd[1]: Started session-55.scope - Session 55 of User core.
Oct 9 01:19:37.115283 sshd[7434]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:37.118547 systemd[1]: sshd@54-188.245.175.223:22-139.178.68.195:34994.service: Deactivated successfully.
Oct 9 01:19:37.121145 systemd[1]: session-55.scope: Deactivated successfully.
Oct 9 01:19:37.123326 systemd-logind[1482]: Session 55 logged out. Waiting for processes to exit.
Oct 9 01:19:37.124975 systemd-logind[1482]: Removed session 55.
Oct 9 01:19:42.298394 systemd[1]: Started sshd@55-188.245.175.223:22-139.178.68.195:48162.service - OpenSSH per-connection server daemon (139.178.68.195:48162).
Oct 9 01:19:43.392092 sshd[7466]: Accepted publickey for core from 139.178.68.195 port 48162 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:43.393961 sshd[7466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:43.399914 systemd-logind[1482]: New session 56 of user core.
Oct 9 01:19:43.403380 systemd[1]: Started session-56.scope - Session 56 of User core.
Oct 9 01:19:44.269610 sshd[7466]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:44.274136 systemd-logind[1482]: Session 56 logged out. Waiting for processes to exit.
Oct 9 01:19:44.274895 systemd[1]: sshd@55-188.245.175.223:22-139.178.68.195:48162.service: Deactivated successfully.
Oct 9 01:19:44.277193 systemd[1]: session-56.scope: Deactivated successfully.
Oct 9 01:19:44.278594 systemd-logind[1482]: Removed session 56.
Oct 9 01:19:49.455218 systemd[1]: Started sshd@56-188.245.175.223:22-139.178.68.195:48166.service - OpenSSH per-connection server daemon (139.178.68.195:48166).
Oct 9 01:19:50.533240 sshd[7505]: Accepted publickey for core from 139.178.68.195 port 48166 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:50.534863 sshd[7505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:50.539601 systemd-logind[1482]: New session 57 of user core.
Oct 9 01:19:50.544172 systemd[1]: Started session-57.scope - Session 57 of User core.
Oct 9 01:19:51.337070 sshd[7505]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:51.340620 systemd[1]: sshd@56-188.245.175.223:22-139.178.68.195:48166.service: Deactivated successfully.
Oct 9 01:19:51.342471 systemd[1]: session-57.scope: Deactivated successfully.
Oct 9 01:19:51.343193 systemd-logind[1482]: Session 57 logged out. Waiting for processes to exit.
Oct 9 01:19:51.344290 systemd-logind[1482]: Removed session 57.
Oct 9 01:19:56.527635 systemd[1]: Started sshd@57-188.245.175.223:22-139.178.68.195:42552.service - OpenSSH per-connection server daemon (139.178.68.195:42552).
Oct 9 01:19:57.623761 sshd[7518]: Accepted publickey for core from 139.178.68.195 port 42552 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:19:57.626712 sshd[7518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:19:57.635112 systemd-logind[1482]: New session 58 of user core.
Oct 9 01:19:57.645261 systemd[1]: Started session-58.scope - Session 58 of User core.
Oct 9 01:19:58.444416 sshd[7518]: pam_unix(sshd:session): session closed for user core
Oct 9 01:19:58.447587 systemd[1]: sshd@57-188.245.175.223:22-139.178.68.195:42552.service: Deactivated successfully.
Oct 9 01:19:58.449745 systemd[1]: session-58.scope: Deactivated successfully.
Oct 9 01:19:58.451381 systemd-logind[1482]: Session 58 logged out. Waiting for processes to exit.
Oct 9 01:19:58.452695 systemd-logind[1482]: Removed session 58.
Oct 9 01:20:03.638273 systemd[1]: Started sshd@58-188.245.175.223:22-139.178.68.195:60904.service - OpenSSH per-connection server daemon (139.178.68.195:60904).
Oct 9 01:20:04.734136 sshd[7537]: Accepted publickey for core from 139.178.68.195 port 60904 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:04.735769 sshd[7537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:04.740802 systemd-logind[1482]: New session 59 of user core.
Oct 9 01:20:04.745165 systemd[1]: Started session-59.scope - Session 59 of User core.
Oct 9 01:20:04.991067 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.iuCU9r.mount: Deactivated successfully.
Oct 9 01:20:05.601695 sshd[7537]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:05.609002 systemd[1]: sshd@58-188.245.175.223:22-139.178.68.195:60904.service: Deactivated successfully.
Oct 9 01:20:05.611599 systemd[1]: session-59.scope: Deactivated successfully.
Oct 9 01:20:05.612571 systemd-logind[1482]: Session 59 logged out. Waiting for processes to exit.
Oct 9 01:20:05.613713 systemd-logind[1482]: Removed session 59.
Oct 9 01:20:10.796478 systemd[1]: Started sshd@59-188.245.175.223:22-139.178.68.195:60472.service - OpenSSH per-connection server daemon (139.178.68.195:60472).
Oct 9 01:20:11.836667 sshd[7580]: Accepted publickey for core from 139.178.68.195 port 60472 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:11.839377 sshd[7580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:11.844198 systemd-logind[1482]: New session 60 of user core.
Oct 9 01:20:11.849179 systemd[1]: Started session-60.scope - Session 60 of User core.
Oct 9 01:20:12.580779 sshd[7580]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:12.587092 systemd[1]: sshd@59-188.245.175.223:22-139.178.68.195:60472.service: Deactivated successfully.
Oct 9 01:20:12.590747 systemd[1]: session-60.scope: Deactivated successfully.
Oct 9 01:20:12.592520 systemd-logind[1482]: Session 60 logged out. Waiting for processes to exit.
Oct 9 01:20:12.594590 systemd-logind[1482]: Removed session 60.
Oct 9 01:20:17.774297 systemd[1]: Started sshd@60-188.245.175.223:22-139.178.68.195:60480.service - OpenSSH per-connection server daemon (139.178.68.195:60480).
Oct 9 01:20:18.854929 sshd[7612]: Accepted publickey for core from 139.178.68.195 port 60480 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:18.857368 sshd[7612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:18.864695 systemd-logind[1482]: New session 61 of user core.
Oct 9 01:20:18.870302 systemd[1]: Started session-61.scope - Session 61 of User core.
Oct 9 01:20:19.674637 sshd[7612]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:19.679350 systemd[1]: sshd@60-188.245.175.223:22-139.178.68.195:60480.service: Deactivated successfully.
Oct 9 01:20:19.682775 systemd[1]: session-61.scope: Deactivated successfully.
Oct 9 01:20:19.683662 systemd-logind[1482]: Session 61 logged out. Waiting for processes to exit.
Oct 9 01:20:19.685079 systemd-logind[1482]: Removed session 61.
Oct 9 01:20:24.867253 systemd[1]: Started sshd@61-188.245.175.223:22-139.178.68.195:54038.service - OpenSSH per-connection server daemon (139.178.68.195:54038).
Oct 9 01:20:25.977261 sshd[7633]: Accepted publickey for core from 139.178.68.195 port 54038 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:25.979119 sshd[7633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:25.984820 systemd-logind[1482]: New session 62 of user core.
Oct 9 01:20:25.989203 systemd[1]: Started session-62.scope - Session 62 of User core.
Oct 9 01:20:26.768622 sshd[7633]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:26.773344 systemd[1]: sshd@61-188.245.175.223:22-139.178.68.195:54038.service: Deactivated successfully.
Oct 9 01:20:26.776503 systemd[1]: session-62.scope: Deactivated successfully.
Oct 9 01:20:26.777581 systemd-logind[1482]: Session 62 logged out. Waiting for processes to exit.
Oct 9 01:20:26.778800 systemd-logind[1482]: Removed session 62.
Oct 9 01:20:31.970718 systemd[1]: Started sshd@62-188.245.175.223:22-139.178.68.195:54530.service - OpenSSH per-connection server daemon (139.178.68.195:54530).
Oct 9 01:20:33.056619 sshd[7651]: Accepted publickey for core from 139.178.68.195 port 54530 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:33.058882 sshd[7651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:33.063885 systemd-logind[1482]: New session 63 of user core.
Oct 9 01:20:33.072164 systemd[1]: Started session-63.scope - Session 63 of User core.
Oct 9 01:20:33.877713 sshd[7651]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:33.882453 systemd-logind[1482]: Session 63 logged out. Waiting for processes to exit.
Oct 9 01:20:33.883437 systemd[1]: sshd@62-188.245.175.223:22-139.178.68.195:54530.service: Deactivated successfully.
Oct 9 01:20:33.885936 systemd[1]: session-63.scope: Deactivated successfully.
Oct 9 01:20:33.887403 systemd-logind[1482]: Removed session 63.
Oct 9 01:20:39.063722 systemd[1]: Started sshd@63-188.245.175.223:22-139.178.68.195:54534.service - OpenSSH per-connection server daemon (139.178.68.195:54534).
Oct 9 01:20:40.211269 sshd[7707]: Accepted publickey for core from 139.178.68.195 port 54534 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:40.214315 sshd[7707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:40.219926 systemd-logind[1482]: New session 64 of user core.
Oct 9 01:20:40.227204 systemd[1]: Started session-64.scope - Session 64 of User core.
Oct 9 01:20:41.067159 sshd[7707]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:41.070370 systemd[1]: sshd@63-188.245.175.223:22-139.178.68.195:54534.service: Deactivated successfully.
Oct 9 01:20:41.072772 systemd[1]: session-64.scope: Deactivated successfully.
Oct 9 01:20:41.074841 systemd-logind[1482]: Session 64 logged out. Waiting for processes to exit.
Oct 9 01:20:41.075932 systemd-logind[1482]: Removed session 64.
Oct 9 01:20:46.257370 systemd[1]: Started sshd@64-188.245.175.223:22-139.178.68.195:35948.service - OpenSSH per-connection server daemon (139.178.68.195:35948).
Oct 9 01:20:47.370280 sshd[7745]: Accepted publickey for core from 139.178.68.195 port 35948 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:47.373588 sshd[7745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:47.378348 systemd-logind[1482]: New session 65 of user core.
Oct 9 01:20:47.385148 systemd[1]: Started session-65.scope - Session 65 of User core.
Oct 9 01:20:48.238547 sshd[7745]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:48.241837 systemd-logind[1482]: Session 65 logged out. Waiting for processes to exit.
Oct 9 01:20:48.242555 systemd[1]: sshd@64-188.245.175.223:22-139.178.68.195:35948.service: Deactivated successfully.
Oct 9 01:20:48.244382 systemd[1]: session-65.scope: Deactivated successfully.
Oct 9 01:20:48.245747 systemd-logind[1482]: Removed session 65.
Oct 9 01:20:53.431308 systemd[1]: Started sshd@65-188.245.175.223:22-139.178.68.195:46748.service - OpenSSH per-connection server daemon (139.178.68.195:46748).
Oct 9 01:20:54.535701 sshd[7769]: Accepted publickey for core from 139.178.68.195 port 46748 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:20:54.537538 sshd[7769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:20:54.542205 systemd-logind[1482]: New session 66 of user core.
Oct 9 01:20:54.546227 systemd[1]: Started session-66.scope - Session 66 of User core.
Oct 9 01:20:55.343297 sshd[7769]: pam_unix(sshd:session): session closed for user core
Oct 9 01:20:55.347091 systemd-logind[1482]: Session 66 logged out. Waiting for processes to exit.
Oct 9 01:20:55.347727 systemd[1]: sshd@65-188.245.175.223:22-139.178.68.195:46748.service: Deactivated successfully.
Oct 9 01:20:55.349786 systemd[1]: session-66.scope: Deactivated successfully.
Oct 9 01:20:55.350938 systemd-logind[1482]: Removed session 66.
Oct 9 01:21:00.538268 systemd[1]: Started sshd@66-188.245.175.223:22-139.178.68.195:46758.service - OpenSSH per-connection server daemon (139.178.68.195:46758).
Oct 9 01:21:01.642563 sshd[7782]: Accepted publickey for core from 139.178.68.195 port 46758 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:01.644338 sshd[7782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:01.649700 systemd-logind[1482]: New session 67 of user core.
Oct 9 01:21:01.653265 systemd[1]: Started session-67.scope - Session 67 of User core.
Oct 9 01:21:02.471397 sshd[7782]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:02.474811 systemd[1]: sshd@66-188.245.175.223:22-139.178.68.195:46758.service: Deactivated successfully.
Oct 9 01:21:02.477527 systemd[1]: session-67.scope: Deactivated successfully.
Oct 9 01:21:02.479176 systemd-logind[1482]: Session 67 logged out. Waiting for processes to exit.
Oct 9 01:21:02.480772 systemd-logind[1482]: Removed session 67.
Oct 9 01:21:07.673317 systemd[1]: Started sshd@67-188.245.175.223:22-139.178.68.195:55322.service - OpenSSH per-connection server daemon (139.178.68.195:55322).
Oct 9 01:21:08.788803 sshd[7821]: Accepted publickey for core from 139.178.68.195 port 55322 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:08.790302 sshd[7821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:08.793936 systemd-logind[1482]: New session 68 of user core.
Oct 9 01:21:08.800133 systemd[1]: Started session-68.scope - Session 68 of User core.
Oct 9 01:21:09.651681 sshd[7821]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:09.657299 systemd[1]: sshd@67-188.245.175.223:22-139.178.68.195:55322.service: Deactivated successfully.
Oct 9 01:21:09.660252 systemd[1]: session-68.scope: Deactivated successfully.
Oct 9 01:21:09.660964 systemd-logind[1482]: Session 68 logged out. Waiting for processes to exit.
Oct 9 01:21:09.662237 systemd-logind[1482]: Removed session 68.
Oct 9 01:21:14.670827 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.xjqsGZ.mount: Deactivated successfully.
Oct 9 01:21:14.843180 systemd[1]: Started sshd@68-188.245.175.223:22-139.178.68.195:42508.service - OpenSSH per-connection server daemon (139.178.68.195:42508).
Oct 9 01:21:15.844544 sshd[7872]: Accepted publickey for core from 139.178.68.195 port 42508 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:15.847100 sshd[7872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:15.851843 systemd-logind[1482]: New session 69 of user core.
Oct 9 01:21:15.857204 systemd[1]: Started session-69.scope - Session 69 of User core.
Oct 9 01:21:16.604326 sshd[7872]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:16.609943 systemd[1]: sshd@68-188.245.175.223:22-139.178.68.195:42508.service: Deactivated successfully.
Oct 9 01:21:16.612167 systemd[1]: session-69.scope: Deactivated successfully.
Oct 9 01:21:16.613584 systemd-logind[1482]: Session 69 logged out. Waiting for processes to exit.
Oct 9 01:21:16.614731 systemd-logind[1482]: Removed session 69.
Oct 9 01:21:21.788266 systemd[1]: Started sshd@69-188.245.175.223:22-139.178.68.195:59584.service - OpenSSH per-connection server daemon (139.178.68.195:59584).
Oct 9 01:21:22.857086 sshd[7892]: Accepted publickey for core from 139.178.68.195 port 59584 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:22.858681 sshd[7892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:22.864840 systemd-logind[1482]: New session 70 of user core.
Oct 9 01:21:22.869142 systemd[1]: Started session-70.scope - Session 70 of User core.
Oct 9 01:21:23.685365 sshd[7892]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:23.689628 systemd[1]: sshd@69-188.245.175.223:22-139.178.68.195:59584.service: Deactivated successfully.
Oct 9 01:21:23.692803 systemd[1]: session-70.scope: Deactivated successfully.
Oct 9 01:21:23.694869 systemd-logind[1482]: Session 70 logged out. Waiting for processes to exit.
Oct 9 01:21:23.696323 systemd-logind[1482]: Removed session 70.
Oct 9 01:21:28.867970 systemd[1]: Started sshd@70-188.245.175.223:22-139.178.68.195:59596.service - OpenSSH per-connection server daemon (139.178.68.195:59596).
Oct 9 01:21:29.976396 sshd[7907]: Accepted publickey for core from 139.178.68.195 port 59596 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:29.978479 sshd[7907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:29.982950 systemd-logind[1482]: New session 71 of user core.
Oct 9 01:21:29.988171 systemd[1]: Started session-71.scope - Session 71 of User core.
Oct 9 01:21:30.867233 sshd[7907]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:30.871519 systemd[1]: sshd@70-188.245.175.223:22-139.178.68.195:59596.service: Deactivated successfully.
Oct 9 01:21:30.873251 systemd[1]: session-71.scope: Deactivated successfully.
Oct 9 01:21:30.874512 systemd-logind[1482]: Session 71 logged out. Waiting for processes to exit.
Oct 9 01:21:30.876157 systemd-logind[1482]: Removed session 71.
Oct 9 01:21:36.043608 systemd[1]: Started sshd@71-188.245.175.223:22-139.178.68.195:33060.service - OpenSSH per-connection server daemon (139.178.68.195:33060).
Oct 9 01:21:37.161482 sshd[7968]: Accepted publickey for core from 139.178.68.195 port 33060 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:37.164457 sshd[7968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:37.170458 systemd-logind[1482]: New session 72 of user core.
Oct 9 01:21:37.176200 systemd[1]: Started session-72.scope - Session 72 of User core.
Oct 9 01:21:38.089276 sshd[7968]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:38.093437 systemd[1]: sshd@71-188.245.175.223:22-139.178.68.195:33060.service: Deactivated successfully.
Oct 9 01:21:38.095555 systemd[1]: session-72.scope: Deactivated successfully.
Oct 9 01:21:38.096389 systemd-logind[1482]: Session 72 logged out. Waiting for processes to exit.
Oct 9 01:21:38.097550 systemd-logind[1482]: Removed session 72.
Oct 9 01:21:43.272317 systemd[1]: Started sshd@72-188.245.175.223:22-139.178.68.195:51854.service - OpenSSH per-connection server daemon (139.178.68.195:51854).
Oct 9 01:21:44.372333 sshd[7993]: Accepted publickey for core from 139.178.68.195 port 51854 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:44.374246 sshd[7993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:44.379354 systemd-logind[1482]: New session 73 of user core.
Oct 9 01:21:44.385390 systemd[1]: Started session-73.scope - Session 73 of User core.
Oct 9 01:21:44.677853 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.HinuiW.mount: Deactivated successfully.
Oct 9 01:21:45.190466 sshd[7993]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:45.194163 systemd[1]: sshd@72-188.245.175.223:22-139.178.68.195:51854.service: Deactivated successfully.
Oct 9 01:21:45.196388 systemd[1]: session-73.scope: Deactivated successfully.
Oct 9 01:21:45.197155 systemd-logind[1482]: Session 73 logged out. Waiting for processes to exit.
Oct 9 01:21:45.198189 systemd-logind[1482]: Removed session 73.
Oct 9 01:21:50.365128 systemd[1]: Started sshd@73-188.245.175.223:22-139.178.68.195:51864.service - OpenSSH per-connection server daemon (139.178.68.195:51864).
Oct 9 01:21:51.454402 sshd[8026]: Accepted publickey for core from 139.178.68.195 port 51864 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:51.456003 sshd[8026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:51.460778 systemd-logind[1482]: New session 74 of user core.
Oct 9 01:21:51.468149 systemd[1]: Started session-74.scope - Session 74 of User core.
Oct 9 01:21:52.293157 sshd[8026]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:52.297284 systemd[1]: sshd@73-188.245.175.223:22-139.178.68.195:51864.service: Deactivated successfully.
Oct 9 01:21:52.299250 systemd[1]: session-74.scope: Deactivated successfully.
Oct 9 01:21:52.299993 systemd-logind[1482]: Session 74 logged out. Waiting for processes to exit.
Oct 9 01:21:52.301010 systemd-logind[1482]: Removed session 74.
Oct 9 01:21:57.481299 systemd[1]: Started sshd@74-188.245.175.223:22-139.178.68.195:52862.service - OpenSSH per-connection server daemon (139.178.68.195:52862).
Oct 9 01:21:58.499583 sshd[8044]: Accepted publickey for core from 139.178.68.195 port 52862 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:21:58.501494 sshd[8044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:21:58.506127 systemd-logind[1482]: New session 75 of user core.
Oct 9 01:21:58.510191 systemd[1]: Started session-75.scope - Session 75 of User core.
Oct 9 01:21:59.287126 sshd[8044]: pam_unix(sshd:session): session closed for user core
Oct 9 01:21:59.290616 systemd[1]: sshd@74-188.245.175.223:22-139.178.68.195:52862.service: Deactivated successfully.
Oct 9 01:21:59.293012 systemd[1]: session-75.scope: Deactivated successfully.
Oct 9 01:21:59.295050 systemd-logind[1482]: Session 75 logged out. Waiting for processes to exit.
Oct 9 01:21:59.296527 systemd-logind[1482]: Removed session 75.
Oct 9 01:22:04.464274 systemd[1]: Started sshd@75-188.245.175.223:22-139.178.68.195:50782.service - OpenSSH per-connection server daemon (139.178.68.195:50782).
Oct 9 01:22:04.996433 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.hFYw5W.mount: Deactivated successfully.
Oct 9 01:22:05.522410 sshd[8062]: Accepted publickey for core from 139.178.68.195 port 50782 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:05.524743 sshd[8062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:05.530164 systemd-logind[1482]: New session 76 of user core.
Oct 9 01:22:05.535490 systemd[1]: Started session-76.scope - Session 76 of User core.
Oct 9 01:22:06.343866 sshd[8062]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:06.347347 systemd-logind[1482]: Session 76 logged out. Waiting for processes to exit.
Oct 9 01:22:06.347671 systemd[1]: sshd@75-188.245.175.223:22-139.178.68.195:50782.service: Deactivated successfully.
Oct 9 01:22:06.349713 systemd[1]: session-76.scope: Deactivated successfully.
Oct 9 01:22:06.350878 systemd-logind[1482]: Removed session 76.
Oct 9 01:22:11.537623 systemd[1]: Started sshd@76-188.245.175.223:22-139.178.68.195:53940.service - OpenSSH per-connection server daemon (139.178.68.195:53940).
Oct 9 01:22:12.709164 sshd[8101]: Accepted publickey for core from 139.178.68.195 port 53940 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:12.714235 sshd[8101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:12.723226 systemd-logind[1482]: New session 77 of user core.
Oct 9 01:22:12.729281 systemd[1]: Started session-77.scope - Session 77 of User core.
Oct 9 01:22:13.698255 sshd[8101]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:13.702444 systemd-logind[1482]: Session 77 logged out. Waiting for processes to exit.
Oct 9 01:22:13.703420 systemd[1]: sshd@76-188.245.175.223:22-139.178.68.195:53940.service: Deactivated successfully.
Oct 9 01:22:13.706847 systemd[1]: session-77.scope: Deactivated successfully.
Oct 9 01:22:13.709934 systemd-logind[1482]: Removed session 77.
Oct 9 01:22:14.689916 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.NQwHf3.mount: Deactivated successfully.
Oct 9 01:22:18.895564 systemd[1]: Started sshd@77-188.245.175.223:22-139.178.68.195:53954.service - OpenSSH per-connection server daemon (139.178.68.195:53954).
Oct 9 01:22:19.968133 sshd[8137]: Accepted publickey for core from 139.178.68.195 port 53954 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:19.970779 sshd[8137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:19.976927 systemd-logind[1482]: New session 78 of user core.
Oct 9 01:22:19.982271 systemd[1]: Started session-78.scope - Session 78 of User core.
Oct 9 01:22:20.715629 sshd[8137]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:20.718626 systemd[1]: sshd@77-188.245.175.223:22-139.178.68.195:53954.service: Deactivated successfully.
Oct 9 01:22:20.720583 systemd[1]: session-78.scope: Deactivated successfully.
Oct 9 01:22:20.721901 systemd-logind[1482]: Session 78 logged out. Waiting for processes to exit.
Oct 9 01:22:20.723422 systemd-logind[1482]: Removed session 78.
Oct 9 01:22:25.907412 systemd[1]: Started sshd@78-188.245.175.223:22-139.178.68.195:51320.service - OpenSSH per-connection server daemon (139.178.68.195:51320).
Oct 9 01:22:26.988923 sshd[8158]: Accepted publickey for core from 139.178.68.195 port 51320 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:26.990545 sshd[8158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:26.994792 systemd-logind[1482]: New session 79 of user core.
Oct 9 01:22:27.000136 systemd[1]: Started session-79.scope - Session 79 of User core.
Oct 9 01:22:27.827782 sshd[8158]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:27.831223 systemd[1]: sshd@78-188.245.175.223:22-139.178.68.195:51320.service: Deactivated successfully.
Oct 9 01:22:27.834544 systemd[1]: session-79.scope: Deactivated successfully.
Oct 9 01:22:27.836332 systemd-logind[1482]: Session 79 logged out. Waiting for processes to exit.
Oct 9 01:22:27.837891 systemd-logind[1482]: Removed session 79.
Oct 9 01:22:33.010321 systemd[1]: Started sshd@79-188.245.175.223:22-139.178.68.195:40666.service - OpenSSH per-connection server daemon (139.178.68.195:40666).
Oct 9 01:22:34.081247 sshd[8171]: Accepted publickey for core from 139.178.68.195 port 40666 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:34.083049 sshd[8171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:34.087556 systemd-logind[1482]: New session 80 of user core.
Oct 9 01:22:34.093158 systemd[1]: Started session-80.scope - Session 80 of User core.
Oct 9 01:22:34.892900 sshd[8171]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:34.897641 systemd[1]: sshd@79-188.245.175.223:22-139.178.68.195:40666.service: Deactivated successfully.
Oct 9 01:22:34.900286 systemd[1]: session-80.scope: Deactivated successfully.
Oct 9 01:22:34.901417 systemd-logind[1482]: Session 80 logged out. Waiting for processes to exit.
Oct 9 01:22:34.902612 systemd-logind[1482]: Removed session 80.
Oct 9 01:22:34.985473 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.PpyWiy.mount: Deactivated successfully.
Oct 9 01:22:35.082328 systemd[1]: Started sshd@80-188.245.175.223:22-139.178.68.195:40670.service - OpenSSH per-connection server daemon (139.178.68.195:40670).
Oct 9 01:22:36.205840 sshd[8222]: Accepted publickey for core from 139.178.68.195 port 40670 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:36.207612 sshd[8222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:36.212089 systemd-logind[1482]: New session 81 of user core.
Oct 9 01:22:36.217214 systemd[1]: Started session-81.scope - Session 81 of User core.
Oct 9 01:22:37.227585 sshd[8222]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:37.230840 systemd-logind[1482]: Session 81 logged out. Waiting for processes to exit.
Oct 9 01:22:37.231452 systemd[1]: sshd@80-188.245.175.223:22-139.178.68.195:40670.service: Deactivated successfully.
Oct 9 01:22:37.233491 systemd[1]: session-81.scope: Deactivated successfully.
Oct 9 01:22:37.234341 systemd-logind[1482]: Removed session 81.
Oct 9 01:22:37.421462 systemd[1]: Started sshd@81-188.245.175.223:22-139.178.68.195:40686.service - OpenSSH per-connection server daemon (139.178.68.195:40686).
Oct 9 01:22:38.523590 sshd[8237]: Accepted publickey for core from 139.178.68.195 port 40686 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:38.525287 sshd[8237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:38.530791 systemd-logind[1482]: New session 82 of user core.
Oct 9 01:22:38.534208 systemd[1]: Started session-82.scope - Session 82 of User core.
Oct 9 01:22:40.882611 sshd[8237]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:40.892376 systemd[1]: sshd@81-188.245.175.223:22-139.178.68.195:40686.service: Deactivated successfully.
Oct 9 01:22:40.894865 systemd[1]: session-82.scope: Deactivated successfully.
Oct 9 01:22:40.897271 systemd-logind[1482]: Session 82 logged out. Waiting for processes to exit.
Oct 9 01:22:40.898744 systemd-logind[1482]: Removed session 82.
Oct 9 01:22:41.063896 systemd[1]: Started sshd@82-188.245.175.223:22-139.178.68.195:33282.service - OpenSSH per-connection server daemon (139.178.68.195:33282).
Oct 9 01:22:42.169421 sshd[8261]: Accepted publickey for core from 139.178.68.195 port 33282 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:42.172270 sshd[8261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:42.177188 systemd-logind[1482]: New session 83 of user core.
Oct 9 01:22:42.183157 systemd[1]: Started session-83.scope - Session 83 of User core.
Oct 9 01:22:43.322192 sshd[8261]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:43.325243 systemd[1]: sshd@82-188.245.175.223:22-139.178.68.195:33282.service: Deactivated successfully.
Oct 9 01:22:43.327676 systemd[1]: session-83.scope: Deactivated successfully.
Oct 9 01:22:43.329282 systemd-logind[1482]: Session 83 logged out. Waiting for processes to exit.
Oct 9 01:22:43.331269 systemd-logind[1482]: Removed session 83.
Oct 9 01:22:43.509312 systemd[1]: Started sshd@83-188.245.175.223:22-139.178.68.195:33296.service - OpenSSH per-connection server daemon (139.178.68.195:33296).
Oct 9 01:22:43.519279 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Oct 9 01:22:43.555830 systemd-tmpfiles[8272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:22:43.556709 systemd-tmpfiles[8272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:22:43.557609 systemd-tmpfiles[8272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:22:43.557864 systemd-tmpfiles[8272]: ACLs are not supported, ignoring.
Oct 9 01:22:43.557934 systemd-tmpfiles[8272]: ACLs are not supported, ignoring.
Oct 9 01:22:43.561838 systemd-tmpfiles[8272]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:22:43.561849 systemd-tmpfiles[8272]: Skipping /boot
Oct 9 01:22:43.570624 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 9 01:22:43.570990 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Oct 9 01:22:44.573075 sshd[8271]: Accepted publickey for core from 139.178.68.195 port 33296 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:44.575017 sshd[8271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:44.580327 systemd-logind[1482]: New session 84 of user core.
Oct 9 01:22:44.585183 systemd[1]: Started session-84.scope - Session 84 of User core.
Oct 9 01:22:44.688315 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.byPcQw.mount: Deactivated successfully.
Oct 9 01:22:45.416602 sshd[8271]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:45.420148 systemd[1]: sshd@83-188.245.175.223:22-139.178.68.195:33296.service: Deactivated successfully.
Oct 9 01:22:45.422732 systemd[1]: session-84.scope: Deactivated successfully.
Oct 9 01:22:45.424929 systemd-logind[1482]: Session 84 logged out. Waiting for processes to exit.
Oct 9 01:22:45.426871 systemd-logind[1482]: Removed session 84.
Oct 9 01:22:50.619479 systemd[1]: Started sshd@84-188.245.175.223:22-139.178.68.195:33306.service - OpenSSH per-connection server daemon (139.178.68.195:33306).
Oct 9 01:22:51.728836 sshd[8323]: Accepted publickey for core from 139.178.68.195 port 33306 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:51.730884 sshd[8323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:51.735867 systemd-logind[1482]: New session 85 of user core.
Oct 9 01:22:51.742270 systemd[1]: Started session-85.scope - Session 85 of User core.
Oct 9 01:22:52.506690 sshd[8323]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:52.510720 systemd[1]: sshd@84-188.245.175.223:22-139.178.68.195:33306.service: Deactivated successfully.
Oct 9 01:22:52.512924 systemd[1]: session-85.scope: Deactivated successfully.
Oct 9 01:22:52.513992 systemd-logind[1482]: Session 85 logged out. Waiting for processes to exit.
Oct 9 01:22:52.515170 systemd-logind[1482]: Removed session 85.
Oct 9 01:22:57.686290 systemd[1]: Started sshd@85-188.245.175.223:22-139.178.68.195:53962.service - OpenSSH per-connection server daemon (139.178.68.195:53962).
Oct 9 01:22:58.670264 sshd[8341]: Accepted publickey for core from 139.178.68.195 port 53962 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:22:58.672053 sshd[8341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:22:58.677265 systemd-logind[1482]: New session 86 of user core.
Oct 9 01:22:58.683214 systemd[1]: Started session-86.scope - Session 86 of User core.
Oct 9 01:22:59.424070 sshd[8341]: pam_unix(sshd:session): session closed for user core
Oct 9 01:22:59.428517 systemd[1]: sshd@85-188.245.175.223:22-139.178.68.195:53962.service: Deactivated successfully.
Oct 9 01:22:59.430952 systemd[1]: session-86.scope: Deactivated successfully.
Oct 9 01:22:59.431844 systemd-logind[1482]: Session 86 logged out. Waiting for processes to exit.
Oct 9 01:22:59.433812 systemd-logind[1482]: Removed session 86.
Oct 9 01:23:04.624239 systemd[1]: Started sshd@86-188.245.175.223:22-139.178.68.195:48648.service - OpenSSH per-connection server daemon (139.178.68.195:48648).
Oct 9 01:23:05.720089 sshd[8354]: Accepted publickey for core from 139.178.68.195 port 48648 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:05.721961 sshd[8354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:05.728882 systemd-logind[1482]: New session 87 of user core.
Oct 9 01:23:05.734177 systemd[1]: Started session-87.scope - Session 87 of User core.
Oct 9 01:23:06.531638 sshd[8354]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:06.536171 systemd[1]: sshd@86-188.245.175.223:22-139.178.68.195:48648.service: Deactivated successfully.
Oct 9 01:23:06.538491 systemd[1]: session-87.scope: Deactivated successfully.
Oct 9 01:23:06.539492 systemd-logind[1482]: Session 87 logged out. Waiting for processes to exit.
Oct 9 01:23:06.540517 systemd-logind[1482]: Removed session 87.
Oct 9 01:23:11.712239 systemd[1]: Started sshd@87-188.245.175.223:22-139.178.68.195:55744.service - OpenSSH per-connection server daemon (139.178.68.195:55744).
Oct 9 01:23:12.820117 sshd[8395]: Accepted publickey for core from 139.178.68.195 port 55744 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:12.821848 sshd[8395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:12.826992 systemd-logind[1482]: New session 88 of user core.
Oct 9 01:23:12.835164 systemd[1]: Started session-88.scope - Session 88 of User core.
Oct 9 01:23:13.646980 sshd[8395]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:13.652314 systemd[1]: sshd@87-188.245.175.223:22-139.178.68.195:55744.service: Deactivated successfully.
Oct 9 01:23:13.654897 systemd[1]: session-88.scope: Deactivated successfully.
Oct 9 01:23:13.655904 systemd-logind[1482]: Session 88 logged out. Waiting for processes to exit.
Oct 9 01:23:13.657655 systemd-logind[1482]: Removed session 88.
Oct 9 01:23:18.843296 systemd[1]: Started sshd@88-188.245.175.223:22-139.178.68.195:55748.service - OpenSSH per-connection server daemon (139.178.68.195:55748).
Oct 9 01:23:19.917162 sshd[8433]: Accepted publickey for core from 139.178.68.195 port 55748 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:19.918648 sshd[8433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:19.922923 systemd-logind[1482]: New session 89 of user core.
Oct 9 01:23:19.927128 systemd[1]: Started session-89.scope - Session 89 of User core.
Oct 9 01:23:20.758162 sshd[8433]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:20.761078 systemd[1]: sshd@88-188.245.175.223:22-139.178.68.195:55748.service: Deactivated successfully.
Oct 9 01:23:20.763201 systemd[1]: session-89.scope: Deactivated successfully.
Oct 9 01:23:20.764829 systemd-logind[1482]: Session 89 logged out. Waiting for processes to exit.
Oct 9 01:23:20.766617 systemd-logind[1482]: Removed session 89.
Oct 9 01:23:25.939333 systemd[1]: Started sshd@89-188.245.175.223:22-139.178.68.195:43894.service - OpenSSH per-connection server daemon (139.178.68.195:43894).
Oct 9 01:23:26.937405 sshd[8447]: Accepted publickey for core from 139.178.68.195 port 43894 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:26.939674 sshd[8447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:26.945786 systemd-logind[1482]: New session 90 of user core.
Oct 9 01:23:26.952209 systemd[1]: Started session-90.scope - Session 90 of User core.
Oct 9 01:23:27.715928 sshd[8447]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:27.720036 systemd[1]: sshd@89-188.245.175.223:22-139.178.68.195:43894.service: Deactivated successfully.
Oct 9 01:23:27.722335 systemd[1]: session-90.scope: Deactivated successfully.
Oct 9 01:23:27.723045 systemd-logind[1482]: Session 90 logged out. Waiting for processes to exit.
Oct 9 01:23:27.724519 systemd-logind[1482]: Removed session 90.
Oct 9 01:23:32.911366 systemd[1]: Started sshd@90-188.245.175.223:22-139.178.68.195:35752.service - OpenSSH per-connection server daemon (139.178.68.195:35752).
Oct 9 01:23:33.944375 sshd[8466]: Accepted publickey for core from 139.178.68.195 port 35752 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:33.945868 sshd[8466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:33.950413 systemd-logind[1482]: New session 91 of user core.
Oct 9 01:23:33.955152 systemd[1]: Started session-91.scope - Session 91 of User core.
Oct 9 01:23:34.784556 sshd[8466]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:34.789127 systemd-logind[1482]: Session 91 logged out. Waiting for processes to exit.
Oct 9 01:23:34.790244 systemd[1]: sshd@90-188.245.175.223:22-139.178.68.195:35752.service: Deactivated successfully.
Oct 9 01:23:34.793939 systemd[1]: session-91.scope: Deactivated successfully.
Oct 9 01:23:34.796203 systemd-logind[1482]: Removed session 91.
Oct 9 01:23:39.984276 systemd[1]: Started sshd@91-188.245.175.223:22-139.178.68.195:35764.service - OpenSSH per-connection server daemon (139.178.68.195:35764).
Oct 9 01:23:41.091895 sshd[8528]: Accepted publickey for core from 139.178.68.195 port 35764 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:41.093643 sshd[8528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:41.098605 systemd-logind[1482]: New session 92 of user core.
Oct 9 01:23:41.107171 systemd[1]: Started session-92.scope - Session 92 of User core.
Oct 9 01:23:41.890510 sshd[8528]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:41.893159 systemd[1]: sshd@91-188.245.175.223:22-139.178.68.195:35764.service: Deactivated successfully.
Oct 9 01:23:41.894938 systemd[1]: session-92.scope: Deactivated successfully.
Oct 9 01:23:41.896511 systemd-logind[1482]: Session 92 logged out. Waiting for processes to exit.
Oct 9 01:23:41.898066 systemd-logind[1482]: Removed session 92.
Oct 9 01:23:47.082275 systemd[1]: Started sshd@92-188.245.175.223:22-139.178.68.195:32868.service - OpenSSH per-connection server daemon (139.178.68.195:32868).
Oct 9 01:23:48.142300 sshd[8560]: Accepted publickey for core from 139.178.68.195 port 32868 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:48.144253 sshd[8560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:48.149173 systemd-logind[1482]: New session 93 of user core.
Oct 9 01:23:48.157206 systemd[1]: Started session-93.scope - Session 93 of User core.
Oct 9 01:23:48.974371 sshd[8560]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:48.978754 systemd[1]: sshd@92-188.245.175.223:22-139.178.68.195:32868.service: Deactivated successfully.
Oct 9 01:23:48.981334 systemd[1]: session-93.scope: Deactivated successfully.
Oct 9 01:23:48.982141 systemd-logind[1482]: Session 93 logged out. Waiting for processes to exit.
Oct 9 01:23:48.983443 systemd-logind[1482]: Removed session 93.
Oct 9 01:23:54.168669 systemd[1]: Started sshd@93-188.245.175.223:22-139.178.68.195:52218.service - OpenSSH per-connection server daemon (139.178.68.195:52218).
Oct 9 01:23:55.242961 sshd[8584]: Accepted publickey for core from 139.178.68.195 port 52218 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:23:55.244755 sshd[8584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:23:55.249674 systemd-logind[1482]: New session 94 of user core.
Oct 9 01:23:55.256192 systemd[1]: Started session-94.scope - Session 94 of User core.
Oct 9 01:23:56.073948 sshd[8584]: pam_unix(sshd:session): session closed for user core
Oct 9 01:23:56.077967 systemd[1]: sshd@93-188.245.175.223:22-139.178.68.195:52218.service: Deactivated successfully.
Oct 9 01:23:56.080238 systemd[1]: session-94.scope: Deactivated successfully.
Oct 9 01:23:56.080977 systemd-logind[1482]: Session 94 logged out. Waiting for processes to exit.
Oct 9 01:23:56.082339 systemd-logind[1482]: Removed session 94.
Oct 9 01:24:01.272826 systemd[1]: Started sshd@94-188.245.175.223:22-139.178.68.195:60182.service - OpenSSH per-connection server daemon (139.178.68.195:60182).
Oct 9 01:24:02.387872 sshd[8603]: Accepted publickey for core from 139.178.68.195 port 60182 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:02.389798 sshd[8603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:02.395300 systemd-logind[1482]: New session 95 of user core.
Oct 9 01:24:02.400186 systemd[1]: Started session-95.scope - Session 95 of User core.
Oct 9 01:24:03.179065 sshd[8603]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:03.182379 systemd[1]: sshd@94-188.245.175.223:22-139.178.68.195:60182.service: Deactivated successfully.
Oct 9 01:24:03.184798 systemd[1]: session-95.scope: Deactivated successfully.
Oct 9 01:24:03.186841 systemd-logind[1482]: Session 95 logged out. Waiting for processes to exit.
Oct 9 01:24:03.188491 systemd-logind[1482]: Removed session 95.
Oct 9 01:24:08.376263 systemd[1]: Started sshd@95-188.245.175.223:22-139.178.68.195:60190.service - OpenSSH per-connection server daemon (139.178.68.195:60190).
Oct 9 01:24:09.512365 sshd[8641]: Accepted publickey for core from 139.178.68.195 port 60190 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:09.514001 sshd[8641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:09.518706 systemd-logind[1482]: New session 96 of user core.
Oct 9 01:24:09.525142 systemd[1]: Started session-96.scope - Session 96 of User core.
Oct 9 01:24:10.354059 sshd[8641]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:10.358471 systemd[1]: sshd@95-188.245.175.223:22-139.178.68.195:60190.service: Deactivated successfully.
Oct 9 01:24:10.360932 systemd[1]: session-96.scope: Deactivated successfully.
Oct 9 01:24:10.361694 systemd-logind[1482]: Session 96 logged out. Waiting for processes to exit.
Oct 9 01:24:10.362922 systemd-logind[1482]: Removed session 96.
Oct 9 01:24:15.534500 systemd[1]: Started sshd@96-188.245.175.223:22-139.178.68.195:41982.service - OpenSSH per-connection server daemon (139.178.68.195:41982).
Oct 9 01:24:16.603168 sshd[8678]: Accepted publickey for core from 139.178.68.195 port 41982 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:16.604884 sshd[8678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:16.610345 systemd-logind[1482]: New session 97 of user core.
Oct 9 01:24:16.617184 systemd[1]: Started session-97.scope - Session 97 of User core.
Oct 9 01:24:17.408272 sshd[8678]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:17.412797 systemd[1]: sshd@96-188.245.175.223:22-139.178.68.195:41982.service: Deactivated successfully.
Oct 9 01:24:17.414797 systemd[1]: session-97.scope: Deactivated successfully.
Oct 9 01:24:17.415973 systemd-logind[1482]: Session 97 logged out. Waiting for processes to exit.
Oct 9 01:24:17.417367 systemd-logind[1482]: Removed session 97.
Oct 9 01:24:22.606987 systemd[1]: Started sshd@97-188.245.175.223:22-139.178.68.195:45378.service - OpenSSH per-connection server daemon (139.178.68.195:45378).
Oct 9 01:24:23.713665 sshd[8698]: Accepted publickey for core from 139.178.68.195 port 45378 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:23.715435 sshd[8698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:23.720537 systemd-logind[1482]: New session 98 of user core.
Oct 9 01:24:23.724180 systemd[1]: Started session-98.scope - Session 98 of User core.
Oct 9 01:24:24.557893 sshd[8698]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:24.563166 systemd[1]: sshd@97-188.245.175.223:22-139.178.68.195:45378.service: Deactivated successfully.
Oct 9 01:24:24.565451 systemd[1]: session-98.scope: Deactivated successfully.
Oct 9 01:24:24.566370 systemd-logind[1482]: Session 98 logged out. Waiting for processes to exit.
Oct 9 01:24:24.568331 systemd-logind[1482]: Removed session 98.
Oct 9 01:24:29.740275 systemd[1]: Started sshd@98-188.245.175.223:22-139.178.68.195:45394.service - OpenSSH per-connection server daemon (139.178.68.195:45394).
Oct 9 01:24:30.731718 sshd[8725]: Accepted publickey for core from 139.178.68.195 port 45394 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:30.734661 sshd[8725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:30.741285 systemd-logind[1482]: New session 99 of user core.
Oct 9 01:24:30.746191 systemd[1]: Started session-99.scope - Session 99 of User core.
Oct 9 01:24:31.487898 sshd[8725]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:31.491682 systemd[1]: sshd@98-188.245.175.223:22-139.178.68.195:45394.service: Deactivated successfully.
Oct 9 01:24:31.493512 systemd[1]: session-99.scope: Deactivated successfully.
Oct 9 01:24:31.494155 systemd-logind[1482]: Session 99 logged out. Waiting for processes to exit.
Oct 9 01:24:31.495441 systemd-logind[1482]: Removed session 99.
Oct 9 01:24:34.988734 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.1fSKOs.mount: Deactivated successfully.
Oct 9 01:24:35.015885 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.hnAWDl.mount: Deactivated successfully.
Oct 9 01:24:36.678875 systemd[1]: Started sshd@99-188.245.175.223:22-139.178.68.195:60074.service - OpenSSH per-connection server daemon (139.178.68.195:60074).
Oct 9 01:24:37.752085 sshd[8784]: Accepted publickey for core from 139.178.68.195 port 60074 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:37.753661 sshd[8784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:37.758224 systemd-logind[1482]: New session 100 of user core.
Oct 9 01:24:37.763198 systemd[1]: Started session-100.scope - Session 100 of User core.
Oct 9 01:24:38.556334 sshd[8784]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:38.560194 systemd-logind[1482]: Session 100 logged out. Waiting for processes to exit.
Oct 9 01:24:38.560784 systemd[1]: sshd@99-188.245.175.223:22-139.178.68.195:60074.service: Deactivated successfully.
Oct 9 01:24:38.562929 systemd[1]: session-100.scope: Deactivated successfully.
Oct 9 01:24:38.564194 systemd-logind[1482]: Removed session 100.
Oct 9 01:24:43.751438 systemd[1]: Started sshd@100-188.245.175.223:22-139.178.68.195:39582.service - OpenSSH per-connection server daemon (139.178.68.195:39582).
Oct 9 01:24:44.848404 sshd[8804]: Accepted publickey for core from 139.178.68.195 port 39582 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:44.851294 sshd[8804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:44.859669 systemd-logind[1482]: New session 101 of user core.
Oct 9 01:24:44.865279 systemd[1]: Started session-101.scope - Session 101 of User core.
Oct 9 01:24:45.682874 sshd[8804]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:45.687243 systemd[1]: sshd@100-188.245.175.223:22-139.178.68.195:39582.service: Deactivated successfully.
Oct 9 01:24:45.689269 systemd[1]: session-101.scope: Deactivated successfully.
Oct 9 01:24:45.690235 systemd-logind[1482]: Session 101 logged out. Waiting for processes to exit.
Oct 9 01:24:45.691336 systemd-logind[1482]: Removed session 101.
Oct 9 01:24:50.876828 systemd[1]: Started sshd@101-188.245.175.223:22-139.178.68.195:44770.service - OpenSSH per-connection server daemon (139.178.68.195:44770).
Oct 9 01:24:51.965469 sshd[8836]: Accepted publickey for core from 139.178.68.195 port 44770 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:51.967294 sshd[8836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:51.971647 systemd-logind[1482]: New session 102 of user core.
Oct 9 01:24:51.976178 systemd[1]: Started session-102.scope - Session 102 of User core.
Oct 9 01:24:52.740981 sshd[8836]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:52.745851 systemd[1]: sshd@101-188.245.175.223:22-139.178.68.195:44770.service: Deactivated successfully.
Oct 9 01:24:52.748178 systemd[1]: session-102.scope: Deactivated successfully.
Oct 9 01:24:52.749403 systemd-logind[1482]: Session 102 logged out. Waiting for processes to exit.
Oct 9 01:24:52.750584 systemd-logind[1482]: Removed session 102.
Oct 9 01:24:57.921394 systemd[1]: Started sshd@102-188.245.175.223:22-139.178.68.195:44778.service - OpenSSH per-connection server daemon (139.178.68.195:44778).
Oct 9 01:24:58.953595 sshd[8854]: Accepted publickey for core from 139.178.68.195 port 44778 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:24:58.955432 sshd[8854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:24:58.960249 systemd-logind[1482]: New session 103 of user core.
Oct 9 01:24:58.965153 systemd[1]: Started session-103.scope - Session 103 of User core.
Oct 9 01:24:59.741191 sshd[8854]: pam_unix(sshd:session): session closed for user core
Oct 9 01:24:59.744610 systemd[1]: sshd@102-188.245.175.223:22-139.178.68.195:44778.service: Deactivated successfully.
Oct 9 01:24:59.746540 systemd[1]: session-103.scope: Deactivated successfully.
Oct 9 01:24:59.747154 systemd-logind[1482]: Session 103 logged out. Waiting for processes to exit.
Oct 9 01:24:59.748401 systemd-logind[1482]: Removed session 103.
Oct 9 01:25:04.930279 systemd[1]: Started sshd@103-188.245.175.223:22-139.178.68.195:39860.service - OpenSSH per-connection server daemon (139.178.68.195:39860).
Oct 9 01:25:06.026721 sshd[8873]: Accepted publickey for core from 139.178.68.195 port 39860 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:06.028943 sshd[8873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:06.033638 systemd-logind[1482]: New session 104 of user core.
Oct 9 01:25:06.042146 systemd[1]: Started session-104.scope - Session 104 of User core.
Oct 9 01:25:06.824019 sshd[8873]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:06.828476 systemd[1]: sshd@103-188.245.175.223:22-139.178.68.195:39860.service: Deactivated successfully.
Oct 9 01:25:06.830742 systemd[1]: session-104.scope: Deactivated successfully.
Oct 9 01:25:06.832283 systemd-logind[1482]: Session 104 logged out. Waiting for processes to exit.
Oct 9 01:25:06.833848 systemd-logind[1482]: Removed session 104.
Oct 9 01:25:12.009180 systemd[1]: Started sshd@104-188.245.175.223:22-139.178.68.195:38428.service - OpenSSH per-connection server daemon (139.178.68.195:38428).
Oct 9 01:25:13.066599 sshd[8912]: Accepted publickey for core from 139.178.68.195 port 38428 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:13.068355 sshd[8912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:13.073897 systemd-logind[1482]: New session 105 of user core.
Oct 9 01:25:13.080227 systemd[1]: Started session-105.scope - Session 105 of User core.
Oct 9 01:25:13.850556 sshd[8912]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:13.855212 systemd-logind[1482]: Session 105 logged out. Waiting for processes to exit.
Oct 9 01:25:13.856167 systemd[1]: sshd@104-188.245.175.223:22-139.178.68.195:38428.service: Deactivated successfully.
Oct 9 01:25:13.858431 systemd[1]: session-105.scope: Deactivated successfully.
Oct 9 01:25:13.859415 systemd-logind[1482]: Removed session 105.
Oct 9 01:25:19.043224 systemd[1]: Started sshd@105-188.245.175.223:22-139.178.68.195:38430.service - OpenSSH per-connection server daemon (139.178.68.195:38430).
Oct 9 01:25:20.162196 sshd[8949]: Accepted publickey for core from 139.178.68.195 port 38430 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:20.165722 sshd[8949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:20.173067 systemd-logind[1482]: New session 106 of user core.
Oct 9 01:25:20.178218 systemd[1]: Started session-106.scope - Session 106 of User core.
Oct 9 01:25:20.984381 sshd[8949]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:20.988120 systemd-logind[1482]: Session 106 logged out. Waiting for processes to exit.
Oct 9 01:25:20.988821 systemd[1]: sshd@105-188.245.175.223:22-139.178.68.195:38430.service: Deactivated successfully.
Oct 9 01:25:20.990618 systemd[1]: session-106.scope: Deactivated successfully.
Oct 9 01:25:20.991504 systemd-logind[1482]: Removed session 106.
Oct 9 01:25:26.180293 systemd[1]: Started sshd@106-188.245.175.223:22-139.178.68.195:57710.service - OpenSSH per-connection server daemon (139.178.68.195:57710).
Oct 9 01:25:27.280558 sshd[8969]: Accepted publickey for core from 139.178.68.195 port 57710 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:27.282316 sshd[8969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:27.287197 systemd-logind[1482]: New session 107 of user core.
Oct 9 01:25:27.292147 systemd[1]: Started session-107.scope - Session 107 of User core.
Oct 9 01:25:28.079152 sshd[8969]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:28.082578 systemd-logind[1482]: Session 107 logged out. Waiting for processes to exit.
Oct 9 01:25:28.083390 systemd[1]: sshd@106-188.245.175.223:22-139.178.68.195:57710.service: Deactivated successfully.
Oct 9 01:25:28.085484 systemd[1]: session-107.scope: Deactivated successfully.
Oct 9 01:25:28.086615 systemd-logind[1482]: Removed session 107.
Oct 9 01:25:33.268317 systemd[1]: Started sshd@107-188.245.175.223:22-139.178.68.195:48634.service - OpenSSH per-connection server daemon (139.178.68.195:48634).
Oct 9 01:25:34.306542 sshd[8982]: Accepted publickey for core from 139.178.68.195 port 48634 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:34.308722 sshd[8982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:34.313829 systemd-logind[1482]: New session 108 of user core.
Oct 9 01:25:34.318278 systemd[1]: Started session-108.scope - Session 108 of User core.
Oct 9 01:25:35.032196 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.chyIv9.mount: Deactivated successfully.
Oct 9 01:25:35.057170 sshd[8982]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:35.061858 systemd[1]: sshd@107-188.245.175.223:22-139.178.68.195:48634.service: Deactivated successfully.
Oct 9 01:25:35.064613 systemd[1]: session-108.scope: Deactivated successfully.
Oct 9 01:25:35.069014 systemd-logind[1482]: Session 108 logged out. Waiting for processes to exit.
Oct 9 01:25:35.071129 systemd-logind[1482]: Removed session 108.
Oct 9 01:25:40.259423 systemd[1]: Started sshd@108-188.245.175.223:22-139.178.68.195:48638.service - OpenSSH per-connection server daemon (139.178.68.195:48638).
Oct 9 01:25:41.324662 sshd[9044]: Accepted publickey for core from 139.178.68.195 port 48638 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:41.326405 sshd[9044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:41.331124 systemd-logind[1482]: New session 109 of user core.
Oct 9 01:25:41.337159 systemd[1]: Started session-109.scope - Session 109 of User core.
Oct 9 01:25:42.137789 sshd[9044]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:42.141120 systemd[1]: sshd@108-188.245.175.223:22-139.178.68.195:48638.service: Deactivated successfully.
Oct 9 01:25:42.142967 systemd[1]: session-109.scope: Deactivated successfully.
Oct 9 01:25:42.144387 systemd-logind[1482]: Session 109 logged out. Waiting for processes to exit.
Oct 9 01:25:42.146125 systemd-logind[1482]: Removed session 109.
Oct 9 01:25:44.668382 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.11hLrb.mount: Deactivated successfully.
Oct 9 01:25:47.322872 systemd[1]: Started sshd@109-188.245.175.223:22-139.178.68.195:36000.service - OpenSSH per-connection server daemon (139.178.68.195:36000).
Oct 9 01:25:48.451729 sshd[9079]: Accepted publickey for core from 139.178.68.195 port 36000 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:48.453398 sshd[9079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:48.457828 systemd-logind[1482]: New session 110 of user core.
Oct 9 01:25:48.464168 systemd[1]: Started session-110.scope - Session 110 of User core.
Oct 9 01:25:49.263695 sshd[9079]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:49.267448 systemd[1]: sshd@109-188.245.175.223:22-139.178.68.195:36000.service: Deactivated successfully.
Oct 9 01:25:49.269284 systemd[1]: session-110.scope: Deactivated successfully.
Oct 9 01:25:49.269969 systemd-logind[1482]: Session 110 logged out. Waiting for processes to exit.
Oct 9 01:25:49.271015 systemd-logind[1482]: Removed session 110.
Oct 9 01:25:54.456519 systemd[1]: Started sshd@110-188.245.175.223:22-139.178.68.195:60372.service - OpenSSH per-connection server daemon (139.178.68.195:60372).
Oct 9 01:25:55.532693 sshd[9093]: Accepted publickey for core from 139.178.68.195 port 60372 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:25:55.534748 sshd[9093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:25:55.538906 systemd-logind[1482]: New session 111 of user core.
Oct 9 01:25:55.546223 systemd[1]: Started session-111.scope - Session 111 of User core.
Oct 9 01:25:56.311212 sshd[9093]: pam_unix(sshd:session): session closed for user core
Oct 9 01:25:56.316431 systemd-logind[1482]: Session 111 logged out. Waiting for processes to exit.
Oct 9 01:25:56.316791 systemd[1]: sshd@110-188.245.175.223:22-139.178.68.195:60372.service: Deactivated successfully.
Oct 9 01:25:56.319744 systemd[1]: session-111.scope: Deactivated successfully.
Oct 9 01:25:56.322949 systemd-logind[1482]: Removed session 111.
Oct 9 01:26:01.512440 systemd[1]: Started sshd@111-188.245.175.223:22-139.178.68.195:35588.service - OpenSSH per-connection server daemon (139.178.68.195:35588).
Oct 9 01:26:02.639268 sshd[9123]: Accepted publickey for core from 139.178.68.195 port 35588 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:02.640956 sshd[9123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:02.645991 systemd-logind[1482]: New session 112 of user core.
Oct 9 01:26:02.650180 systemd[1]: Started session-112.scope - Session 112 of User core.
Oct 9 01:26:03.483107 sshd[9123]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:03.487151 systemd[1]: sshd@111-188.245.175.223:22-139.178.68.195:35588.service: Deactivated successfully.
Oct 9 01:26:03.489662 systemd[1]: session-112.scope: Deactivated successfully.
Oct 9 01:26:03.490398 systemd-logind[1482]: Session 112 logged out. Waiting for processes to exit.
Oct 9 01:26:03.491558 systemd-logind[1482]: Removed session 112.
Oct 9 01:26:08.660282 systemd[1]: Started sshd@112-188.245.175.223:22-139.178.68.195:35604.service - OpenSSH per-connection server daemon (139.178.68.195:35604).
Oct 9 01:26:09.700963 sshd[9165]: Accepted publickey for core from 139.178.68.195 port 35604 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:09.702899 sshd[9165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:09.708109 systemd-logind[1482]: New session 113 of user core.
Oct 9 01:26:09.712163 systemd[1]: Started session-113.scope - Session 113 of User core.
Oct 9 01:26:10.518101 sshd[9165]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:10.522919 systemd[1]: sshd@112-188.245.175.223:22-139.178.68.195:35604.service: Deactivated successfully.
Oct 9 01:26:10.525124 systemd[1]: session-113.scope: Deactivated successfully.
Oct 9 01:26:10.525807 systemd-logind[1482]: Session 113 logged out. Waiting for processes to exit.
Oct 9 01:26:10.526990 systemd-logind[1482]: Removed session 113.
Oct 9 01:26:15.713295 systemd[1]: Started sshd@113-188.245.175.223:22-139.178.68.195:54478.service - OpenSSH per-connection server daemon (139.178.68.195:54478).
Oct 9 01:26:16.813898 sshd[9198]: Accepted publickey for core from 139.178.68.195 port 54478 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:16.815756 sshd[9198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:16.820956 systemd-logind[1482]: New session 114 of user core.
Oct 9 01:26:16.827152 systemd[1]: Started session-114.scope - Session 114 of User core.
Oct 9 01:26:17.616998 sshd[9198]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:17.620759 systemd[1]: sshd@113-188.245.175.223:22-139.178.68.195:54478.service: Deactivated successfully.
Oct 9 01:26:17.622651 systemd[1]: session-114.scope: Deactivated successfully.
Oct 9 01:26:17.623782 systemd-logind[1482]: Session 114 logged out. Waiting for processes to exit.
Oct 9 01:26:17.624970 systemd-logind[1482]: Removed session 114.
Oct 9 01:26:22.798227 systemd[1]: Started sshd@114-188.245.175.223:22-139.178.68.195:43528.service - OpenSSH per-connection server daemon (139.178.68.195:43528).
Oct 9 01:26:23.881797 sshd[9216]: Accepted publickey for core from 139.178.68.195 port 43528 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:23.883660 sshd[9216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:23.887954 systemd-logind[1482]: New session 115 of user core.
Oct 9 01:26:23.892154 systemd[1]: Started session-115.scope - Session 115 of User core.
Oct 9 01:26:24.726777 sshd[9216]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:24.729480 systemd[1]: sshd@114-188.245.175.223:22-139.178.68.195:43528.service: Deactivated successfully.
Oct 9 01:26:24.731408 systemd[1]: session-115.scope: Deactivated successfully.
Oct 9 01:26:24.733019 systemd-logind[1482]: Session 115 logged out. Waiting for processes to exit.
Oct 9 01:26:24.734494 systemd-logind[1482]: Removed session 115.
Oct 9 01:26:29.915958 systemd[1]: Started sshd@115-188.245.175.223:22-139.178.68.195:43544.service - OpenSSH per-connection server daemon (139.178.68.195:43544).
Oct 9 01:26:31.040518 sshd[9236]: Accepted publickey for core from 139.178.68.195 port 43544 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:31.042314 sshd[9236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:31.047692 systemd-logind[1482]: New session 116 of user core.
Oct 9 01:26:31.052157 systemd[1]: Started session-116.scope - Session 116 of User core.
Oct 9 01:26:31.865199 sshd[9236]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:31.869089 systemd-logind[1482]: Session 116 logged out. Waiting for processes to exit.
Oct 9 01:26:31.869677 systemd[1]: sshd@115-188.245.175.223:22-139.178.68.195:43544.service: Deactivated successfully.
Oct 9 01:26:31.871565 systemd[1]: session-116.scope: Deactivated successfully.
Oct 9 01:26:31.872771 systemd-logind[1482]: Removed session 116.
Oct 9 01:26:34.983134 systemd[1]: run-containerd-runc-k8s.io-e5424af065045806288a9d916c155a724f3c9521c4255e4d0e8339ea49d27c66-runc.W3wEvO.mount: Deactivated successfully.
Oct 9 01:26:37.051353 systemd[1]: Started sshd@116-188.245.175.223:22-139.178.68.195:42262.service - OpenSSH per-connection server daemon (139.178.68.195:42262).
Oct 9 01:26:38.118807 sshd[9289]: Accepted publickey for core from 139.178.68.195 port 42262 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:38.120796 sshd[9289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:38.126240 systemd-logind[1482]: New session 117 of user core.
Oct 9 01:26:38.131167 systemd[1]: Started session-117.scope - Session 117 of User core.
Oct 9 01:26:38.941452 sshd[9289]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:38.947568 systemd[1]: sshd@116-188.245.175.223:22-139.178.68.195:42262.service: Deactivated successfully.
Oct 9 01:26:38.951197 systemd[1]: session-117.scope: Deactivated successfully.
Oct 9 01:26:38.952425 systemd-logind[1482]: Session 117 logged out. Waiting for processes to exit.
Oct 9 01:26:38.954672 systemd-logind[1482]: Removed session 117.
Oct 9 01:26:44.123355 systemd[1]: Started sshd@117-188.245.175.223:22-139.178.68.195:53736.service - OpenSSH per-connection server daemon (139.178.68.195:53736).
Oct 9 01:26:44.672990 systemd[1]: run-containerd-runc-k8s.io-c65ad9dbda4391ec09500f9e099bbb36834086f7edeaecb323961b84e6cb172d-runc.KuZ9Me.mount: Deactivated successfully.
Oct 9 01:26:45.151648 sshd[9309]: Accepted publickey for core from 139.178.68.195 port 53736 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:45.153666 sshd[9309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:45.160304 systemd-logind[1482]: New session 118 of user core.
Oct 9 01:26:45.166233 systemd[1]: Started session-118.scope - Session 118 of User core.
Oct 9 01:26:45.941325 sshd[9309]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:45.944901 systemd[1]: sshd@117-188.245.175.223:22-139.178.68.195:53736.service: Deactivated successfully.
Oct 9 01:26:45.947638 systemd[1]: session-118.scope: Deactivated successfully.
Oct 9 01:26:45.948559 systemd-logind[1482]: Session 118 logged out. Waiting for processes to exit.
Oct 9 01:26:45.949601 systemd-logind[1482]: Removed session 118.
Oct 9 01:26:51.135601 systemd[1]: Started sshd@118-188.245.175.223:22-139.178.68.195:51068.service - OpenSSH per-connection server daemon (139.178.68.195:51068).
Oct 9 01:26:52.261462 sshd[9348]: Accepted publickey for core from 139.178.68.195 port 51068 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:26:52.263502 sshd[9348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:26:52.268161 systemd-logind[1482]: New session 119 of user core.
Oct 9 01:26:52.276188 systemd[1]: Started session-119.scope - Session 119 of User core.
Oct 9 01:26:53.066950 sshd[9348]: pam_unix(sshd:session): session closed for user core
Oct 9 01:26:53.070600 systemd-logind[1482]: Session 119 logged out. Waiting for processes to exit.
Oct 9 01:26:53.073986 systemd[1]: sshd@118-188.245.175.223:22-139.178.68.195:51068.service: Deactivated successfully.
Oct 9 01:26:53.077728 systemd[1]: session-119.scope: Deactivated successfully.
Oct 9 01:26:53.079450 systemd-logind[1482]: Removed session 119.
Oct 9 01:27:08.177911 systemd[1]: cri-containerd-7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5.scope: Deactivated successfully.
Oct 9 01:27:08.178225 systemd[1]: cri-containerd-7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5.scope: Consumed 12.532s CPU time, 20.0M memory peak, 0B memory swap peak.
Oct 9 01:27:08.249487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5-rootfs.mount: Deactivated successfully.
Oct 9 01:27:08.261065 containerd[1498]: time="2024-10-09T01:27:08.245595422Z" level=info msg="shim disconnected" id=7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5 namespace=k8s.io
Oct 9 01:27:08.261065 containerd[1498]: time="2024-10-09T01:27:08.261063659Z" level=warning msg="cleaning up after shim disconnected" id=7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5 namespace=k8s.io
Oct 9 01:27:08.262383 containerd[1498]: time="2024-10-09T01:27:08.261078858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:27:08.368507 kubelet[2753]: I1009 01:27:08.368465 2753 scope.go:117] "RemoveContainer" containerID="7745c0a39e7226c713af60ca57e8b0dd96a1036a5e1612b1870cd6b5cdd4b8f5"
Oct 9 01:27:08.380546 containerd[1498]: time="2024-10-09T01:27:08.380501354Z" level=info msg="CreateContainer within sandbox \"81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 9 01:27:08.403194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145085831.mount: Deactivated successfully.
Oct 9 01:27:08.413661 containerd[1498]: time="2024-10-09T01:27:08.413616785Z" level=info msg="CreateContainer within sandbox \"81df84c49878a34ea77888da8db3b8474e00a68ef693d2083dfe6b477f217ad2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a24aabb051919feb55fc3a50e8f4a2c73de034ee8aa95de83995620edc12b647\""
Oct 9 01:27:08.414946 containerd[1498]: time="2024-10-09T01:27:08.414004358Z" level=info msg="StartContainer for \"a24aabb051919feb55fc3a50e8f4a2c73de034ee8aa95de83995620edc12b647\""
Oct 9 01:27:08.451411 systemd[1]: Started cri-containerd-a24aabb051919feb55fc3a50e8f4a2c73de034ee8aa95de83995620edc12b647.scope - libcontainer container a24aabb051919feb55fc3a50e8f4a2c73de034ee8aa95de83995620edc12b647.
Oct 9 01:27:08.493577 containerd[1498]: time="2024-10-09T01:27:08.493511418Z" level=info msg="StartContainer for \"a24aabb051919feb55fc3a50e8f4a2c73de034ee8aa95de83995620edc12b647\" returns successfully"
Oct 9 01:27:08.627132 kubelet[2753]: E1009 01:27:08.627074 2753 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35830->10.0.0.2:2379: read: connection timed out"
Oct 9 01:27:09.186017 systemd[1]: cri-containerd-49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20.scope: Deactivated successfully.
Oct 9 01:27:09.187574 systemd[1]: cri-containerd-49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20.scope: Consumed 9.479s CPU time.
Oct 9 01:27:09.217177 containerd[1498]: time="2024-10-09T01:27:09.217090792Z" level=info msg="shim disconnected" id=49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20 namespace=k8s.io
Oct 9 01:27:09.217177 containerd[1498]: time="2024-10-09T01:27:09.217172736Z" level=warning msg="cleaning up after shim disconnected" id=49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20 namespace=k8s.io
Oct 9 01:27:09.217446 containerd[1498]: time="2024-10-09T01:27:09.217185929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:27:09.247131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20-rootfs.mount: Deactivated successfully.
Oct 9 01:27:09.371731 kubelet[2753]: I1009 01:27:09.371408 2753 scope.go:117] "RemoveContainer" containerID="49e47608226595d01ba5f1017097cde58eb16dde509c3ce9890e90f069de3b20"
Oct 9 01:27:09.401734 containerd[1498]: time="2024-10-09T01:27:09.401682061Z" level=info msg="CreateContainer within sandbox \"c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 9 01:27:09.417092 containerd[1498]: time="2024-10-09T01:27:09.416359124Z" level=info msg="CreateContainer within sandbox \"c98db9f15a8a92986d806de64de527d33d754a84252c125c2feed8803a6b9f49\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5cd2616a15bcfa93b939878c818a963b14247c0b88f312df2221c50179c53c20\""
Oct 9 01:27:09.417522 containerd[1498]: time="2024-10-09T01:27:09.417481558Z" level=info msg="StartContainer for \"5cd2616a15bcfa93b939878c818a963b14247c0b88f312df2221c50179c53c20\""
Oct 9 01:27:09.453143 systemd[1]: Started cri-containerd-5cd2616a15bcfa93b939878c818a963b14247c0b88f312df2221c50179c53c20.scope - libcontainer container 5cd2616a15bcfa93b939878c818a963b14247c0b88f312df2221c50179c53c20.
Oct 9 01:27:09.479212 containerd[1498]: time="2024-10-09T01:27:09.479143509Z" level=info msg="StartContainer for \"5cd2616a15bcfa93b939878c818a963b14247c0b88f312df2221c50179c53c20\" returns successfully"
Oct 9 01:27:12.783547 kubelet[2753]: E1009 01:27:12.783500 2753 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35608->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4116-0-0-2-50096a0261.17fca47efd70dd3a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4116-0-0-2-50096a0261,UID:d79132f28a65594a69a939efae1f50c7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-2-50096a0261,},FirstTimestamp:2024-10-09 01:27:02.328294714 +0000 UTC m=+1059.372957317,LastTimestamp:2024-10-09 01:27:02.328294714 +0000 UTC m=+1059.372957317,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-2-50096a0261,}"
Oct 9 01:27:14.295743 systemd[1]: cri-containerd-792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2.scope: Deactivated successfully.
Oct 9 01:27:14.296019 systemd[1]: cri-containerd-792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2.scope: Consumed 2.894s CPU time, 15.6M memory peak, 0B memory swap peak.
Oct 9 01:27:14.320940 containerd[1498]: time="2024-10-09T01:27:14.320885067Z" level=info msg="shim disconnected" id=792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2 namespace=k8s.io
Oct 9 01:27:14.321374 containerd[1498]: time="2024-10-09T01:27:14.321352951Z" level=warning msg="cleaning up after shim disconnected" id=792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2 namespace=k8s.io
Oct 9 01:27:14.321451 containerd[1498]: time="2024-10-09T01:27:14.321435014Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:27:14.324640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2-rootfs.mount: Deactivated successfully.
Oct 9 01:27:14.388982 kubelet[2753]: I1009 01:27:14.388951 2753 scope.go:117] "RemoveContainer" containerID="792f0f6191b90df50faf9cf97c70aa3eb2e06bb0fd60d96adea101f92a2e89c2"
Oct 9 01:27:14.390682 containerd[1498]: time="2024-10-09T01:27:14.390648569Z" level=info msg="CreateContainer within sandbox \"00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 9 01:27:14.417392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067290279.mount: Deactivated successfully.
Oct 9 01:27:14.432124 containerd[1498]: time="2024-10-09T01:27:14.432078698Z" level=info msg="CreateContainer within sandbox \"00498e5d970a6219d93f67c6080af28b84d2dfa3f70f72d9b81a47213ebdfe7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"71ecb57c91e1e6e983bc358f62c6371e91aef93e0c580a6e1bcc0d170c76097d\""
Oct 9 01:27:14.432603 containerd[1498]: time="2024-10-09T01:27:14.432557752Z" level=info msg="StartContainer for \"71ecb57c91e1e6e983bc358f62c6371e91aef93e0c580a6e1bcc0d170c76097d\""
Oct 9 01:27:14.465149 systemd[1]: Started cri-containerd-71ecb57c91e1e6e983bc358f62c6371e91aef93e0c580a6e1bcc0d170c76097d.scope - libcontainer container 71ecb57c91e1e6e983bc358f62c6371e91aef93e0c580a6e1bcc0d170c76097d.
Oct 9 01:27:14.499838 containerd[1498]: time="2024-10-09T01:27:14.499727132Z" level=info msg="StartContainer for \"71ecb57c91e1e6e983bc358f62c6371e91aef93e0c580a6e1bcc0d170c76097d\" returns successfully"