Oct 9 03:18:52.931747 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 03:18:52.931770 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 03:18:52.931778 kernel: BIOS-provided physical RAM map:
Oct 9 03:18:52.931784 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 03:18:52.931789 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 03:18:52.931794 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 03:18:52.931800 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Oct 9 03:18:52.931805 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Oct 9 03:18:52.931812 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 03:18:52.931817 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 9 03:18:52.931823 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 03:18:52.931828 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 03:18:52.931833 kernel: NX (Execute Disable) protection: active
Oct 9 03:18:52.931838 kernel: APIC: Static calls initialized
Oct 9 03:18:52.931846 kernel: SMBIOS 2.8 present.
Oct 9 03:18:52.931852 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Oct 9 03:18:52.931858 kernel: Hypervisor detected: KVM
Oct 9 03:18:52.931863 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 03:18:52.931868 kernel: kvm-clock: using sched offset of 2656419553 cycles
Oct 9 03:18:52.931875 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 03:18:52.931880 kernel: tsc: Detected 2445.402 MHz processor
Oct 9 03:18:52.931886 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 03:18:52.931892 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 03:18:52.931929 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Oct 9 03:18:52.931935 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 03:18:52.931941 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 03:18:52.931947 kernel: Using GB pages for direct mapping
Oct 9 03:18:52.931952 kernel: ACPI: Early table checksum verification disabled
Oct 9 03:18:52.931958 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Oct 9 03:18:52.931963 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.931969 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.931974 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.931983 kernel: ACPI: FACS 0x000000007CFE0000 000040
Oct 9 03:18:52.931988 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.931994 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.931999 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.932005 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 03:18:52.932011 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Oct 9 03:18:52.932016 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Oct 9 03:18:52.932022 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Oct 9 03:18:52.932033 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Oct 9 03:18:52.932039 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Oct 9 03:18:52.932044 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Oct 9 03:18:52.932050 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Oct 9 03:18:52.932056 kernel: No NUMA configuration found
Oct 9 03:18:52.932062 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Oct 9 03:18:52.932069 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Oct 9 03:18:52.932075 kernel: Zone ranges:
Oct 9 03:18:52.932081 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 03:18:52.932087 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Oct 9 03:18:52.932093 kernel: Normal empty
Oct 9 03:18:52.932098 kernel: Movable zone start for each node
Oct 9 03:18:52.932104 kernel: Early memory node ranges
Oct 9 03:18:52.932110 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 03:18:52.932116 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Oct 9 03:18:52.932121 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Oct 9 03:18:52.932129 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 03:18:52.932135 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 03:18:52.932141 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 9 03:18:52.932147 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 03:18:52.932153 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 03:18:52.932159 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 03:18:52.932165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 03:18:52.932171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 03:18:52.932176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 03:18:52.932184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 03:18:52.932190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 03:18:52.932196 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 03:18:52.932202 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 03:18:52.932208 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 03:18:52.932214 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 03:18:52.932219 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 9 03:18:52.932225 kernel: Booting paravirtualized kernel on KVM
Oct 9 03:18:52.932231 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 03:18:52.932239 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 03:18:52.932245 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 03:18:52.932251 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 03:18:52.932257 kernel: pcpu-alloc: [0] 0 1
Oct 9 03:18:52.932262 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 03:18:52.932269 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 03:18:52.932276 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 03:18:52.932281 kernel: random: crng init done
Oct 9 03:18:52.932289 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 03:18:52.932295 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 03:18:52.932301 kernel: Fallback order for Node 0: 0
Oct 9 03:18:52.932307 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Oct 9 03:18:52.932313 kernel: Policy zone: DMA32
Oct 9 03:18:52.932319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 03:18:52.932325 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Oct 9 03:18:52.932331 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 03:18:52.932337 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 03:18:52.932345 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 03:18:52.932351 kernel: Dynamic Preempt: voluntary
Oct 9 03:18:52.932357 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 03:18:52.932363 kernel: rcu: RCU event tracing is enabled.
Oct 9 03:18:52.932369 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 03:18:52.932375 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 03:18:52.932382 kernel: Rude variant of Tasks RCU enabled.
Oct 9 03:18:52.932387 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 03:18:52.932393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 03:18:52.932401 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 03:18:52.932407 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 03:18:52.932413 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 03:18:52.932419 kernel: Console: colour VGA+ 80x25
Oct 9 03:18:52.932424 kernel: printk: console [tty0] enabled
Oct 9 03:18:52.932430 kernel: printk: console [ttyS0] enabled
Oct 9 03:18:52.932436 kernel: ACPI: Core revision 20230628
Oct 9 03:18:52.932442 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 03:18:52.932448 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 03:18:52.932454 kernel: x2apic enabled
Oct 9 03:18:52.932462 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 03:18:52.932468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 03:18:52.932474 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 03:18:52.932479 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445402)
Oct 9 03:18:52.932485 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 03:18:52.932491 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 03:18:52.932497 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 03:18:52.932503 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 03:18:52.932518 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 03:18:52.932524 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 03:18:52.932530 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 03:18:52.932538 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 03:18:52.932545 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 03:18:52.932551 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 03:18:52.932557 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 03:18:52.932563 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 03:18:52.932570 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 03:18:52.932576 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 03:18:52.932583 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 03:18:52.932591 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 03:18:52.932597 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 03:18:52.932603 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 03:18:52.932609 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 03:18:52.932615 kernel: Freeing SMP alternatives memory: 32K
Oct 9 03:18:52.932623 kernel: pid_max: default: 32768 minimum: 301
Oct 9 03:18:52.932630 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 03:18:52.932636 kernel: landlock: Up and running.
Oct 9 03:18:52.932642 kernel: SELinux: Initializing.
Oct 9 03:18:52.932648 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 03:18:52.932654 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 03:18:52.932661 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 03:18:52.932669 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 03:18:52.932680 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 03:18:52.932696 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 03:18:52.932707 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 03:18:52.932719 kernel: ... version: 0
Oct 9 03:18:52.932730 kernel: ... bit width: 48
Oct 9 03:18:52.932737 kernel: ... generic registers: 6
Oct 9 03:18:52.932743 kernel: ... value mask: 0000ffffffffffff
Oct 9 03:18:52.932750 kernel: ... max period: 00007fffffffffff
Oct 9 03:18:52.932756 kernel: ... fixed-purpose events: 0
Oct 9 03:18:52.932762 kernel: ... event mask: 000000000000003f
Oct 9 03:18:52.932771 kernel: signal: max sigframe size: 1776
Oct 9 03:18:52.932777 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 03:18:52.932783 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 03:18:52.932790 kernel: smp: Bringing up secondary CPUs ...
Oct 9 03:18:52.932796 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 03:18:52.932803 kernel: .... node #0, CPUs: #1
Oct 9 03:18:52.932809 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 03:18:52.932815 kernel: smpboot: Max logical packages: 1
Oct 9 03:18:52.932821 kernel: smpboot: Total of 2 processors activated (9781.60 BogoMIPS)
Oct 9 03:18:52.932828 kernel: devtmpfs: initialized
Oct 9 03:18:52.932836 kernel: x86/mm: Memory block size: 128MB
Oct 9 03:18:52.932842 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 03:18:52.932848 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 03:18:52.932854 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 03:18:52.932860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 03:18:52.932867 kernel: audit: initializing netlink subsys (disabled)
Oct 9 03:18:52.932873 kernel: audit: type=2000 audit(1728443932.747:1): state=initialized audit_enabled=0 res=1
Oct 9 03:18:52.932879 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 03:18:52.932886 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 03:18:52.932894 kernel: cpuidle: using governor menu
Oct 9 03:18:52.932929 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 03:18:52.932936 kernel: dca service started, version 1.12.1
Oct 9 03:18:52.932942 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 03:18:52.932949 kernel: PCI: Using configuration type 1 for base access
Oct 9 03:18:52.932955 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 03:18:52.932961 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 03:18:52.932968 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 03:18:52.932977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 03:18:52.932983 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 03:18:52.932989 kernel: ACPI: Added _OSI(Module Device)
Oct 9 03:18:52.932996 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 03:18:52.933002 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 03:18:52.933008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 03:18:52.933014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 03:18:52.933021 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 03:18:52.933027 kernel: ACPI: Interpreter enabled
Oct 9 03:18:52.933033 kernel: ACPI: PM: (supports S0 S5)
Oct 9 03:18:52.933041 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 03:18:52.933047 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 03:18:52.933054 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 03:18:52.933060 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 03:18:52.933067 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 03:18:52.933241 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 03:18:52.933359 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 03:18:52.933471 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 03:18:52.933480 kernel: PCI host bridge to bus 0000:00
Oct 9 03:18:52.933589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 03:18:52.933704 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 03:18:52.933822 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 03:18:52.933945 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Oct 9 03:18:52.934044 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 03:18:52.934143 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 9 03:18:52.934238 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 03:18:52.934358 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 03:18:52.934473 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Oct 9 03:18:52.934578 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Oct 9 03:18:52.934700 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Oct 9 03:18:52.934841 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Oct 9 03:18:52.937046 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Oct 9 03:18:52.937165 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 03:18:52.937283 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.937389 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Oct 9 03:18:52.937502 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.937606 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Oct 9 03:18:52.937759 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.937873 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Oct 9 03:18:52.939036 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.939151 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Oct 9 03:18:52.939287 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.939395 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Oct 9 03:18:52.939514 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.939618 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Oct 9 03:18:52.939759 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.939869 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Oct 9 03:18:52.941036 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.941152 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Oct 9 03:18:52.941274 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 9 03:18:52.941381 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Oct 9 03:18:52.941494 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 03:18:52.941598 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 03:18:52.941735 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 03:18:52.941846 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Oct 9 03:18:52.942038 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Oct 9 03:18:52.942157 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 03:18:52.942306 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 9 03:18:52.942426 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 9 03:18:52.942534 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Oct 9 03:18:52.942641 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 9 03:18:52.942793 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Oct 9 03:18:52.945165 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 9 03:18:52.945287 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 9 03:18:52.945401 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 03:18:52.945539 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 9 03:18:52.945652 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Oct 9 03:18:52.945799 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 9 03:18:52.945934 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 9 03:18:52.946042 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 03:18:52.946161 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 9 03:18:52.946311 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Oct 9 03:18:52.946485 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Oct 9 03:18:52.946600 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 9 03:18:52.946760 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 9 03:18:52.946876 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 03:18:52.947058 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 9 03:18:52.947175 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 9 03:18:52.948149 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 9 03:18:52.948267 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 9 03:18:52.949063 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 03:18:52.949196 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 9 03:18:52.949314 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Oct 9 03:18:52.949440 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 9 03:18:52.949550 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 9 03:18:52.949661 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 03:18:52.949826 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 9 03:18:52.950993 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Oct 9 03:18:52.951118 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Oct 9 03:18:52.951225 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 9 03:18:52.951335 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 9 03:18:52.951444 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 03:18:52.951455 kernel: acpiphp: Slot [0] registered
Oct 9 03:18:52.951574 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 9 03:18:52.951746 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Oct 9 03:18:52.951931 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Oct 9 03:18:52.952070 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Oct 9 03:18:52.952180 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 9 03:18:52.952290 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 9 03:18:52.952404 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 03:18:52.952414 kernel: acpiphp: Slot [0-2] registered
Oct 9 03:18:52.952516 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 9 03:18:52.952618 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 9 03:18:52.952747 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 03:18:52.952758 kernel: acpiphp: Slot [0-3] registered
Oct 9 03:18:52.952864 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 9 03:18:52.953375 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 9 03:18:52.953503 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 03:18:52.953514 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 03:18:52.953521 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 03:18:52.953528 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 03:18:52.953534 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 03:18:52.953540 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 03:18:52.953546 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 03:18:52.953553 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 03:18:52.953563 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 03:18:52.953569 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 03:18:52.953575 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 03:18:52.953582 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 03:18:52.953588 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 03:18:52.953594 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 03:18:52.953600 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 03:18:52.953606 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 03:18:52.953613 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 03:18:52.953621 kernel: iommu: Default domain type: Translated
Oct 9 03:18:52.953628 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 03:18:52.953634 kernel: PCI: Using ACPI for IRQ routing
Oct 9 03:18:52.953640 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 03:18:52.953646 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 03:18:52.953653 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Oct 9 03:18:52.953796 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 03:18:52.953972 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 03:18:52.954083 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 03:18:52.954098 kernel: vgaarb: loaded
Oct 9 03:18:52.954111 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 03:18:52.954121 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 03:18:52.954129 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 03:18:52.954141 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 03:18:52.954148 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 03:18:52.954155 kernel: pnp: PnP ACPI init
Oct 9 03:18:52.954282 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 03:18:52.954297 kernel: pnp: PnP ACPI: found 5 devices
Oct 9 03:18:52.954304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 03:18:52.954311 kernel: NET: Registered PF_INET protocol family
Oct 9 03:18:52.954317 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 03:18:52.954324 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 03:18:52.954330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 03:18:52.954337 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 03:18:52.954343 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 03:18:52.954349 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 03:18:52.954358 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 03:18:52.954364 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 03:18:52.954371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 03:18:52.954377 kernel: NET: Registered PF_XDP protocol family
Oct 9 03:18:52.954481 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 9 03:18:52.954586 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 9 03:18:52.954736 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 9 03:18:52.954869 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Oct 9 03:18:52.955010 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Oct 9 03:18:52.955117 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Oct 9 03:18:52.955219 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 9 03:18:52.955323 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 9 03:18:52.955426 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 03:18:52.955542 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 9 03:18:52.955693 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 9 03:18:52.955831 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 03:18:52.955993 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 9 03:18:52.956100 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 9 03:18:52.956203 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 03:18:52.956304 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 9 03:18:52.956405 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 9 03:18:52.956506 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 03:18:52.956619 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 9 03:18:52.956800 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 9 03:18:52.956968 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 03:18:52.957078 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 9 03:18:52.957181 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 9 03:18:52.957283 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 03:18:52.957387 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 9 03:18:52.957489 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Oct 9 03:18:52.957591 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 9 03:18:52.957694 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 03:18:52.957802 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 9 03:18:52.957964 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Oct 9 03:18:52.958074 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 9 03:18:52.958178 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 03:18:52.958281 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 9 03:18:52.958399 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Oct 9 03:18:52.958516 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 9 03:18:52.958625 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 03:18:52.958804 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 03:18:52.958965 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 03:18:52.959072 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 03:18:52.959168 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Oct 9 03:18:52.959272 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 03:18:52.959379 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 9 03:18:52.959493 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 9 03:18:52.959595 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 03:18:52.959702 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 9 03:18:52.959808 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 03:18:52.960005 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 9 03:18:52.960113 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 03:18:52.960219 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 9 03:18:52.960319 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 03:18:52.960424 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 9 03:18:52.960529 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 03:18:52.960637 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 9 03:18:52.960737 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 03:18:52.960849 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Oct 9 03:18:52.961001 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 9 03:18:52.961116 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 03:18:52.961226 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Oct 9 03:18:52.961332 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 9 03:18:52.961431 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 03:18:52.961538 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Oct 9 03:18:52.961649 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 9 03:18:52.961751 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 03:18:52.961761 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 03:18:52.961768 kernel: PCI: CLS 0 bytes, default 64
Oct 9 03:18:52.961779 kernel: Initialise system trusted keyrings
Oct 9 03:18:52.961786 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 03:18:52.961792 kernel: Key type asymmetric registered
Oct 9 03:18:52.961799 kernel: Asymmetric key parser 'x509' registered
Oct 9 03:18:52.961805 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 03:18:52.961812 kernel: io scheduler mq-deadline registered
Oct 9 03:18:52.961819 kernel: io scheduler kyber registered
Oct 9 03:18:52.961825 kernel: io scheduler bfq registered
Oct 9 03:18:52.961983 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 9 03:18:52.962095 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 9 03:18:52.962212 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 9 03:18:52.962333 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 9 03:18:52.962437 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 9 03:18:52.962539 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 9 03:18:52.962641 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 9 03:18:52.962812 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 9 03:18:52.962954 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 9 03:18:52.963081 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 9 03:18:52.963187 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 9 03:18:52.963291 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 9 03:18:52.963405 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 9 03:18:52.963511 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 9 03:18:52.963624 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 9 03:18:52.963730 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 9 03:18:52.963740 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 03:18:52.963842 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 9 03:18:52.964003 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 9 03:18:52.964014 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 03:18:52.964025 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 9 03:18:52.964032 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 03:18:52.964039 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 03:18:52.964046 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 03:18:52.964052 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 03:18:52.964059 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 03:18:52.964066 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 03:18:52.964177 kernel:
rtc_cmos 00:03: RTC can wake from S4 Oct 9 03:18:52.964276 kernel: rtc_cmos 00:03: registered as rtc0 Oct 9 03:18:52.964384 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T03:18:52 UTC (1728443932) Oct 9 03:18:52.964482 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 03:18:52.964492 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 03:18:52.964498 kernel: NET: Registered PF_INET6 protocol family Oct 9 03:18:52.964505 kernel: Segment Routing with IPv6 Oct 9 03:18:52.964515 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 03:18:52.964522 kernel: NET: Registered PF_PACKET protocol family Oct 9 03:18:52.964529 kernel: Key type dns_resolver registered Oct 9 03:18:52.964535 kernel: IPI shorthand broadcast: enabled Oct 9 03:18:52.964542 kernel: sched_clock: Marking stable (1106010878, 132580785)->(1247327854, -8736191) Oct 9 03:18:52.964549 kernel: registered taskstats version 1 Oct 9 03:18:52.964555 kernel: Loading compiled-in X.509 certificates Oct 9 03:18:52.964562 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 03:18:52.964569 kernel: Key type .fscrypt registered Oct 9 03:18:52.964577 kernel: Key type fscrypt-provisioning registered Oct 9 03:18:52.964583 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 03:18:52.964590 kernel: ima: Allocated hash algorithm: sha1 Oct 9 03:18:52.964596 kernel: ima: No architecture policies found Oct 9 03:18:52.964603 kernel: clk: Disabling unused clocks Oct 9 03:18:52.964609 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 03:18:52.964616 kernel: Write protecting the kernel read-only data: 36864k Oct 9 03:18:52.964622 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 03:18:52.964629 kernel: Run /init as init process Oct 9 03:18:52.964637 kernel: with arguments: Oct 9 03:18:52.964644 kernel: /init Oct 9 03:18:52.964651 kernel: with environment: Oct 9 03:18:52.964657 kernel: HOME=/ Oct 9 03:18:52.964663 kernel: TERM=linux Oct 9 03:18:52.964670 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 03:18:52.964678 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 03:18:52.964687 systemd[1]: Detected virtualization kvm. Oct 9 03:18:52.964696 systemd[1]: Detected architecture x86-64. Oct 9 03:18:52.964703 systemd[1]: Running in initrd. Oct 9 03:18:52.964709 systemd[1]: No hostname configured, using default hostname. Oct 9 03:18:52.964716 systemd[1]: Hostname set to . Oct 9 03:18:52.964723 systemd[1]: Initializing machine ID from VM UUID. Oct 9 03:18:52.964730 systemd[1]: Queued start job for default target initrd.target. Oct 9 03:18:52.964737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 03:18:52.964744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 03:18:52.964754 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 9 03:18:52.964761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 03:18:52.964768 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 03:18:52.964775 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 03:18:52.964783 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 03:18:52.964790 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 03:18:52.964797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 03:18:52.964806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 03:18:52.964813 systemd[1]: Reached target paths.target - Path Units. Oct 9 03:18:52.964820 systemd[1]: Reached target slices.target - Slice Units. Oct 9 03:18:52.964826 systemd[1]: Reached target swap.target - Swaps. Oct 9 03:18:52.964833 systemd[1]: Reached target timers.target - Timer Units. Oct 9 03:18:52.964840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 03:18:52.964847 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 03:18:52.964854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 03:18:52.964863 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 03:18:52.964870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 03:18:52.964877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 03:18:52.964884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 03:18:52.964891 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 03:18:52.964924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 9 03:18:52.964932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 03:18:52.964939 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 03:18:52.964946 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 03:18:52.964956 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 03:18:52.964963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 03:18:52.964970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 03:18:52.964977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 03:18:52.965003 systemd-journald[187]: Collecting audit messages is disabled. Oct 9 03:18:52.965023 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 03:18:52.965030 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 03:18:52.965038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 03:18:52.965047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 03:18:52.965054 kernel: Bridge firewalling registered Oct 9 03:18:52.965061 systemd-journald[187]: Journal started Oct 9 03:18:52.965076 systemd-journald[187]: Runtime Journal (/run/log/journal/5e64192b24fd470ea392a5d95dc5d54e) is 4.8M, max 38.4M, 33.6M free. Oct 9 03:18:52.919663 systemd-modules-load[188]: Inserted module 'overlay' Oct 9 03:18:52.952934 systemd-modules-load[188]: Inserted module 'br_netfilter' Oct 9 03:18:52.990926 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 03:18:52.992366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 03:18:52.994345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 9 03:18:53.003053 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 03:18:53.005066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 03:18:53.007252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 03:18:53.012162 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 03:18:53.018078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 03:18:53.024798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 03:18:53.029193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 03:18:53.029824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 03:18:53.036046 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 03:18:53.040032 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 03:18:53.041232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 03:18:53.046075 dracut-cmdline[218]: dracut-dracut-053 Oct 9 03:18:53.048946 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 03:18:53.076313 systemd-resolved[220]: Positive Trust Anchors: Oct 9 03:18:53.076325 systemd-resolved[220]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 03:18:53.076351 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 03:18:53.082885 systemd-resolved[220]: Defaulting to hostname 'linux'. Oct 9 03:18:53.083927 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 03:18:53.084465 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 03:18:53.117923 kernel: SCSI subsystem initialized Oct 9 03:18:53.125945 kernel: Loading iSCSI transport class v2.0-870. Oct 9 03:18:53.135929 kernel: iscsi: registered transport (tcp) Oct 9 03:18:53.154057 kernel: iscsi: registered transport (qla4xxx) Oct 9 03:18:53.154113 kernel: QLogic iSCSI HBA Driver Oct 9 03:18:53.199152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 03:18:53.206035 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 03:18:53.229324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 9 03:18:53.229382 kernel: device-mapper: uevent: version 1.0.3 Oct 9 03:18:53.229397 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 03:18:53.269926 kernel: raid6: avx2x4 gen() 35099 MB/s Oct 9 03:18:53.286922 kernel: raid6: avx2x2 gen() 31413 MB/s Oct 9 03:18:53.304007 kernel: raid6: avx2x1 gen() 26115 MB/s Oct 9 03:18:53.304058 kernel: raid6: using algorithm avx2x4 gen() 35099 MB/s Oct 9 03:18:53.322096 kernel: raid6: .... xor() 4614 MB/s, rmw enabled Oct 9 03:18:53.322126 kernel: raid6: using avx2x2 recovery algorithm Oct 9 03:18:53.340939 kernel: xor: automatically using best checksumming function avx Oct 9 03:18:53.466940 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 03:18:53.478807 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 03:18:53.488055 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 03:18:53.499184 systemd-udevd[405]: Using default interface naming scheme 'v255'. Oct 9 03:18:53.503048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 03:18:53.511062 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 03:18:53.523121 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Oct 9 03:18:53.552943 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 03:18:53.558025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 03:18:53.620200 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 03:18:53.631927 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 03:18:53.640891 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 03:18:53.643631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 9 03:18:53.645698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 03:18:53.646734 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 03:18:53.654067 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 03:18:53.664283 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 03:18:53.764944 kernel: ACPI: bus type USB registered Oct 9 03:18:53.764997 kernel: scsi host0: Virtio SCSI HBA Oct 9 03:18:53.769281 kernel: usbcore: registered new interface driver usbfs Oct 9 03:18:53.769344 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 9 03:18:53.772378 kernel: usbcore: registered new interface driver hub Oct 9 03:18:53.772403 kernel: usbcore: registered new device driver usb Oct 9 03:18:53.772699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 03:18:53.772827 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 03:18:53.775520 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 03:18:53.779171 kernel: libata version 3.00 loaded. Oct 9 03:18:53.779194 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 03:18:53.776194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 03:18:53.776753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 03:18:53.778520 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 03:18:53.785153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 03:18:53.811175 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 9 03:18:53.811216 kernel: AES CTR mode by8 optimization enabled Oct 9 03:18:53.827932 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 03:18:53.840936 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 03:18:53.840965 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 03:18:53.841123 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 03:18:53.855415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 03:18:53.865340 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 9 03:18:53.865553 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 9 03:18:53.865723 kernel: scsi host1: ahci Oct 9 03:18:53.868737 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 9 03:18:53.870923 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 9 03:18:53.871078 kernel: scsi host2: ahci Oct 9 03:18:53.871401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 03:18:53.877606 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 9 03:18:53.877778 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 9 03:18:53.877937 kernel: hub 1-0:1.0: USB hub found Oct 9 03:18:53.878095 kernel: scsi host3: ahci Oct 9 03:18:53.878226 kernel: hub 1-0:1.0: 4 ports detected Oct 9 03:18:53.881101 kernel: scsi host4: ahci Oct 9 03:18:53.881167 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Oct 9 03:18:53.882105 kernel: scsi host5: ahci Oct 9 03:18:53.882923 kernel: scsi host6: ahci Oct 9 03:18:53.884531 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Oct 9 03:18:53.884554 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Oct 9 03:18:53.887320 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Oct 9 03:18:53.887342 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Oct 9 03:18:53.889211 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Oct 9 03:18:53.892219 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Oct 9 03:18:53.894487 kernel: hub 2-0:1.0: USB hub found Oct 9 03:18:53.894693 kernel: hub 2-0:1.0: 4 ports detected Oct 9 03:18:53.899549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 03:18:54.117955 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 9 03:18:54.206727 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 9 03:18:54.206803 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 03:18:54.206832 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 03:18:54.206847 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 9 03:18:54.206858 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 03:18:54.206868 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 03:18:54.207935 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 9 03:18:54.210225 kernel: ata1.00: applying bridge limits Oct 9 03:18:54.211352 kernel: ata1.00: configured for UDMA/100 Oct 9 03:18:54.212091 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 03:18:54.239013 kernel: sd 0:0:0:0: Power-on or device reset occurred Oct 9 03:18:54.241133 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 9 03:18:54.241385 kernel: sd 0:0:0:0: [sda] Write 
Protect is off Oct 9 03:18:54.242303 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Oct 9 03:18:54.242490 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 9 03:18:54.248990 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 03:18:54.249018 kernel: GPT:17805311 != 80003071 Oct 9 03:18:54.249030 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 03:18:54.253167 kernel: GPT:17805311 != 80003071 Oct 9 03:18:54.254917 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 03:18:54.254943 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 03:18:54.259475 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 9 03:18:54.263949 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 9 03:18:54.269080 kernel: usbcore: registered new interface driver usbhid Oct 9 03:18:54.269109 kernel: usbhid: USB HID core driver Oct 9 03:18:54.271933 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 9 03:18:54.272129 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 03:18:54.272146 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 9 03:18:54.276478 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 9 03:18:54.286072 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 9 03:18:54.298441 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 9 03:18:54.301374 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (449) Oct 9 03:18:54.301397 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (470) Oct 9 03:18:54.310298 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Oct 9 03:18:54.318056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 9 03:18:54.323325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 9 03:18:54.324576 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 9 03:18:54.330253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 03:18:54.337590 disk-uuid[574]: Primary Header is updated. Oct 9 03:18:54.337590 disk-uuid[574]: Secondary Entries is updated. Oct 9 03:18:54.337590 disk-uuid[574]: Secondary Header is updated. Oct 9 03:18:54.342935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 03:18:54.350931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 03:18:54.356965 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 03:18:55.358357 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 03:18:55.358421 disk-uuid[576]: The operation has completed successfully. Oct 9 03:18:55.412820 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 03:18:55.412993 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 03:18:55.427029 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 03:18:55.432375 sh[596]: Success Oct 9 03:18:55.446970 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 9 03:18:55.509666 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 03:18:55.512722 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 03:18:55.514155 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 9 03:18:55.531886 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 03:18:55.531950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 03:18:55.531974 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 03:18:55.534022 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 03:18:55.536015 kernel: BTRFS info (device dm-0): using free space tree Oct 9 03:18:55.543937 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 9 03:18:55.546050 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 03:18:55.547415 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 03:18:55.553071 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 03:18:55.555057 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 03:18:55.568929 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 03:18:55.572371 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 03:18:55.572396 kernel: BTRFS info (device sda6): using free space tree Oct 9 03:18:55.579949 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 9 03:18:55.579981 kernel: BTRFS info (device sda6): auto enabling async discard Oct 9 03:18:55.592219 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 03:18:55.593068 kernel: BTRFS info (device sda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 03:18:55.598524 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 03:18:55.606085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 9 03:18:55.664219 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 03:18:55.673130 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 03:18:55.690742 ignition[700]: Ignition 2.19.0 Oct 9 03:18:55.691895 ignition[700]: Stage: fetch-offline Oct 9 03:18:55.691954 ignition[700]: no configs at "/usr/lib/ignition/base.d" Oct 9 03:18:55.691965 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 03:18:55.692046 ignition[700]: parsed url from cmdline: "" Oct 9 03:18:55.692050 ignition[700]: no config URL provided Oct 9 03:18:55.692055 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 03:18:55.692063 ignition[700]: no config at "/usr/lib/ignition/user.ign" Oct 9 03:18:55.692068 ignition[700]: failed to fetch config: resource requires networking Oct 9 03:18:55.692219 ignition[700]: Ignition finished successfully Oct 9 03:18:55.695710 systemd-networkd[777]: lo: Link UP Oct 9 03:18:55.695715 systemd-networkd[777]: lo: Gained carrier Oct 9 03:18:55.698662 systemd-networkd[777]: Enumeration completed Oct 9 03:18:55.698745 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 03:18:55.699528 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 03:18:55.700668 systemd[1]: Reached target network.target - Network. Oct 9 03:18:55.702014 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 03:18:55.702018 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 03:18:55.703989 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 03:18:55.703993 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 9 03:18:55.705968 systemd-networkd[777]: eth0: Link UP Oct 9 03:18:55.705972 systemd-networkd[777]: eth0: Gained carrier Oct 9 03:18:55.705979 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 03:18:55.708062 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 9 03:18:55.710155 systemd-networkd[777]: eth1: Link UP Oct 9 03:18:55.710166 systemd-networkd[777]: eth1: Gained carrier Oct 9 03:18:55.710172 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 03:18:55.721613 ignition[785]: Ignition 2.19.0 Oct 9 03:18:55.721624 ignition[785]: Stage: fetch Oct 9 03:18:55.721770 ignition[785]: no configs at "/usr/lib/ignition/base.d" Oct 9 03:18:55.721781 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 03:18:55.721868 ignition[785]: parsed url from cmdline: "" Oct 9 03:18:55.721872 ignition[785]: no config URL provided Oct 9 03:18:55.721877 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 03:18:55.721885 ignition[785]: no config at "/usr/lib/ignition/user.ign" Oct 9 03:18:55.721922 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 9 03:18:55.722103 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 9 03:18:55.749980 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 03:18:55.875966 systemd-networkd[777]: eth0: DHCPv4 address 188.245.48.63/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 9 03:18:55.922298 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 9 03:18:55.925495 ignition[785]: GET result: OK Oct 9 03:18:55.925564 ignition[785]: parsing config with SHA512: 
b4d51ff780218f9f9a671105eb2f4139fea3363d5d64ed3ab92316e19b31c12df8677e37bb83602f02376f0ec496803b85868cad172a5a4ae896d589636fda29 Oct 9 03:18:55.929257 unknown[785]: fetched base config from "system" Oct 9 03:18:55.929269 unknown[785]: fetched base config from "system" Oct 9 03:18:55.929566 ignition[785]: fetch: fetch complete Oct 9 03:18:55.929276 unknown[785]: fetched user config from "hetzner" Oct 9 03:18:55.929572 ignition[785]: fetch: fetch passed Oct 9 03:18:55.929614 ignition[785]: Ignition finished successfully Oct 9 03:18:55.932502 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 03:18:55.938051 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 03:18:55.951404 ignition[792]: Ignition 2.19.0 Oct 9 03:18:55.951419 ignition[792]: Stage: kargs Oct 9 03:18:55.951597 ignition[792]: no configs at "/usr/lib/ignition/base.d" Oct 9 03:18:55.951608 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 03:18:55.953962 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 03:18:55.952281 ignition[792]: kargs: kargs passed Oct 9 03:18:55.952322 ignition[792]: Ignition finished successfully Oct 9 03:18:55.961134 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 03:18:55.975209 ignition[799]: Ignition 2.19.0 Oct 9 03:18:55.975229 ignition[799]: Stage: disks Oct 9 03:18:55.975414 ignition[799]: no configs at "/usr/lib/ignition/base.d" Oct 9 03:18:55.975427 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 03:18:55.976365 ignition[799]: disks: disks passed Oct 9 03:18:55.976412 ignition[799]: Ignition finished successfully Oct 9 03:18:55.978636 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 03:18:55.979701 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 03:18:55.980494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Oct 9 03:18:55.981553 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 03:18:55.982624 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 03:18:55.983536 systemd[1]: Reached target basic.target - Basic System.
Oct 9 03:18:55.990022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 03:18:56.004142 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 9 03:18:56.006545 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 03:18:56.013032 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 03:18:56.095947 kernel: EXT4-fs (sda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 03:18:56.096433 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 03:18:56.097474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 03:18:56.108979 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 03:18:56.112196 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 03:18:56.114139 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 03:18:56.115527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 03:18:56.115553 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 03:18:56.123279 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (816)
Oct 9 03:18:56.123311 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 03:18:56.124967 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 03:18:56.127116 kernel: BTRFS info (device sda6): using free space tree
Oct 9 03:18:56.130736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 03:18:56.134552 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 03:18:56.134584 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 03:18:56.133557 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 03:18:56.141075 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 03:18:56.183457 coreos-metadata[818]: Oct 09 03:18:56.183 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 9 03:18:56.184963 coreos-metadata[818]: Oct 09 03:18:56.184 INFO Fetch successful
Oct 9 03:18:56.186949 coreos-metadata[818]: Oct 09 03:18:56.186 INFO wrote hostname ci-4116-0-0-d-cd8c2d08d9 to /sysroot/etc/hostname
Oct 9 03:18:56.188966 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 03:18:56.190327 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 03:18:56.195938 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Oct 9 03:18:56.202215 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 03:18:56.206942 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 03:18:56.310359 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 03:18:56.323093 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 03:18:56.326127 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 03:18:56.332950 kernel: BTRFS info (device sda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 03:18:56.357945 ignition[935]: INFO : Ignition 2.19.0
Oct 9 03:18:56.357945 ignition[935]: INFO : Stage: mount
Oct 9 03:18:56.357945 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 03:18:56.357945 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 03:18:56.361507 ignition[935]: INFO : mount: mount passed
Oct 9 03:18:56.361507 ignition[935]: INFO : Ignition finished successfully
Oct 9 03:18:56.361595 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 03:18:56.363184 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 03:18:56.369048 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 03:18:56.529072 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 03:18:56.532072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 03:18:56.543929 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (948)
Oct 9 03:18:56.547766 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 03:18:56.547788 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 03:18:56.547798 kernel: BTRFS info (device sda6): using free space tree
Oct 9 03:18:56.553601 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 03:18:56.553622 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 03:18:56.555800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 03:18:56.578433 ignition[964]: INFO : Ignition 2.19.0
Oct 9 03:18:56.579162 ignition[964]: INFO : Stage: files
Oct 9 03:18:56.579590 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 03:18:56.579590 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 03:18:56.580930 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 03:18:56.581510 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 03:18:56.581510 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 03:18:56.584261 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 03:18:56.584899 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 03:18:56.585698 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 03:18:56.584941 unknown[964]: wrote ssh authorized keys file for user: core
Oct 9 03:18:56.587095 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 03:18:56.587095 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 03:18:56.700753 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 03:18:56.922318 systemd-networkd[777]: eth1: Gained IPv6LL
Oct 9 03:18:57.020253 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 03:18:57.021651 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 03:18:57.030539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 03:18:57.030539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 03:18:57.030539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 03:18:57.030539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 03:18:57.030539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 03:18:57.050068 systemd-networkd[777]: eth0: Gained IPv6LL
Oct 9 03:18:57.651305 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 03:18:58.729914 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 03:18:58.731659 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 03:18:58.731659 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 03:18:58.731659 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 03:18:58.731659 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 03:18:58.731659 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 03:18:58.731659 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 03:18:58.741149 ignition[964]: INFO : files: files passed
Oct 9 03:18:58.741149 ignition[964]: INFO : Ignition finished successfully
Oct 9 03:18:58.736041 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 03:18:58.744067 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 03:18:58.750292 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 03:18:58.752424 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 03:18:58.752555 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 03:18:58.776224 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 03:18:58.776224 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 03:18:58.779195 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 03:18:58.781066 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 03:18:58.782293 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 03:18:58.788061 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 03:18:58.813003 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 03:18:58.813127 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 03:18:58.814671 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 03:18:58.815472 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 03:18:58.816529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 03:18:58.818096 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 03:18:58.838394 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 03:18:58.847129 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 03:18:58.855373 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 03:18:58.856566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 03:18:58.857179 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 03:18:58.858182 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 03:18:58.858290 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 03:18:58.859499 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 03:18:58.860172 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 03:18:58.861128 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 03:18:58.862025 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 03:18:58.862960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 03:18:58.863974 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 03:18:58.864987 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 03:18:58.866048 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 03:18:58.867038 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 03:18:58.868064 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 03:18:58.869012 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 03:18:58.869116 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 03:18:58.870249 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 03:18:58.870933 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 03:18:58.871820 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 03:18:58.871944 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 03:18:58.872944 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 03:18:58.873041 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 03:18:58.874423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 03:18:58.874530 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 03:18:58.875213 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 03:18:58.875348 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 03:18:58.876115 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 03:18:58.876244 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 03:18:58.883353 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 03:18:58.883815 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 03:18:58.884000 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 03:18:58.887056 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 03:18:58.887491 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 03:18:58.887591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 03:18:58.888145 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 03:18:58.888234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 03:18:58.893250 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 03:18:58.893362 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 03:18:58.903622 ignition[1018]: INFO : Ignition 2.19.0
Oct 9 03:18:58.903622 ignition[1018]: INFO : Stage: umount
Oct 9 03:18:58.905952 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 03:18:58.905952 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 03:18:58.905952 ignition[1018]: INFO : umount: umount passed
Oct 9 03:18:58.905952 ignition[1018]: INFO : Ignition finished successfully
Oct 9 03:18:58.908087 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 03:18:58.908219 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 03:18:58.910684 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 03:18:58.910743 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 03:18:58.913094 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 03:18:58.913146 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 03:18:58.913850 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 03:18:58.913894 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 03:18:58.914392 systemd[1]: Stopped target network.target - Network.
Oct 9 03:18:58.916966 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 03:18:58.917018 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 03:18:58.917515 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 03:18:58.918060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 03:18:58.926046 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 03:18:58.927024 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 03:18:58.930648 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 03:18:58.932976 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 03:18:58.933046 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 03:18:58.934770 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 03:18:58.934832 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 03:18:58.939711 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 03:18:58.939774 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 03:18:58.940600 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 03:18:58.940646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 03:18:58.941632 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 03:18:58.942683 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 03:18:58.944495 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 03:18:58.948022 systemd-networkd[777]: eth1: DHCPv6 lease lost
Oct 9 03:18:58.948402 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 03:18:58.948536 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 03:18:58.951447 systemd-networkd[777]: eth0: DHCPv6 lease lost
Oct 9 03:18:58.955401 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 03:18:58.955536 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 03:18:58.956408 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 03:18:58.956509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 03:18:58.958328 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 03:18:58.958396 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 03:18:58.959106 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 03:18:58.959154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 03:18:58.964030 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 03:18:58.964468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 03:18:58.964538 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 03:18:58.965112 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 03:18:58.965160 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 03:18:58.965650 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 03:18:58.965695 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 03:18:58.966158 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 03:18:58.966200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 03:18:58.969052 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 03:18:58.981387 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 03:18:58.982071 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 03:18:58.983570 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 03:18:58.983674 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 03:18:58.985417 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 03:18:58.985477 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 03:18:58.986599 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 03:18:58.986648 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 03:18:58.987584 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 03:18:58.987634 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 03:18:58.989033 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 03:18:58.989080 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 03:18:58.989993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 03:18:58.990053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 03:19:59.001070 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 03:18:59.001541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 03:18:59.001619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 03:18:59.002181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 03:18:59.002232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 03:18:59.009330 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 03:18:59.009448 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 03:18:59.010968 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 03:18:59.017100 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 03:18:59.023488 systemd[1]: Switching root.
Oct 9 03:18:59.055430 systemd-journald[187]: Journal stopped
Oct 9 03:19:00.053898 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Oct 9 03:19:00.058464 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 03:19:00.058492 kernel: SELinux: policy capability open_perms=1
Oct 9 03:19:00.058504 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 03:19:00.058513 kernel: SELinux: policy capability always_check_network=0
Oct 9 03:19:00.058522 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 03:19:00.058532 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 03:19:00.058542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 03:19:00.058554 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 03:19:00.058571 kernel: audit: type=1403 audit(1728443939.197:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 03:19:00.058591 systemd[1]: Successfully loaded SELinux policy in 55.467ms.
Oct 9 03:19:00.058611 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.129ms.
Oct 9 03:19:00.058623 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 03:19:00.058633 systemd[1]: Detected virtualization kvm.
Oct 9 03:19:00.058644 systemd[1]: Detected architecture x86-64.
Oct 9 03:19:00.058662 systemd[1]: Detected first boot.
Oct 9 03:19:00.058678 systemd[1]: Hostname set to .
Oct 9 03:19:00.058722 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 03:19:00.058741 zram_generator::config[1060]: No configuration found.
Oct 9 03:19:00.058768 systemd[1]: Populated /etc with preset unit settings.
Oct 9 03:19:00.058780 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 03:19:00.058790 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 03:19:00.058800 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 03:19:00.058811 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 03:19:00.058826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 03:19:00.058841 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 03:19:00.058862 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 03:19:00.058874 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 03:19:00.058885 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 03:19:00.058895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 03:19:00.058925 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 03:19:00.058943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 03:19:00.058961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 03:19:00.058973 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 03:19:00.058985 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 03:19:00.059004 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 03:19:00.059015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 03:19:00.059070 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 03:19:00.059087 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 03:19:00.059108 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 03:19:00.059135 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 03:19:00.059161 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 03:19:00.059181 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 03:19:00.059200 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 03:19:00.059217 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 03:19:00.059233 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 03:19:00.059249 systemd[1]: Reached target swap.target - Swaps.
Oct 9 03:19:00.059265 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 03:19:00.059281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 03:19:00.059296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 03:19:00.059318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 03:19:00.059336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 03:19:00.059353 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 03:19:00.059372 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 03:19:00.059389 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 03:19:00.059406 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 03:19:00.059425 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:00.059451 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 03:19:00.059473 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 03:19:00.059486 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 03:19:00.059497 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 03:19:00.059508 systemd[1]: Reached target machines.target - Containers.
Oct 9 03:19:00.059518 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 03:19:00.059529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 03:19:00.059541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 03:19:00.059551 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 03:19:00.059561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 03:19:00.059572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 03:19:00.059582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 03:19:00.059592 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 03:19:00.059603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 03:19:00.059621 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 03:19:00.059639 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 03:19:00.059655 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 03:19:00.059668 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 03:19:00.059678 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 03:19:00.059691 kernel: loop: module loaded
Oct 9 03:19:00.059708 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 03:19:00.059726 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 03:19:00.059738 kernel: fuse: init (API version 7.39)
Oct 9 03:19:00.059749 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 03:19:00.059760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 03:19:00.059774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 03:19:00.059808 systemd-journald[1140]: Collecting audit messages is disabled.
Oct 9 03:19:00.059840 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 03:19:00.059860 systemd[1]: Stopped verity-setup.service.
Oct 9 03:19:00.059877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:00.059893 systemd-journald[1140]: Journal started
Oct 9 03:19:00.060166 systemd-journald[1140]: Runtime Journal (/run/log/journal/5e64192b24fd470ea392a5d95dc5d54e) is 4.8M, max 38.4M, 33.6M free.
Oct 9 03:19:00.067126 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 03:18:59.804774 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 03:18:59.823010 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 9 03:18:59.823440 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 03:19:00.075081 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 03:19:00.075131 kernel: ACPI: bus type drm_connector registered
Oct 9 03:19:00.072153 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 03:19:00.073503 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 03:19:00.075501 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 03:19:00.076209 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 03:19:00.076863 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 03:19:00.077621 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 03:19:00.078575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 03:19:00.079518 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 03:19:00.079739 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 03:19:00.080575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 03:19:00.080789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 03:19:00.081971 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 03:19:00.082190 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 03:19:00.083005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 03:19:00.083208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 03:19:00.084112 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 03:19:00.084321 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 03:19:00.085118 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 03:19:00.085359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 03:19:00.086403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 03:19:00.087262 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 03:19:00.088106 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 03:19:00.101798 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 03:19:00.108850 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 03:19:00.114372 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 03:19:00.114981 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 03:19:00.115066 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 03:19:00.116359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 03:19:00.125710 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 03:19:00.130021 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 03:19:00.130611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 03:19:00.136040 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 03:19:00.142029 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 03:19:00.142558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 03:19:00.151064 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 03:19:00.151664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 03:19:00.156207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 03:19:00.161034 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 03:19:00.166040 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 03:19:00.169510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 03:19:00.170137 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 03:19:00.171193 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 03:19:00.186154 systemd-journald[1140]: Time spent on flushing to /var/log/journal/5e64192b24fd470ea392a5d95dc5d54e is 53.626ms for 1134 entries.
Oct 9 03:19:00.186154 systemd-journald[1140]: System Journal (/var/log/journal/5e64192b24fd470ea392a5d95dc5d54e) is 8.0M, max 584.8M, 576.8M free.
Oct 9 03:19:00.275735 systemd-journald[1140]: Received client request to flush runtime journal.
Oct 9 03:19:00.277578 kernel: loop0: detected capacity change from 0 to 8
Oct 9 03:19:00.277613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 03:19:00.196291 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 03:19:00.196873 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 03:19:00.203178 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 03:19:00.242436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 03:19:00.244824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 03:19:00.256090 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 03:19:00.281653 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 03:19:00.290135 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 03:19:00.296056 kernel: loop1: detected capacity change from 0 to 211296
Oct 9 03:19:00.295169 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 03:19:00.296362 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 03:19:00.304337 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 03:19:00.312569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 03:19:00.339179 kernel: loop2: detected capacity change from 0 to 138192
Oct 9 03:19:00.349996 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Oct 9 03:19:00.350365 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Oct 9 03:19:00.356670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 03:19:00.385923 kernel: loop3: detected capacity change from 0 to 140992
Oct 9 03:19:00.427944 kernel: loop4: detected capacity change from 0 to 8
Oct 9 03:19:00.435933 kernel: loop5: detected capacity change from 0 to 211296
Oct 9 03:19:00.460993 kernel: loop6: detected capacity change from 0 to 138192
Oct 9 03:19:00.482932 kernel: loop7: detected capacity change from 0 to 140992
Oct 9 03:19:00.503276 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 9 03:19:00.504366 (sd-merge)[1205]: Merged extensions into '/usr'.
Oct 9 03:19:00.510442 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 03:19:00.510541 systemd[1]: Reloading...
Oct 9 03:19:00.623579 zram_generator::config[1234]: No configuration found.
Oct 9 03:19:00.623658 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 03:19:00.722262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 03:19:00.763418 systemd[1]: Reloading finished in 252 ms.
Oct 9 03:19:00.793603 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 03:19:00.794602 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 03:19:00.795355 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 03:19:00.804083 systemd[1]: Starting ensure-sysext.service...
Oct 9 03:19:00.805672 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 03:19:00.808075 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 03:19:00.816976 systemd[1]: Reloading requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)...
Oct 9 03:19:00.816988 systemd[1]: Reloading...
Oct 9 03:19:00.832770 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 03:19:00.834494 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 03:19:00.835386 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 03:19:00.835676 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Oct 9 03:19:00.835746 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Oct 9 03:19:00.843282 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 03:19:00.843292 systemd-tmpfiles[1276]: Skipping /boot
Oct 9 03:19:00.857192 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 03:19:00.857203 systemd-tmpfiles[1276]: Skipping /boot
Oct 9 03:19:00.860425 systemd-udevd[1277]: Using default interface naming scheme 'v255'.
Oct 9 03:19:00.906949 zram_generator::config[1308]: No configuration found.
Oct 9 03:19:00.976197 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1346)
Oct 9 03:19:00.986954 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1346)
Oct 9 03:19:01.037188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 03:19:01.075949 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 9 03:19:01.078921 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1332)
Oct 9 03:19:01.086313 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 03:19:01.086994 systemd[1]: Reloading finished in 269 ms.
Oct 9 03:19:01.087945 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 03:19:01.091918 kernel: ACPI: button: Power Button [PWRF]
Oct 9 03:19:01.100979 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 03:19:01.101750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 03:19:01.137215 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 03:19:01.140964 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 03:19:01.143429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 03:19:01.147642 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 03:19:01.153499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 03:19:01.163759 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 03:19:01.175447 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Oct 9 03:19:01.180558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:01.180712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 03:19:01.185410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 03:19:01.189112 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 03:19:01.193084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 03:19:01.193626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 03:19:01.206177 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 03:19:01.208678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:01.213118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:01.213311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 03:19:01.213499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 03:19:01.213621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:01.217870 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:01.220136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 03:19:01.222461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 03:19:01.224107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 03:19:01.224258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 03:19:01.227349 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 03:19:01.231549 systemd[1]: Finished ensure-sysext.service.
Oct 9 03:19:01.234450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 03:19:01.234827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 03:19:01.247928 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 9 03:19:01.246218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 03:19:01.246378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 03:19:01.249848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 9 03:19:01.261071 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 03:19:01.261853 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 03:19:01.265954 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 03:19:01.272209 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 03:19:01.272619 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 03:19:01.270037 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 03:19:01.281787 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 03:19:01.282001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 03:19:01.284269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 03:19:01.302780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 03:19:01.303995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 03:19:01.314647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 03:19:01.316444 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 03:19:01.324446 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 03:19:01.343655 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 03:19:01.344999 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 03:19:01.349434 kernel: EDAC MC: Ver: 3.0.0
Oct 9 03:19:01.351918 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct 9 03:19:01.354484 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct 9 03:19:01.357668 augenrules[1425]: No rules
Oct 9 03:19:01.358163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 03:19:01.359007 kernel: Console: switching to colour dummy device 80x25
Oct 9 03:19:01.363247 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 03:19:01.363287 kernel: [drm] features: -context_init
Oct 9 03:19:01.363304 kernel: [drm] number of scanouts: 1
Oct 9 03:19:01.362620 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 03:19:01.362818 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 03:19:01.363927 kernel: [drm] number of cap sets: 0
Oct 9 03:19:01.364968 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 9 03:19:01.366082 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 03:19:01.374066 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 03:19:01.374133 kernel: Console: switching to colour frame buffer device 160x50
Oct 9 03:19:01.379029 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 03:19:01.383924 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 03:19:01.404954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 03:19:01.405980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 03:19:01.421176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 03:19:01.444596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 03:19:01.444999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 03:19:01.455047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 03:19:01.481748 systemd-networkd[1381]: lo: Link UP
Oct 9 03:19:01.481755 systemd-networkd[1381]: lo: Gained carrier
Oct 9 03:19:01.488216 systemd-networkd[1381]: Enumeration completed
Oct 9 03:19:01.488611 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 03:19:01.489180 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 03:19:01.489184 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 03:19:01.494631 systemd-networkd[1381]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 03:19:01.494638 systemd-networkd[1381]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 03:19:01.495186 systemd-networkd[1381]: eth0: Link UP
Oct 9 03:19:01.495190 systemd-networkd[1381]: eth0: Gained carrier
Oct 9 03:19:01.495201 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 03:19:01.500051 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 03:19:01.500191 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 03:19:01.500518 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 03:19:01.500873 systemd-networkd[1381]: eth1: Link UP
Oct 9 03:19:01.500880 systemd-networkd[1381]: eth1: Gained carrier
Oct 9 03:19:01.500892 systemd-networkd[1381]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 03:19:01.501281 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 03:19:01.503176 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 03:19:01.510723 systemd-resolved[1382]: Positive Trust Anchors:
Oct 9 03:19:01.511014 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 03:19:01.511086 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 03:19:01.514585 systemd-resolved[1382]: Using system hostname 'ci-4116-0-0-d-cd8c2d08d9'.
Oct 9 03:19:01.516985 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 03:19:01.518517 systemd[1]: Reached target network.target - Network.
Oct 9 03:19:01.518578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 03:19:01.527893 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 03:19:01.550955 systemd-networkd[1381]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 03:19:01.553074 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 03:19:01.553127 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Oct 9 03:19:01.555224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 03:19:01.560127 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 03:19:01.563634 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 03:19:01.564411 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 03:19:01.564651 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 03:19:01.564761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 03:19:01.565182 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 03:19:01.567945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 03:19:01.568403 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 03:19:01.568527 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 03:19:01.569057 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 03:19:01.569088 systemd[1]: Reached target paths.target - Path Units.
Oct 9 03:19:01.569561 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 03:19:01.576425 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 03:19:01.578952 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 03:19:01.586411 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 03:19:01.588820 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 03:19:01.589395 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 03:19:01.589809 systemd[1]: Reached target basic.target - Basic System.
Oct 9 03:19:01.590349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 03:19:01.590374 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 03:19:01.592749 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 03:19:01.598870 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 03:19:01.611364 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 03:19:01.614020 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 03:19:01.619982 systemd-networkd[1381]: eth0: DHCPv4 address 188.245.48.63/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 9 03:19:01.622283 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Oct 9 03:19:01.625404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 03:19:01.628565 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 03:19:01.631083 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 03:19:01.637084 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 03:19:01.641052 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Oct 9 03:19:01.643590 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 03:19:01.651081 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 03:19:01.658080 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 03:19:01.659437 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 03:19:01.659872 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 03:19:01.660210 dbus-daemon[1462]: [system] SELinux support is enabled
Oct 9 03:19:01.660602 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 03:19:01.665656 jq[1465]: false
Oct 9 03:19:01.670987 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 03:19:01.672572 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 03:19:01.673382 coreos-metadata[1461]: Oct 09 03:19:01.673 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 9 03:19:01.675080 coreos-metadata[1461]: Oct 09 03:19:01.674 INFO Fetch successful
Oct 9 03:19:01.675080 coreos-metadata[1461]: Oct 09 03:19:01.674 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 9 03:19:01.675332 coreos-metadata[1461]: Oct 09 03:19:01.675 INFO Fetch successful
Oct 9 03:19:01.675961 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 03:19:01.684141 extend-filesystems[1466]: Found loop4
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found loop5
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found loop6
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found loop7
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda1
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda2
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda3
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found usr
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda4
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda6
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda7
Oct 9 03:19:01.695350 extend-filesystems[1466]: Found sda9
Oct 9 03:19:01.695350 extend-filesystems[1466]: Checking size of /dev/sda9
Oct 9 03:19:01.687380 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 03:19:01.687571 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 03:19:01.687880 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 03:19:01.689219 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 03:19:01.731155 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 03:19:01.731186 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 03:19:01.732031 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 03:19:01.736282 jq[1481]: true
Oct 9 03:19:01.732054 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 03:19:01.738878 extend-filesystems[1466]: Resized partition /dev/sda9
Oct 9 03:19:01.745021 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024)
Oct 9 03:19:01.751062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 03:19:01.751303 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 03:19:01.760045 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 9 03:19:01.767966 update_engine[1477]: I20241009 03:19:01.766717 1477 main.cc:92] Flatcar Update Engine starting
Oct 9 03:19:01.771814 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 03:19:01.774457 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 03:19:01.777054 update_engine[1477]: I20241009 03:19:01.776794 1477 update_check_scheduler.cc:74] Next update check in 8m42s
Oct 9 03:19:01.785100 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 03:19:01.794469 tar[1485]: linux-amd64/helm
Oct 9 03:19:01.795202 jq[1506]: true
Oct 9 03:19:01.864099 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1324)
Oct 9 03:19:01.874564 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 03:19:01.876935 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 03:19:01.887491 systemd-logind[1475]: New seat seat0.
Oct 9 03:19:01.909224 systemd-logind[1475]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 9 03:19:01.909247 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 03:19:01.909433 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 03:19:01.940936 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 9 03:19:01.959933 extend-filesystems[1501]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 9 03:19:01.959933 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 9 03:19:01.959933 extend-filesystems[1501]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 9 03:19:01.971420 extend-filesystems[1466]: Resized filesystem in /dev/sda9
Oct 9 03:19:01.971420 extend-filesystems[1466]: Found sr0
Oct 9 03:19:01.963431 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 03:19:01.991493 bash[1531]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 03:19:01.968762 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 03:19:01.969967 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 03:19:01.996187 systemd[1]: Starting sshkeys.service...
Oct 9 03:19:02.045255 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 03:19:02.057238 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 03:19:02.098685 coreos-metadata[1541]: Oct 09 03:19:02.098 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 9 03:19:02.101665 coreos-metadata[1541]: Oct 09 03:19:02.101 INFO Fetch successful
Oct 9 03:19:02.104871 unknown[1541]: wrote ssh authorized keys file for user: core
Oct 9 03:19:02.128981 containerd[1494]: time="2024-10-09T03:19:02.128711065Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 03:19:02.141219 update-ssh-keys[1549]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 03:19:02.143999 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 03:19:02.149369 systemd[1]: Finished sshkeys.service.
Oct 9 03:19:02.158960 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 03:19:02.183914 containerd[1494]: time="2024-10-09T03:19:02.182390577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.191242 containerd[1494]: time="2024-10-09T03:19:02.191199288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 03:19:02.191368 containerd[1494]: time="2024-10-09T03:19:02.191346014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 03:19:02.191468 containerd[1494]: time="2024-10-09T03:19:02.191444799Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 03:19:02.191762 containerd[1494]: time="2024-10-09T03:19:02.191740574Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 03:19:02.191837 containerd[1494]: time="2024-10-09T03:19:02.191823019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.192003 containerd[1494]: time="2024-10-09T03:19:02.191984542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 03:19:02.192948 containerd[1494]: time="2024-10-09T03:19:02.192932000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.193187 containerd[1494]: time="2024-10-09T03:19:02.193168253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 03:19:02.193239 containerd[1494]: time="2024-10-09T03:19:02.193226744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.193286 containerd[1494]: time="2024-10-09T03:19:02.193274212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 03:19:02.193340 containerd[1494]: time="2024-10-09T03:19:02.193327923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.193475 containerd[1494]: time="2024-10-09T03:19:02.193460161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.195527 containerd[1494]: time="2024-10-09T03:19:02.195202572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 03:19:02.195527 containerd[1494]: time="2024-10-09T03:19:02.195336733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 03:19:02.195527 containerd[1494]: time="2024-10-09T03:19:02.195349627Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 03:19:02.195527 containerd[1494]: time="2024-10-09T03:19:02.195442873Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 03:19:02.195527 containerd[1494]: time="2024-10-09T03:19:02.195495671Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 03:19:02.199549 containerd[1494]: time="2024-10-09T03:19:02.199519954Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 03:19:02.199587 containerd[1494]: time="2024-10-09T03:19:02.199575468Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 03:19:02.199619 containerd[1494]: time="2024-10-09T03:19:02.199601076Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 03:19:02.199749 containerd[1494]: time="2024-10-09T03:19:02.199725510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 03:19:02.199770 containerd[1494]: time="2024-10-09T03:19:02.199758452Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 03:19:02.199965 containerd[1494]: time="2024-10-09T03:19:02.199946054Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 03:19:02.200148 containerd[1494]: time="2024-10-09T03:19:02.200127214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 03:19:02.200255 containerd[1494]: time="2024-10-09T03:19:02.200236198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 03:19:02.200280 containerd[1494]: time="2024-10-09T03:19:02.200257207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 03:19:02.200280 containerd[1494]: time="2024-10-09T03:19:02.200271044Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 03:19:02.200320 containerd[1494]: time="2024-10-09T03:19:02.200282214Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200320 containerd[1494]: time="2024-10-09T03:19:02.200292885Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200320 containerd[1494]: time="2024-10-09T03:19:02.200303044Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200320 containerd[1494]: time="2024-10-09T03:19:02.200315337Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200377 containerd[1494]: time="2024-10-09T03:19:02.200327299Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200377 containerd[1494]: time="2024-10-09T03:19:02.200337709Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200377 containerd[1494]: time="2024-10-09T03:19:02.200348138Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200377 containerd[1494]: time="2024-10-09T03:19:02.200357355Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 03:19:02.200377 containerd[1494]: time="2024-10-09T03:19:02.200375299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200450 containerd[1494]: time="2024-10-09T03:19:02.200387432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200450 containerd[1494]: time="2024-10-09T03:19:02.200398082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200450 containerd[1494]: time="2024-10-09T03:19:02.200410666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200450 containerd[1494]: time="2024-10-09T03:19:02.200421145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200450 containerd[1494]: time="2024-10-09T03:19:02.200431615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200450 containerd[1494]: time="2024-10-09T03:19:02.200441904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200452093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200462112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200473493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200483643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200492820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200502017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200513488Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200536361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200548 containerd[1494]: time="2024-10-09T03:19:02.200548274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200678 containerd[1494]: time="2024-10-09T03:19:02.200556970Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 03:19:02.200678 containerd[1494]: time="2024-10-09T03:19:02.200598178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 03:19:02.200678 containerd[1494]: time="2024-10-09T03:19:02.200618857Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 03:19:02.200678 containerd[1494]: time="2024-10-09T03:19:02.200633364Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 03:19:02.200678 containerd[1494]: time="2024-10-09T03:19:02.200650797Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 03:19:02.200678 containerd[1494]: time="2024-10-09T03:19:02.200665564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.200764 containerd[1494]: time="2024-10-09T03:19:02.200683006Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 03:19:02.200764 containerd[1494]: time="2024-10-09T03:19:02.200696603Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 03:19:02.200764 containerd[1494]: time="2024-10-09T03:19:02.200710519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 03:19:02.201019 containerd[1494]: time="2024-10-09T03:19:02.200976918Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 03:19:02.201156 containerd[1494]: time="2024-10-09T03:19:02.201027002Z" level=info msg="Connect containerd service"
Oct 9 03:19:02.201156 containerd[1494]: time="2024-10-09T03:19:02.201069492Z" level=info msg="using legacy CRI server"
Oct 9 03:19:02.201156 containerd[1494]: time="2024-10-09T03:19:02.201081434Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 03:19:02.201210 containerd[1494]: time="2024-10-09T03:19:02.201169500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 03:19:02.204909 containerd[1494]: time="2024-10-09T03:19:02.204871407Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 03:19:02.205059 containerd[1494]: time="2024-10-09T03:19:02.205021048Z" level=info msg="Start subscribing containerd event"
Oct 9 03:19:02.205107 containerd[1494]: time="2024-10-09T03:19:02.205076552Z" level=info msg="Start recovering state"
Oct 9 03:19:02.205168 containerd[1494]: time="2024-10-09T03:19:02.205147746Z" level=info msg="Start event monitor"
Oct 9 03:19:02.205199 containerd[1494]: time="2024-10-09T03:19:02.205175218Z" level=info msg="Start snapshots syncer"
Oct 9 03:19:02.205199 containerd[1494]: time="2024-10-09T03:19:02.205185307Z" level=info msg="Start cni network conf syncer for default"
Oct 9 03:19:02.205199 containerd[1494]: time="2024-10-09T03:19:02.205195065Z" level=info msg="Start streaming server"
Oct 9 03:19:02.206194 containerd[1494]: time="2024-10-09T03:19:02.206170075Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 03:19:02.206253 containerd[1494]: time="2024-10-09T03:19:02.206235267Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 03:19:02.206896 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 03:19:02.218419 containerd[1494]: time="2024-10-09T03:19:02.217785283Z" level=info msg="containerd successfully booted in 0.091549s"
Oct 9 03:19:02.257121 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 03:19:02.278768 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 03:19:02.291210 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 03:19:02.299737 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 03:19:02.299988 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 03:19:02.308078 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 03:19:02.323941 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 03:19:02.332208 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 03:19:02.334784 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 03:19:02.337989 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 03:19:02.434678 tar[1485]: linux-amd64/LICENSE
Oct 9 03:19:02.434770 tar[1485]: linux-amd64/README.md
Oct 9 03:19:02.445215 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 03:19:02.746125 systemd-networkd[1381]: eth0: Gained IPv6LL
Oct 9 03:19:02.746768 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Oct 9 03:19:02.749203 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 03:19:02.750770 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 03:19:02.759142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:19:02.763599 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 03:19:02.788766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 03:19:03.002114 systemd-networkd[1381]: eth1: Gained IPv6LL
Oct 9 03:19:03.002619 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Oct 9 03:19:03.410606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:19:03.411960 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 03:19:03.414992 systemd[1]: Startup finished in 1.247s (kernel) + 6.468s (initrd) + 4.271s (userspace) = 11.986s.
Oct 9 03:19:03.417313 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:19:03.982975 kubelet[1592]: E1009 03:19:03.982849 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:19:03.986969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:19:03.987162 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:19:14.237671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 03:19:14.243281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:19:14.356829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:19:14.360737 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:19:14.402384 kubelet[1612]: E1009 03:19:14.402324 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:19:14.409142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:19:14.409327 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:19:24.659732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 03:19:24.665230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:19:24.789584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:19:24.793326 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:19:24.835770 kubelet[1629]: E1009 03:19:24.835702 1629 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:19:24.840602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:19:24.840810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:19:34.395839 systemd-timesyncd[1407]: Contacted time server 85.215.93.134:123 (2.flatcar.pool.ntp.org).
Oct 9 03:19:34.395907 systemd-timesyncd[1407]: Initial clock synchronization to Wed 2024-10-09 03:19:34.395608 UTC.
Oct 9 03:19:34.396327 systemd-resolved[1382]: Clock change detected. Flushing caches.
Oct 9 03:19:36.326360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 9 03:19:36.331617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:19:36.465061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:19:36.468833 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:19:36.516990 kubelet[1646]: E1009 03:19:36.516912 1646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:19:36.520976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:19:36.521171 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:19:46.576243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 9 03:19:46.582615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:19:46.699536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:19:46.709716 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:19:46.748937 kubelet[1662]: E1009 03:19:46.748874 1662 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:19:46.753117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:19:46.753340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:19:48.718947 update_engine[1477]: I20241009 03:19:48.718854 1477 update_attempter.cc:509] Updating boot flags...
Oct 9 03:19:48.767503 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1679)
Oct 9 03:19:48.815795 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1682)
Oct 9 03:19:48.860461 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1682)
Oct 9 03:19:56.826183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 9 03:19:56.832660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:19:56.958514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:19:56.962376 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:19:57.012150 kubelet[1699]: E1009 03:19:57.012068 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:19:57.017259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:19:57.017561 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:19:58.402726 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 03:19:58.409743 systemd[1]: Started sshd@0-188.245.48.63:22-139.178.68.195:35922.service - OpenSSH per-connection server daemon (139.178.68.195:35922).
Oct 9 03:19:59.399569 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 35922 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:19:59.401921 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:19:59.409764 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 03:19:59.415639 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 03:19:59.417630 systemd-logind[1475]: New session 1 of user core.
Oct 9 03:19:59.430425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 03:19:59.436698 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 03:19:59.451819 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 03:19:59.555473 systemd[1713]: Queued start job for default target default.target.
Oct 9 03:19:59.566678 systemd[1713]: Created slice app.slice - User Application Slice.
Oct 9 03:19:59.566707 systemd[1713]: Reached target paths.target - Paths.
Oct 9 03:19:59.566720 systemd[1713]: Reached target timers.target - Timers.
Oct 9 03:19:59.568180 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 03:19:59.581752 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 03:19:59.581910 systemd[1713]: Reached target sockets.target - Sockets.
Oct 9 03:19:59.581930 systemd[1713]: Reached target basic.target - Basic System.
Oct 9 03:19:59.581980 systemd[1713]: Reached target default.target - Main User Target.
Oct 9 03:19:59.582022 systemd[1713]: Startup finished in 122ms.
Oct 9 03:19:59.582178 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 03:19:59.592580 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 03:20:00.290069 systemd[1]: Started sshd@1-188.245.48.63:22-139.178.68.195:35934.service - OpenSSH per-connection server daemon (139.178.68.195:35934).
Oct 9 03:20:01.288858 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 35934 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:20:01.290370 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:20:01.294706 systemd-logind[1475]: New session 2 of user core.
Oct 9 03:20:01.307562 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 03:20:01.986190 sshd[1724]: pam_unix(sshd:session): session closed for user core
Oct 9 03:20:01.990063 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit.
Oct 9 03:20:01.990330 systemd[1]: sshd@1-188.245.48.63:22-139.178.68.195:35934.service: Deactivated successfully.
Oct 9 03:20:01.991902 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 03:20:01.994258 systemd-logind[1475]: Removed session 2.
Oct 9 03:20:02.156707 systemd[1]: Started sshd@2-188.245.48.63:22-139.178.68.195:54688.service - OpenSSH per-connection server daemon (139.178.68.195:54688).
Oct 9 03:20:03.146709 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 54688 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:20:03.148506 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:20:03.153948 systemd-logind[1475]: New session 3 of user core.
Oct 9 03:20:03.160600 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 03:20:03.834226 sshd[1731]: pam_unix(sshd:session): session closed for user core
Oct 9 03:20:03.838714 systemd[1]: sshd@2-188.245.48.63:22-139.178.68.195:54688.service: Deactivated successfully.
Oct 9 03:20:03.841032 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 03:20:03.842321 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit.
Oct 9 03:20:03.843841 systemd-logind[1475]: Removed session 3.
Oct 9 03:20:04.008746 systemd[1]: Started sshd@3-188.245.48.63:22-139.178.68.195:54702.service - OpenSSH per-connection server daemon (139.178.68.195:54702).
Oct 9 03:20:04.999133 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 54702 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:20:05.000715 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:20:05.005028 systemd-logind[1475]: New session 4 of user core.
Oct 9 03:20:05.013584 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 03:20:05.692424 sshd[1738]: pam_unix(sshd:session): session closed for user core
Oct 9 03:20:05.695464 systemd[1]: sshd@3-188.245.48.63:22-139.178.68.195:54702.service: Deactivated successfully.
Oct 9 03:20:05.697944 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 03:20:05.698824 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit.
Oct 9 03:20:05.699885 systemd-logind[1475]: Removed session 4.
Oct 9 03:20:05.866027 systemd[1]: Started sshd@4-188.245.48.63:22-139.178.68.195:54712.service - OpenSSH per-connection server daemon (139.178.68.195:54712).
Oct 9 03:20:06.877673 sshd[1745]: Accepted publickey for core from 139.178.68.195 port 54712 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:20:06.879070 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:20:06.883137 systemd-logind[1475]: New session 5 of user core.
Oct 9 03:20:06.891585 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 03:20:07.076205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Oct 9 03:20:07.081650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:20:07.209626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:20:07.210003 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:20:07.255767 kubelet[1756]: E1009 03:20:07.255642 1756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:20:07.258631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:20:07.258833 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:20:07.418912 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 03:20:07.419306 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 03:20:07.438059 sudo[1765]: pam_unix(sudo:session): session closed for user root
Oct 9 03:20:07.600293 sshd[1745]: pam_unix(sshd:session): session closed for user core
Oct 9 03:20:07.605043 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit.
Oct 9 03:20:07.605522 systemd[1]: sshd@4-188.245.48.63:22-139.178.68.195:54712.service: Deactivated successfully.
Oct 9 03:20:07.607934 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 03:20:07.608899 systemd-logind[1475]: Removed session 5.
Oct 9 03:20:07.779756 systemd[1]: Started sshd@5-188.245.48.63:22-139.178.68.195:54722.service - OpenSSH per-connection server daemon (139.178.68.195:54722).
Oct 9 03:20:08.768838 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 54722 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:20:08.770413 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:20:08.775408 systemd-logind[1475]: New session 6 of user core.
Oct 9 03:20:08.790624 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 03:20:09.302093 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 03:20:09.302418 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 03:20:09.306355 sudo[1774]: pam_unix(sudo:session): session closed for user root
Oct 9 03:20:09.312141 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 03:20:09.312536 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 03:20:09.330816 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 03:20:09.361467 augenrules[1796]: No rules
Oct 9 03:20:09.362516 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 03:20:09.362756 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 03:20:09.364248 sudo[1773]: pam_unix(sudo:session): session closed for user root
Oct 9 03:20:09.526238 sshd[1770]: pam_unix(sshd:session): session closed for user core
Oct 9 03:20:09.529895 systemd[1]: sshd@5-188.245.48.63:22-139.178.68.195:54722.service: Deactivated successfully.
Oct 9 03:20:09.532334 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 03:20:09.534162 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit.
Oct 9 03:20:09.535734 systemd-logind[1475]: Removed session 6.
Oct 9 03:20:09.705865 systemd[1]: Started sshd@6-188.245.48.63:22-139.178.68.195:54734.service - OpenSSH per-connection server daemon (139.178.68.195:54734).
Oct 9 03:20:10.708391 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 54734 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:20:10.709998 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:20:10.714857 systemd-logind[1475]: New session 7 of user core.
Oct 9 03:20:10.724591 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 03:20:11.249010 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 03:20:11.249424 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 03:20:11.475727 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 03:20:11.485896 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 03:20:11.699760 dockerd[1825]: time="2024-10-09T03:20:11.699694920Z" level=info msg="Starting up"
Oct 9 03:20:11.758511 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3266513652-merged.mount: Deactivated successfully.
Oct 9 03:20:11.790472 dockerd[1825]: time="2024-10-09T03:20:11.790279991Z" level=info msg="Loading containers: start."
Oct 9 03:20:11.941508 kernel: Initializing XFRM netlink socket
Oct 9 03:20:12.030255 systemd-networkd[1381]: docker0: Link UP
Oct 9 03:20:12.059738 dockerd[1825]: time="2024-10-09T03:20:12.059680071Z" level=info msg="Loading containers: done."
Oct 9 03:20:12.076582 dockerd[1825]: time="2024-10-09T03:20:12.076527321Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 03:20:12.076796 dockerd[1825]: time="2024-10-09T03:20:12.076611354Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 03:20:12.076796 dockerd[1825]: time="2024-10-09T03:20:12.076719698Z" level=info msg="Daemon has completed initialization"
Oct 9 03:20:12.115292 dockerd[1825]: time="2024-10-09T03:20:12.115222352Z" level=info msg="API listen on /run/docker.sock"
Oct 9 03:20:12.115765 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 03:20:13.005450 containerd[1494]: time="2024-10-09T03:20:13.005236811Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 9 03:20:13.588836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453423541.mount: Deactivated successfully.
Oct 9 03:20:14.776893 containerd[1494]: time="2024-10-09T03:20:14.775962669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:14.776893 containerd[1494]: time="2024-10-09T03:20:14.776846095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213933"
Oct 9 03:20:14.777501 containerd[1494]: time="2024-10-09T03:20:14.777479518Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:14.780368 containerd[1494]: time="2024-10-09T03:20:14.780314310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:14.781361 containerd[1494]: time="2024-10-09T03:20:14.781335447Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 1.776065619s"
Oct 9 03:20:14.781474 containerd[1494]: time="2024-10-09T03:20:14.781446704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 9 03:20:14.804497 containerd[1494]: time="2024-10-09T03:20:14.804416818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 9 03:20:16.249727 containerd[1494]: time="2024-10-09T03:20:16.248821634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:16.249727 containerd[1494]: time="2024-10-09T03:20:16.249681944Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208693"
Oct 9 03:20:16.250302 containerd[1494]: time="2024-10-09T03:20:16.250274012Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:16.252363 containerd[1494]: time="2024-10-09T03:20:16.252318688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:16.253384 containerd[1494]: time="2024-10-09T03:20:16.253357771Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.448883986s"
Oct 9 03:20:16.253483 containerd[1494]: time="2024-10-09T03:20:16.253468004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 9 03:20:16.275637 containerd[1494]: time="2024-10-09T03:20:16.275619859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 9 03:20:17.210029 containerd[1494]: time="2024-10-09T03:20:17.209971122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:17.210982 containerd[1494]: time="2024-10-09T03:20:17.210936086Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320476"
Oct 9 03:20:17.211792 containerd[1494]: time="2024-10-09T03:20:17.211730626Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:17.214050 containerd[1494]: time="2024-10-09T03:20:17.214006500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:17.215292 containerd[1494]: time="2024-10-09T03:20:17.214927976Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 939.213926ms"
Oct 9 03:20:17.215292 containerd[1494]: time="2024-10-09T03:20:17.214958497Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 9 03:20:17.236622 containerd[1494]: time="2024-10-09T03:20:17.236573518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 9 03:20:17.326241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Oct 9 03:20:17.332972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:20:17.456337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:20:17.466756 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 03:20:17.511702 kubelet[2101]: E1009 03:20:17.511609 2101 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 03:20:17.515206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 03:20:17.515468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 03:20:18.200757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667189861.mount: Deactivated successfully.
Oct 9 03:20:18.454777 containerd[1494]: time="2024-10-09T03:20:18.454660255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:18.455584 containerd[1494]: time="2024-10-09T03:20:18.455549810Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601776"
Oct 9 03:20:18.456313 containerd[1494]: time="2024-10-09T03:20:18.456264443Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:18.457939 containerd[1494]: time="2024-10-09T03:20:18.457920734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:18.459466 containerd[1494]: time="2024-10-09T03:20:18.458498003Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.221888451s"
Oct 9 03:20:18.459466 containerd[1494]: time="2024-10-09T03:20:18.458524405Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 9 03:20:18.477530 containerd[1494]: time="2024-10-09T03:20:18.477489868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 03:20:18.996507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568993248.mount: Deactivated successfully.
Oct 9 03:20:19.825486 containerd[1494]: time="2024-10-09T03:20:19.825414325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:19.826700 containerd[1494]: time="2024-10-09T03:20:19.826659844Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Oct 9 03:20:19.829735 containerd[1494]: time="2024-10-09T03:20:19.829686649Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:19.835979 containerd[1494]: time="2024-10-09T03:20:19.834076326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:19.835979 containerd[1494]: time="2024-10-09T03:20:19.834989721Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.357462568s"
Oct 9 03:20:19.835979 containerd[1494]: time="2024-10-09T03:20:19.835010603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 03:20:19.860481 containerd[1494]: time="2024-10-09T03:20:19.860456295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 03:20:20.388019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657450889.mount: Deactivated successfully.
Oct 9 03:20:20.393386 containerd[1494]: time="2024-10-09T03:20:20.393305284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:20.394080 containerd[1494]: time="2024-10-09T03:20:20.394035206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Oct 9 03:20:20.395000 containerd[1494]: time="2024-10-09T03:20:20.394961759Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:20.397983 containerd[1494]: time="2024-10-09T03:20:20.397090495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:20.397983 containerd[1494]: time="2024-10-09T03:20:20.397875516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 537.25304ms"
Oct 9 03:20:20.397983 containerd[1494]: time="2024-10-09T03:20:20.397899524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 03:20:20.425511 containerd[1494]: time="2024-10-09T03:20:20.425483842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 03:20:20.957176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973794487.mount: Deactivated successfully.
Oct 9 03:20:22.396312 containerd[1494]: time="2024-10-09T03:20:22.396236477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:22.397295 containerd[1494]: time="2024-10-09T03:20:22.397254269Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705"
Oct 9 03:20:22.397929 containerd[1494]: time="2024-10-09T03:20:22.397885586Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:22.402485 containerd[1494]: time="2024-10-09T03:20:22.400260569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:22.402485 containerd[1494]: time="2024-10-09T03:20:22.401224484Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.975579521s"
Oct 9 03:20:22.402485 containerd[1494]: time="2024-10-09T03:20:22.401250195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 9 03:20:25.079405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:20:25.085691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:20:25.109048 systemd[1]: Reloading requested from client PID 2292 ('systemctl') (unit session-7.scope)...
Oct 9 03:20:25.109068 systemd[1]: Reloading...
Oct 9 03:20:25.244462 zram_generator::config[2338]: No configuration found.
Oct 9 03:20:25.332550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 03:20:25.397131 systemd[1]: Reloading finished in 287 ms.
Oct 9 03:20:25.444272 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 03:20:25.444358 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 03:20:25.444757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:20:25.447026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 03:20:25.576166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 03:20:25.580862 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 03:20:25.623299 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 03:20:25.623299 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 03:20:25.623299 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 03:20:25.624262 kubelet[2385]: I1009 03:20:25.624216 2385 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 03:20:25.828469 kubelet[2385]: I1009 03:20:25.828418 2385 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 03:20:25.828469 kubelet[2385]: I1009 03:20:25.828464 2385 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 03:20:25.828698 kubelet[2385]: I1009 03:20:25.828676 2385 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 03:20:25.849483 kubelet[2385]: E1009 03:20:25.849456 2385 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.48.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.852516 kubelet[2385]: I1009 03:20:25.852215 2385 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 03:20:25.868544 kubelet[2385]: I1009 03:20:25.868521 2385 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 03:20:25.870054 kubelet[2385]: I1009 03:20:25.870023 2385 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 03:20:25.871019 kubelet[2385]: I1009 03:20:25.870992 2385 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 03:20:25.871565 kubelet[2385]: I1009 03:20:25.871538 2385 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 03:20:25.871608 kubelet[2385]: I1009 03:20:25.871565 2385 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 03:20:25.871713 kubelet[2385]: I1009 03:20:25.871686 2385 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 03:20:25.873490 kubelet[2385]: I1009 03:20:25.871796 2385 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 03:20:25.873490 kubelet[2385]: I1009 03:20:25.871818 2385 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 03:20:25.873490 kubelet[2385]: I1009 03:20:25.871846 2385 kubelet.go:312] "Adding apiserver pod source"
Oct 9 03:20:25.873490 kubelet[2385]: I1009 03:20:25.871864 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 03:20:25.873490 kubelet[2385]: W1009 03:20:25.872064 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.48.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-d-cd8c2d08d9&limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.873490 kubelet[2385]: E1009 03:20:25.872099 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.48.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-d-cd8c2d08d9&limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.873490 kubelet[2385]: W1009 03:20:25.872951 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.48.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.873490 kubelet[2385]: E1009 03:20:25.872995 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.48.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.873490 kubelet[2385]: I1009 03:20:25.873353 2385 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 03:20:25.877819 kubelet[2385]: I1009 03:20:25.877797 2385 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 03:20:25.878995 kubelet[2385]: W1009 03:20:25.878972 2385 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 03:20:25.879701 kubelet[2385]: I1009 03:20:25.879617 2385 server.go:1256] "Started kubelet"
Oct 9 03:20:25.881306 kubelet[2385]: I1009 03:20:25.881029 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 03:20:25.885460 kubelet[2385]: E1009 03:20:25.883245 2385 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.48.63:6443/api/v1/namespaces/default/events\": dial tcp 188.245.48.63:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116-0-0-d-cd8c2d08d9.17fcaaaf10b3090b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116-0-0-d-cd8c2d08d9,UID:ci-4116-0-0-d-cd8c2d08d9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-d-cd8c2d08d9,},FirstTimestamp:2024-10-09 03:20:25.879595275 +0000 UTC m=+0.294717952,LastTimestamp:2024-10-09 03:20:25.879595275 +0000 UTC m=+0.294717952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-d-cd8c2d08d9,}"
Oct 9 03:20:25.885460 kubelet[2385]: I1009 03:20:25.883279 2385 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 03:20:25.885460 kubelet[2385]: I1009 03:20:25.883871 2385 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 03:20:25.885460 kubelet[2385]: I1009 03:20:25.884545 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 03:20:25.885460 kubelet[2385]: I1009 03:20:25.884693 2385 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 03:20:25.890141 kubelet[2385]: I1009 03:20:25.890112 2385 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 03:20:25.891919 kubelet[2385]: I1009 03:20:25.891906 2385 factory.go:221] Registration of the systemd container factory successfully
Oct 9 03:20:25.892044 kubelet[2385]: I1009 03:20:25.892029 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 03:20:25.892528 kubelet[2385]: E1009 03:20:25.892512 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.48.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-d-cd8c2d08d9?timeout=10s\": dial tcp 188.245.48.63:6443: connect: connection refused" interval="200ms"
Oct 9 03:20:25.893710 kubelet[2385]: I1009 03:20:25.893695 2385 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 03:20:25.893853 kubelet[2385]: I1009 03:20:25.893842 2385 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 03:20:25.894477 kubelet[2385]: I1009 03:20:25.894463 2385 factory.go:221] Registration of the containerd container factory successfully
Oct 9 03:20:25.904172 kubelet[2385]: I1009 03:20:25.904149 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 03:20:25.905692 kubelet[2385]: I1009 03:20:25.905671 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 03:20:25.905727 kubelet[2385]: I1009 03:20:25.905701 2385 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 03:20:25.905727 kubelet[2385]: I1009 03:20:25.905719 2385 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 03:20:25.905776 kubelet[2385]: E1009 03:20:25.905766 2385 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 03:20:25.914979 kubelet[2385]: W1009 03:20:25.914952 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://188.245.48.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.915045 kubelet[2385]: E1009 03:20:25.915036 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.48.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.915288 kubelet[2385]: E1009 03:20:25.915277 2385 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 03:20:25.915486 kubelet[2385]: W1009 03:20:25.915459 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.48.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.915553 kubelet[2385]: E1009 03:20:25.915543 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.48.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused
Oct 9 03:20:25.924048 kubelet[2385]: I1009 03:20:25.924026 2385 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 03:20:25.924048 kubelet[2385]: I1009 03:20:25.924040 2385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 03:20:25.924130 kubelet[2385]: I1009 03:20:25.924053 2385 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 03:20:25.925723 kubelet[2385]: I1009 03:20:25.925694 2385 policy_none.go:49] "None policy: Start"
Oct 9 03:20:25.926265 kubelet[2385]: I1009 03:20:25.926243 2385 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 03:20:25.926265 kubelet[2385]: I1009 03:20:25.926264 2385 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 03:20:25.934171 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 9 03:20:25.943166 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 9 03:20:25.945940 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 9 03:20:25.956409 kubelet[2385]: I1009 03:20:25.956144 2385 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 03:20:25.956409 kubelet[2385]: I1009 03:20:25.956338 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 03:20:25.957738 kubelet[2385]: E1009 03:20:25.957726 2385 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4116-0-0-d-cd8c2d08d9\" not found" Oct 9 03:20:25.992494 kubelet[2385]: I1009 03:20:25.992469 2385 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:25.992780 kubelet[2385]: E1009 03:20:25.992741 2385 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.48.63:6443/api/v1/nodes\": dial tcp 188.245.48.63:6443: connect: connection refused" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.006179 kubelet[2385]: I1009 03:20:26.006154 2385 topology_manager.go:215] "Topology Admit Handler" podUID="bc7282938b0396fbde40e88ad9bc4683" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.007542 kubelet[2385]: I1009 03:20:26.007415 2385 topology_manager.go:215] "Topology Admit Handler" podUID="b05493834fbe67b2bfdd6d20e2795704" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.008858 kubelet[2385]: I1009 03:20:26.008827 2385 topology_manager.go:215] "Topology Admit Handler" podUID="9a82d8b3b26d51ff2d3b8d0062266fd3" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.015029 systemd[1]: Created slice kubepods-burstable-podbc7282938b0396fbde40e88ad9bc4683.slice - libcontainer container kubepods-burstable-podbc7282938b0396fbde40e88ad9bc4683.slice. 
Oct 9 03:20:26.030193 systemd[1]: Created slice kubepods-burstable-podb05493834fbe67b2bfdd6d20e2795704.slice - libcontainer container kubepods-burstable-podb05493834fbe67b2bfdd6d20e2795704.slice. Oct 9 03:20:26.041482 systemd[1]: Created slice kubepods-burstable-pod9a82d8b3b26d51ff2d3b8d0062266fd3.slice - libcontainer container kubepods-burstable-pod9a82d8b3b26d51ff2d3b8d0062266fd3.slice. Oct 9 03:20:26.093559 kubelet[2385]: E1009 03:20:26.093499 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.48.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-d-cd8c2d08d9?timeout=10s\": dial tcp 188.245.48.63:6443: connect: connection refused" interval="400ms" Oct 9 03:20:26.095834 kubelet[2385]: I1009 03:20:26.095803 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a82d8b3b26d51ff2d3b8d0062266fd3-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"9a82d8b3b26d51ff2d3b8d0062266fd3\") " pod="kube-system/kube-scheduler-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.095917 kubelet[2385]: I1009 03:20:26.095883 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc7282938b0396fbde40e88ad9bc4683-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"bc7282938b0396fbde40e88ad9bc4683\") " pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.095917 kubelet[2385]: I1009 03:20:26.095912 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc7282938b0396fbde40e88ad9bc4683-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"bc7282938b0396fbde40e88ad9bc4683\") " pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.095972 
kubelet[2385]: I1009 03:20:26.095934 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc7282938b0396fbde40e88ad9bc4683-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"bc7282938b0396fbde40e88ad9bc4683\") " pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.095972 kubelet[2385]: I1009 03:20:26.095959 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.096026 kubelet[2385]: I1009 03:20:26.095979 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.096026 kubelet[2385]: I1009 03:20:26.096003 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.096071 kubelet[2385]: I1009 03:20:26.096028 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-kubeconfig\") pod 
\"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.096071 kubelet[2385]: I1009 03:20:26.096049 2385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.194397 kubelet[2385]: I1009 03:20:26.194314 2385 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.195188 kubelet[2385]: E1009 03:20:26.195169 2385 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.48.63:6443/api/v1/nodes\": dial tcp 188.245.48.63:6443: connect: connection refused" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.328473 containerd[1494]: time="2024-10-09T03:20:26.328325515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-d-cd8c2d08d9,Uid:bc7282938b0396fbde40e88ad9bc4683,Namespace:kube-system,Attempt:0,}" Oct 9 03:20:26.340188 containerd[1494]: time="2024-10-09T03:20:26.340154983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9,Uid:b05493834fbe67b2bfdd6d20e2795704,Namespace:kube-system,Attempt:0,}" Oct 9 03:20:26.343946 containerd[1494]: time="2024-10-09T03:20:26.343909017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-d-cd8c2d08d9,Uid:9a82d8b3b26d51ff2d3b8d0062266fd3,Namespace:kube-system,Attempt:0,}" Oct 9 03:20:26.494497 kubelet[2385]: E1009 03:20:26.494361 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://188.245.48.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-d-cd8c2d08d9?timeout=10s\": dial tcp 188.245.48.63:6443: connect: connection refused" interval="800ms" Oct 9 03:20:26.597125 kubelet[2385]: I1009 03:20:26.597061 2385 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.597340 kubelet[2385]: E1009 03:20:26.597325 2385 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.48.63:6443/api/v1/nodes\": dial tcp 188.245.48.63:6443: connect: connection refused" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:26.737325 kubelet[2385]: W1009 03:20:26.737214 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.48.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-d-cd8c2d08d9&limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:26.737325 kubelet[2385]: E1009 03:20:26.737298 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.48.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-d-cd8c2d08d9&limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:26.850670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422849921.mount: Deactivated successfully. 
Oct 9 03:20:26.856977 containerd[1494]: time="2024-10-09T03:20:26.856934913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 03:20:26.861715 containerd[1494]: time="2024-10-09T03:20:26.861655724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 03:20:26.862469 containerd[1494]: time="2024-10-09T03:20:26.862342215Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 03:20:26.867456 containerd[1494]: time="2024-10-09T03:20:26.865917951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Oct 9 03:20:26.867456 containerd[1494]: time="2024-10-09T03:20:26.865989991Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 03:20:26.867456 containerd[1494]: time="2024-10-09T03:20:26.866054407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 03:20:26.868119 containerd[1494]: time="2024-10-09T03:20:26.868086206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.85246ms" Oct 9 03:20:26.868942 containerd[1494]: time="2024-10-09T03:20:26.868904315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 03:20:26.870745 containerd[1494]: time="2024-10-09T03:20:26.870592352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 03:20:26.870955 containerd[1494]: time="2024-10-09T03:20:26.870924862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.957341ms" Oct 9 03:20:26.872120 containerd[1494]: time="2024-10-09T03:20:26.872087963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.617666ms" Oct 9 03:20:26.892892 kubelet[2385]: W1009 03:20:26.892680 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.48.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:26.892892 kubelet[2385]: E1009 03:20:26.892744 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.48.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:26.974099 containerd[1494]: time="2024-10-09T03:20:26.971652924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:20:26.974099 containerd[1494]: time="2024-10-09T03:20:26.973587654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:20:26.974099 containerd[1494]: time="2024-10-09T03:20:26.973599727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:26.974099 containerd[1494]: time="2024-10-09T03:20:26.973671136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:26.981058 containerd[1494]: time="2024-10-09T03:20:26.980874558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:20:26.981058 containerd[1494]: time="2024-10-09T03:20:26.980989533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:20:26.981169 containerd[1494]: time="2024-10-09T03:20:26.981073017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:26.981353 containerd[1494]: time="2024-10-09T03:20:26.981233359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:26.985581 containerd[1494]: time="2024-10-09T03:20:26.985404679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:20:26.985581 containerd[1494]: time="2024-10-09T03:20:26.985488672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:20:26.985581 containerd[1494]: time="2024-10-09T03:20:26.985510394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:26.988264 containerd[1494]: time="2024-10-09T03:20:26.987672137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:27.019847 systemd[1]: Started cri-containerd-fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8.scope - libcontainer container fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8. Oct 9 03:20:27.023856 systemd[1]: Started cri-containerd-2b5b87972a0873b8dc953873e413ddf0a231563801e253aac603faef0781e3e9.scope - libcontainer container 2b5b87972a0873b8dc953873e413ddf0a231563801e253aac603faef0781e3e9. Oct 9 03:20:27.025637 systemd[1]: Started cri-containerd-f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0.scope - libcontainer container f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0. 
Oct 9 03:20:27.073624 containerd[1494]: time="2024-10-09T03:20:27.073579842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-d-cd8c2d08d9,Uid:9a82d8b3b26d51ff2d3b8d0062266fd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8\"" Oct 9 03:20:27.079203 containerd[1494]: time="2024-10-09T03:20:27.077862437Z" level=info msg="CreateContainer within sandbox \"fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 03:20:27.085576 containerd[1494]: time="2024-10-09T03:20:27.085536892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-d-cd8c2d08d9,Uid:bc7282938b0396fbde40e88ad9bc4683,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b5b87972a0873b8dc953873e413ddf0a231563801e253aac603faef0781e3e9\"" Oct 9 03:20:27.089024 containerd[1494]: time="2024-10-09T03:20:27.088993449Z" level=info msg="CreateContainer within sandbox \"2b5b87972a0873b8dc953873e413ddf0a231563801e253aac603faef0781e3e9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 03:20:27.096983 containerd[1494]: time="2024-10-09T03:20:27.096942870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9,Uid:b05493834fbe67b2bfdd6d20e2795704,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0\"" Oct 9 03:20:27.099585 containerd[1494]: time="2024-10-09T03:20:27.099546153Z" level=info msg="CreateContainer within sandbox \"f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 03:20:27.100000 containerd[1494]: time="2024-10-09T03:20:27.099678401Z" level=info msg="CreateContainer within sandbox 
\"fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2\"" Oct 9 03:20:27.101090 containerd[1494]: time="2024-10-09T03:20:27.100818783Z" level=info msg="StartContainer for \"3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2\"" Oct 9 03:20:27.106070 containerd[1494]: time="2024-10-09T03:20:27.106024808Z" level=info msg="CreateContainer within sandbox \"2b5b87972a0873b8dc953873e413ddf0a231563801e253aac603faef0781e3e9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1c5c9b1ab15fe6bb7ab012d5d22724f7f2398e321e9d0a93bc4cd0cf22c12fd\"" Oct 9 03:20:27.106748 containerd[1494]: time="2024-10-09T03:20:27.106532407Z" level=info msg="StartContainer for \"b1c5c9b1ab15fe6bb7ab012d5d22724f7f2398e321e9d0a93bc4cd0cf22c12fd\"" Oct 9 03:20:27.114526 kubelet[2385]: W1009 03:20:27.114354 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.48.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:27.114526 kubelet[2385]: E1009 03:20:27.114510 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.48.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:27.118241 containerd[1494]: time="2024-10-09T03:20:27.118031775Z" level=info msg="CreateContainer within sandbox \"f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e\"" Oct 9 03:20:27.118534 containerd[1494]: 
time="2024-10-09T03:20:27.118467084Z" level=info msg="StartContainer for \"73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e\"" Oct 9 03:20:27.135590 systemd[1]: Started cri-containerd-3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2.scope - libcontainer container 3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2. Oct 9 03:20:27.140161 systemd[1]: Started cri-containerd-b1c5c9b1ab15fe6bb7ab012d5d22724f7f2398e321e9d0a93bc4cd0cf22c12fd.scope - libcontainer container b1c5c9b1ab15fe6bb7ab012d5d22724f7f2398e321e9d0a93bc4cd0cf22c12fd. Oct 9 03:20:27.153589 systemd[1]: Started cri-containerd-73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e.scope - libcontainer container 73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e. Oct 9 03:20:27.193941 containerd[1494]: time="2024-10-09T03:20:27.193912395Z" level=info msg="StartContainer for \"b1c5c9b1ab15fe6bb7ab012d5d22724f7f2398e321e9d0a93bc4cd0cf22c12fd\" returns successfully" Oct 9 03:20:27.214478 containerd[1494]: time="2024-10-09T03:20:27.213717409Z" level=info msg="StartContainer for \"73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e\" returns successfully" Oct 9 03:20:27.214478 containerd[1494]: time="2024-10-09T03:20:27.213788137Z" level=info msg="StartContainer for \"3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2\" returns successfully" Oct 9 03:20:27.295101 kubelet[2385]: E1009 03:20:27.295050 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.48.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-d-cd8c2d08d9?timeout=10s\": dial tcp 188.245.48.63:6443: connect: connection refused" interval="1.6s" Oct 9 03:20:27.320034 kubelet[2385]: W1009 03:20:27.319984 2385 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://188.245.48.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:27.320134 kubelet[2385]: E1009 03:20:27.320037 2385 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.48.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.48.63:6443: connect: connection refused Oct 9 03:20:27.399462 kubelet[2385]: I1009 03:20:27.399358 2385 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:27.400091 kubelet[2385]: E1009 03:20:27.400069 2385 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.48.63:6443/api/v1/nodes\": dial tcp 188.245.48.63:6443: connect: connection refused" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:28.795749 kubelet[2385]: E1009 03:20:28.795704 2385 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4116-0-0-d-cd8c2d08d9" not found Oct 9 03:20:28.874783 kubelet[2385]: I1009 03:20:28.874731 2385 apiserver.go:52] "Watching apiserver" Oct 9 03:20:28.894808 kubelet[2385]: I1009 03:20:28.894779 2385 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 03:20:28.898252 kubelet[2385]: E1009 03:20:28.898225 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4116-0-0-d-cd8c2d08d9\" not found" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:29.003256 kubelet[2385]: I1009 03:20:29.003184 2385 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:29.010124 kubelet[2385]: I1009 03:20:29.010092 2385 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:20:31.177918 systemd[1]: 
Reloading requested from client PID 2659 ('systemctl') (unit session-7.scope)... Oct 9 03:20:31.177936 systemd[1]: Reloading... Oct 9 03:20:31.282534 zram_generator::config[2702]: No configuration found. Oct 9 03:20:31.374675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 03:20:31.449836 systemd[1]: Reloading finished in 271 ms. Oct 9 03:20:31.489173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 03:20:31.508540 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 03:20:31.508780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 03:20:31.514738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 03:20:31.632303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 03:20:31.636661 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 03:20:31.687253 kubelet[2750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 03:20:31.687253 kubelet[2750]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 03:20:31.687253 kubelet[2750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 03:20:31.687925 kubelet[2750]: I1009 03:20:31.687270 2750 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 03:20:31.692466 kubelet[2750]: I1009 03:20:31.692278 2750 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 03:20:31.692466 kubelet[2750]: I1009 03:20:31.692295 2750 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 03:20:31.692466 kubelet[2750]: I1009 03:20:31.692457 2750 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 03:20:31.694116 kubelet[2750]: I1009 03:20:31.693741 2750 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 03:20:31.700093 kubelet[2750]: I1009 03:20:31.700037 2750 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 03:20:31.707452 kubelet[2750]: I1009 03:20:31.706788 2750 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 03:20:31.707452 kubelet[2750]: I1009 03:20:31.706978 2750 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 03:20:31.707452 kubelet[2750]: I1009 03:20:31.707096 2750 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 03:20:31.707452 kubelet[2750]: I1009 03:20:31.707115 2750 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 03:20:31.707452 kubelet[2750]: I1009 03:20:31.707122 2750 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 03:20:31.707452 kubelet[2750]: I1009 
03:20:31.707148 2750 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 03:20:31.707650 kubelet[2750]: I1009 03:20:31.707226 2750 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 03:20:31.707650 kubelet[2750]: I1009 03:20:31.707237 2750 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 03:20:31.707650 kubelet[2750]: I1009 03:20:31.707258 2750 kubelet.go:312] "Adding apiserver pod source"
Oct 9 03:20:31.707650 kubelet[2750]: I1009 03:20:31.707267 2750 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 03:20:31.710568 kubelet[2750]: I1009 03:20:31.710552 2750 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 03:20:31.710796 kubelet[2750]: I1009 03:20:31.710784 2750 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 03:20:31.712030 kubelet[2750]: I1009 03:20:31.712017 2750 server.go:1256] "Started kubelet"
Oct 9 03:20:31.713368 kubelet[2750]: I1009 03:20:31.713354 2750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 03:20:31.731141 kubelet[2750]: I1009 03:20:31.717362 2750 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 03:20:31.732141 kubelet[2750]: I1009 03:20:31.732126 2750 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 03:20:31.732904 kubelet[2750]: I1009 03:20:31.732882 2750 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 03:20:31.732957 kubelet[2750]: I1009 03:20:31.717387 2750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 03:20:31.733216 kubelet[2750]: I1009 03:20:31.733124 2750 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 03:20:31.733216 kubelet[2750]: I1009 03:20:31.733172 2750 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 03:20:31.733328 kubelet[2750]: I1009 03:20:31.733295 2750 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 03:20:31.734765 kubelet[2750]: E1009 03:20:31.734681 2750 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 03:20:31.739522 kubelet[2750]: I1009 03:20:31.738947 2750 factory.go:221] Registration of the containerd container factory successfully
Oct 9 03:20:31.739522 kubelet[2750]: I1009 03:20:31.738996 2750 factory.go:221] Registration of the systemd container factory successfully
Oct 9 03:20:31.739522 kubelet[2750]: I1009 03:20:31.739071 2750 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 03:20:31.756476 kubelet[2750]: I1009 03:20:31.755492 2750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 03:20:31.757928 kubelet[2750]: I1009 03:20:31.757487 2750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 03:20:31.757928 kubelet[2750]: I1009 03:20:31.757509 2750 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 03:20:31.757928 kubelet[2750]: I1009 03:20:31.757523 2750 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 03:20:31.757928 kubelet[2750]: E1009 03:20:31.757589 2750 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 03:20:31.790535 kubelet[2750]: I1009 03:20:31.790216 2750 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 03:20:31.790535 kubelet[2750]: I1009 03:20:31.790232 2750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 03:20:31.790535 kubelet[2750]: I1009 03:20:31.790247 2750 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 03:20:31.790535 kubelet[2750]: I1009 03:20:31.790470 2750 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 03:20:31.790535 kubelet[2750]: I1009 03:20:31.790491 2750 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 03:20:31.790535 kubelet[2750]: I1009 03:20:31.790498 2750 policy_none.go:49] "None policy: Start"
Oct 9 03:20:31.790923 kubelet[2750]: I1009 03:20:31.790903 2750 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 03:20:31.791028 kubelet[2750]: I1009 03:20:31.790997 2750 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 03:20:31.791809 kubelet[2750]: I1009 03:20:31.791123 2750 state_mem.go:75] "Updated machine memory state"
Oct 9 03:20:31.794940 kubelet[2750]: I1009 03:20:31.794905 2750 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 03:20:31.795524 kubelet[2750]: I1009 03:20:31.795120 2750 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 03:20:31.834734 kubelet[2750]: I1009 03:20:31.834700 2750 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.851419 kubelet[2750]: I1009 03:20:31.851290 2750 kubelet_node_status.go:112] "Node was previously registered" node="ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.851419 kubelet[2750]: I1009 03:20:31.851354 2750 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.858253 kubelet[2750]: I1009 03:20:31.858218 2750 topology_manager.go:215] "Topology Admit Handler" podUID="bc7282938b0396fbde40e88ad9bc4683" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.859091 kubelet[2750]: I1009 03:20:31.858952 2750 topology_manager.go:215] "Topology Admit Handler" podUID="b05493834fbe67b2bfdd6d20e2795704" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.859091 kubelet[2750]: I1009 03:20:31.859015 2750 topology_manager.go:215] "Topology Admit Handler" podUID="9a82d8b3b26d51ff2d3b8d0062266fd3" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934691 kubelet[2750]: I1009 03:20:31.934614 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a82d8b3b26d51ff2d3b8d0062266fd3-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"9a82d8b3b26d51ff2d3b8d0062266fd3\") " pod="kube-system/kube-scheduler-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934691 kubelet[2750]: I1009 03:20:31.934654 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc7282938b0396fbde40e88ad9bc4683-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"bc7282938b0396fbde40e88ad9bc4683\") " pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934691 kubelet[2750]: I1009 03:20:31.934683 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc7282938b0396fbde40e88ad9bc4683-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"bc7282938b0396fbde40e88ad9bc4683\") " pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934691 kubelet[2750]: I1009 03:20:31.934705 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc7282938b0396fbde40e88ad9bc4683-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"bc7282938b0396fbde40e88ad9bc4683\") " pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934886 kubelet[2750]: I1009 03:20:31.934724 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934886 kubelet[2750]: I1009 03:20:31.934741 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934886 kubelet[2750]: I1009 03:20:31.934761 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934886 kubelet[2750]: I1009 03:20:31.934778 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:31.934886 kubelet[2750]: I1009 03:20:31.934798 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b05493834fbe67b2bfdd6d20e2795704-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9\" (UID: \"b05493834fbe67b2bfdd6d20e2795704\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:32.708666 kubelet[2750]: I1009 03:20:32.708465 2750 apiserver.go:52] "Watching apiserver"
Oct 9 03:20:32.733594 kubelet[2750]: I1009 03:20:32.733248 2750 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 9 03:20:32.799637 kubelet[2750]: E1009 03:20:32.799589 2750 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4116-0-0-d-cd8c2d08d9\" already exists" pod="kube-system/kube-scheduler-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:32.802206 kubelet[2750]: E1009 03:20:32.802075 2750 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4116-0-0-d-cd8c2d08d9\" already exists" pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9"
Oct 9 03:20:32.830467 kubelet[2750]: I1009 03:20:32.830413 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4116-0-0-d-cd8c2d08d9" podStartSLOduration=1.830364084 podStartE2EDuration="1.830364084s" podCreationTimestamp="2024-10-09 03:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 03:20:32.829072532 +0000 UTC m=+1.187983272" watchObservedRunningTime="2024-10-09 03:20:32.830364084 +0000 UTC m=+1.189274812"
Oct 9 03:20:32.830666 kubelet[2750]: I1009 03:20:32.830537 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4116-0-0-d-cd8c2d08d9" podStartSLOduration=1.8305104060000001 podStartE2EDuration="1.830510406s" podCreationTimestamp="2024-10-09 03:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 03:20:32.820261637 +0000 UTC m=+1.179172415" watchObservedRunningTime="2024-10-09 03:20:32.830510406 +0000 UTC m=+1.189421135"
Oct 9 03:20:32.837286 kubelet[2750]: I1009 03:20:32.837256 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4116-0-0-d-cd8c2d08d9" podStartSLOduration=1.837232132 podStartE2EDuration="1.837232132s" podCreationTimestamp="2024-10-09 03:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 03:20:32.836193577 +0000 UTC m=+1.195104306" watchObservedRunningTime="2024-10-09 03:20:32.837232132 +0000 UTC m=+1.196142860"
Oct 9 03:20:36.004058 sudo[1807]: pam_unix(sudo:session): session closed for user root
Oct 9 03:20:36.167190 sshd[1804]: pam_unix(sshd:session): session closed for user core
Oct 9 03:20:36.170613 systemd[1]: sshd@6-188.245.48.63:22-139.178.68.195:54734.service: Deactivated successfully.
Oct 9 03:20:36.173099 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 03:20:36.173313 systemd[1]: session-7.scope: Consumed 4.221s CPU time, 183.6M memory peak, 0B memory swap peak.
Oct 9 03:20:36.175163 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit.
Oct 9 03:20:36.176410 systemd-logind[1475]: Removed session 7.
Oct 9 03:20:44.354656 kubelet[2750]: I1009 03:20:44.354307 2750 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 03:20:44.355016 containerd[1494]: time="2024-10-09T03:20:44.354581958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 03:20:44.356047 kubelet[2750]: I1009 03:20:44.356021 2750 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 03:20:44.722494 kubelet[2750]: I1009 03:20:44.721993 2750 topology_manager.go:215] "Topology Admit Handler" podUID="14950002-fa42-4387-94dd-f82c0c4079fb" podNamespace="kube-system" podName="kube-proxy-mgwxc"
Oct 9 03:20:44.732410 systemd[1]: Created slice kubepods-besteffort-pod14950002_fa42_4387_94dd_f82c0c4079fb.slice - libcontainer container kubepods-besteffort-pod14950002_fa42_4387_94dd_f82c0c4079fb.slice.
Oct 9 03:20:44.817540 kubelet[2750]: I1009 03:20:44.817317 2750 topology_manager.go:215] "Topology Admit Handler" podUID="0f022404-2577-41f4-bfdb-0a3a78551acf" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-5j55w"
Oct 9 03:20:44.828380 systemd[1]: Created slice kubepods-besteffort-pod0f022404_2577_41f4_bfdb_0a3a78551acf.slice - libcontainer container kubepods-besteffort-pod0f022404_2577_41f4_bfdb_0a3a78551acf.slice.
Oct 9 03:20:44.835228 kubelet[2750]: I1009 03:20:44.835206 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsrh\" (UniqueName: \"kubernetes.io/projected/14950002-fa42-4387-94dd-f82c0c4079fb-kube-api-access-9xsrh\") pod \"kube-proxy-mgwxc\" (UID: \"14950002-fa42-4387-94dd-f82c0c4079fb\") " pod="kube-system/kube-proxy-mgwxc"
Oct 9 03:20:44.835404 kubelet[2750]: I1009 03:20:44.835392 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14950002-fa42-4387-94dd-f82c0c4079fb-xtables-lock\") pod \"kube-proxy-mgwxc\" (UID: \"14950002-fa42-4387-94dd-f82c0c4079fb\") " pod="kube-system/kube-proxy-mgwxc"
Oct 9 03:20:44.835530 kubelet[2750]: I1009 03:20:44.835519 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14950002-fa42-4387-94dd-f82c0c4079fb-kube-proxy\") pod \"kube-proxy-mgwxc\" (UID: \"14950002-fa42-4387-94dd-f82c0c4079fb\") " pod="kube-system/kube-proxy-mgwxc"
Oct 9 03:20:44.835623 kubelet[2750]: I1009 03:20:44.835612 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14950002-fa42-4387-94dd-f82c0c4079fb-lib-modules\") pod \"kube-proxy-mgwxc\" (UID: \"14950002-fa42-4387-94dd-f82c0c4079fb\") " pod="kube-system/kube-proxy-mgwxc"
Oct 9 03:20:44.936228 kubelet[2750]: I1009 03:20:44.936176 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57tw\" (UniqueName: \"kubernetes.io/projected/0f022404-2577-41f4-bfdb-0a3a78551acf-kube-api-access-r57tw\") pod \"tigera-operator-5d56685c77-5j55w\" (UID: \"0f022404-2577-41f4-bfdb-0a3a78551acf\") " pod="tigera-operator/tigera-operator-5d56685c77-5j55w"
Oct 9 03:20:44.936958 kubelet[2750]: I1009 03:20:44.936257 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f022404-2577-41f4-bfdb-0a3a78551acf-var-lib-calico\") pod \"tigera-operator-5d56685c77-5j55w\" (UID: \"0f022404-2577-41f4-bfdb-0a3a78551acf\") " pod="tigera-operator/tigera-operator-5d56685c77-5j55w"
Oct 9 03:20:45.043259 containerd[1494]: time="2024-10-09T03:20:45.042865709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgwxc,Uid:14950002-fa42-4387-94dd-f82c0c4079fb,Namespace:kube-system,Attempt:0,}"
Oct 9 03:20:45.070753 containerd[1494]: time="2024-10-09T03:20:45.070065377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 03:20:45.070865 containerd[1494]: time="2024-10-09T03:20:45.070746882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 03:20:45.070865 containerd[1494]: time="2024-10-09T03:20:45.070786938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 03:20:45.070941 containerd[1494]: time="2024-10-09T03:20:45.070903559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 03:20:45.095604 systemd[1]: Started cri-containerd-e20bbedd8346b8e60ac116d83becf455ee7d82f6f34b0380933852802c0f526f.scope - libcontainer container e20bbedd8346b8e60ac116d83becf455ee7d82f6f34b0380933852802c0f526f.
Oct 9 03:20:45.120612 containerd[1494]: time="2024-10-09T03:20:45.120571942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgwxc,Uid:14950002-fa42-4387-94dd-f82c0c4079fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e20bbedd8346b8e60ac116d83becf455ee7d82f6f34b0380933852802c0f526f\""
Oct 9 03:20:45.128058 containerd[1494]: time="2024-10-09T03:20:45.127933915Z" level=info msg="CreateContainer within sandbox \"e20bbedd8346b8e60ac116d83becf455ee7d82f6f34b0380933852802c0f526f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 03:20:45.133363 containerd[1494]: time="2024-10-09T03:20:45.133312680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5j55w,Uid:0f022404-2577-41f4-bfdb-0a3a78551acf,Namespace:tigera-operator,Attempt:0,}"
Oct 9 03:20:45.141922 containerd[1494]: time="2024-10-09T03:20:45.141820319Z" level=info msg="CreateContainer within sandbox \"e20bbedd8346b8e60ac116d83becf455ee7d82f6f34b0380933852802c0f526f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a017ccac7cce95d30329f26c11e96bb6d8bd9055c53bf5352fe30a031ee9d973\""
Oct 9 03:20:45.143384 containerd[1494]: time="2024-10-09T03:20:45.142267548Z" level=info msg="StartContainer for \"a017ccac7cce95d30329f26c11e96bb6d8bd9055c53bf5352fe30a031ee9d973\""
Oct 9 03:20:45.163666 containerd[1494]: time="2024-10-09T03:20:45.162681248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 03:20:45.163666 containerd[1494]: time="2024-10-09T03:20:45.162744980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 03:20:45.163666 containerd[1494]: time="2024-10-09T03:20:45.162755399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 03:20:45.163666 containerd[1494]: time="2024-10-09T03:20:45.163112979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 03:20:45.173592 systemd[1]: Started cri-containerd-a017ccac7cce95d30329f26c11e96bb6d8bd9055c53bf5352fe30a031ee9d973.scope - libcontainer container a017ccac7cce95d30329f26c11e96bb6d8bd9055c53bf5352fe30a031ee9d973.
Oct 9 03:20:45.186566 systemd[1]: Started cri-containerd-4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512.scope - libcontainer container 4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512.
Oct 9 03:20:45.226330 containerd[1494]: time="2024-10-09T03:20:45.225422320Z" level=info msg="StartContainer for \"a017ccac7cce95d30329f26c11e96bb6d8bd9055c53bf5352fe30a031ee9d973\" returns successfully"
Oct 9 03:20:45.238482 containerd[1494]: time="2024-10-09T03:20:45.238374471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5j55w,Uid:0f022404-2577-41f4-bfdb-0a3a78551acf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512\""
Oct 9 03:20:45.242409 containerd[1494]: time="2024-10-09T03:20:45.242386330Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 03:20:45.815359 kubelet[2750]: I1009 03:20:45.813924 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mgwxc" podStartSLOduration=1.813891436 podStartE2EDuration="1.813891436s" podCreationTimestamp="2024-10-09 03:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 03:20:45.81318285 +0000 UTC m=+14.172093579" watchObservedRunningTime="2024-10-09 03:20:45.813891436 +0000 UTC m=+14.172802165"
Oct 9 03:20:45.953252 systemd[1]: run-containerd-runc-k8s.io-e20bbedd8346b8e60ac116d83becf455ee7d82f6f34b0380933852802c0f526f-runc.3VSbdP.mount: Deactivated successfully.
Oct 9 03:20:46.695321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684571557.mount: Deactivated successfully.
Oct 9 03:20:47.085778 containerd[1494]: time="2024-10-09T03:20:47.085713829Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:47.086640 containerd[1494]: time="2024-10-09T03:20:47.086458221Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136545"
Oct 9 03:20:47.087370 containerd[1494]: time="2024-10-09T03:20:47.087323742Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:47.089013 containerd[1494]: time="2024-10-09T03:20:47.088978561Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:47.089644 containerd[1494]: time="2024-10-09T03:20:47.089617573Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.847079074s"
Oct 9 03:20:47.089688 containerd[1494]: time="2024-10-09T03:20:47.089644243Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 9 03:20:47.090881 containerd[1494]: time="2024-10-09T03:20:47.090849559Z" level=info msg="CreateContainer within sandbox \"4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 03:20:47.105984 containerd[1494]: time="2024-10-09T03:20:47.105949096Z" level=info msg="CreateContainer within sandbox \"4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8\""
Oct 9 03:20:47.106628 containerd[1494]: time="2024-10-09T03:20:47.106604470Z" level=info msg="StartContainer for \"5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8\""
Oct 9 03:20:47.135262 systemd[1]: run-containerd-runc-k8s.io-5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8-runc.XvzA52.mount: Deactivated successfully.
Oct 9 03:20:47.147556 systemd[1]: Started cri-containerd-5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8.scope - libcontainer container 5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8.
Oct 9 03:20:47.177386 containerd[1494]: time="2024-10-09T03:20:47.177302169Z" level=info msg="StartContainer for \"5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8\" returns successfully"
Oct 9 03:20:50.130839 kubelet[2750]: I1009 03:20:50.129822 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-5j55w" podStartSLOduration=4.28078899 podStartE2EDuration="6.129764234s" podCreationTimestamp="2024-10-09 03:20:44 +0000 UTC" firstStartedPulling="2024-10-09 03:20:45.240905427 +0000 UTC m=+13.599816155" lastFinishedPulling="2024-10-09 03:20:47.089880661 +0000 UTC m=+15.448791399" observedRunningTime="2024-10-09 03:20:47.817989061 +0000 UTC m=+16.176899790" watchObservedRunningTime="2024-10-09 03:20:50.129764234 +0000 UTC m=+18.488674962"
Oct 9 03:20:50.130839 kubelet[2750]: I1009 03:20:50.129955 2750 topology_manager.go:215] "Topology Admit Handler" podUID="1f3a36a8-5896-4891-a912-13a1400b3a47" podNamespace="calico-system" podName="calico-typha-76548b5566-x4qcp"
Oct 9 03:20:50.147482 systemd[1]: Created slice kubepods-besteffort-pod1f3a36a8_5896_4891_a912_13a1400b3a47.slice - libcontainer container kubepods-besteffort-pod1f3a36a8_5896_4891_a912_13a1400b3a47.slice.
Oct 9 03:20:50.170527 kubelet[2750]: I1009 03:20:50.170476 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f3a36a8-5896-4891-a912-13a1400b3a47-typha-certs\") pod \"calico-typha-76548b5566-x4qcp\" (UID: \"1f3a36a8-5896-4891-a912-13a1400b3a47\") " pod="calico-system/calico-typha-76548b5566-x4qcp"
Oct 9 03:20:50.170527 kubelet[2750]: I1009 03:20:50.170529 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqgn\" (UniqueName: \"kubernetes.io/projected/1f3a36a8-5896-4891-a912-13a1400b3a47-kube-api-access-rxqgn\") pod \"calico-typha-76548b5566-x4qcp\" (UID: \"1f3a36a8-5896-4891-a912-13a1400b3a47\") " pod="calico-system/calico-typha-76548b5566-x4qcp"
Oct 9 03:20:50.171589 kubelet[2750]: I1009 03:20:50.170555 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f3a36a8-5896-4891-a912-13a1400b3a47-tigera-ca-bundle\") pod \"calico-typha-76548b5566-x4qcp\" (UID: \"1f3a36a8-5896-4891-a912-13a1400b3a47\") " pod="calico-system/calico-typha-76548b5566-x4qcp"
Oct 9 03:20:50.198536 kubelet[2750]: I1009 03:20:50.198486 2750 topology_manager.go:215] "Topology Admit Handler" podUID="ea6da3e1-fd33-4358-8ff0-ea60430199ec" podNamespace="calico-system" podName="calico-node-nm7ld"
Oct 9 03:20:50.213961 systemd[1]: Created slice kubepods-besteffort-podea6da3e1_fd33_4358_8ff0_ea60430199ec.slice - libcontainer container kubepods-besteffort-podea6da3e1_fd33_4358_8ff0_ea60430199ec.slice.
Oct 9 03:20:50.271478 kubelet[2750]: I1009 03:20:50.271421 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-policysync\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.271931 kubelet[2750]: I1009 03:20:50.271503 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-cni-net-dir\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.271931 kubelet[2750]: I1009 03:20:50.271545 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-flexvol-driver-host\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.271931 kubelet[2750]: I1009 03:20:50.271586 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-lib-modules\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.271931 kubelet[2750]: I1009 03:20:50.271621 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea6da3e1-fd33-4358-8ff0-ea60430199ec-node-certs\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.271931 kubelet[2750]: I1009 03:20:50.271653 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-var-run-calico\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.272039 kubelet[2750]: I1009 03:20:50.271686 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-var-lib-calico\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.272039 kubelet[2750]: I1009 03:20:50.271722 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-cni-bin-dir\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.272039 kubelet[2750]: I1009 03:20:50.271759 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6da3e1-fd33-4358-8ff0-ea60430199ec-tigera-ca-bundle\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.272039 kubelet[2750]: I1009 03:20:50.271817 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-xtables-lock\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.272039 kubelet[2750]: I1009 03:20:50.271856 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea6da3e1-fd33-4358-8ff0-ea60430199ec-cni-log-dir\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.272150 kubelet[2750]: I1009 03:20:50.271918 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk5d9\" (UniqueName: \"kubernetes.io/projected/ea6da3e1-fd33-4358-8ff0-ea60430199ec-kube-api-access-vk5d9\") pod \"calico-node-nm7ld\" (UID: \"ea6da3e1-fd33-4358-8ff0-ea60430199ec\") " pod="calico-system/calico-node-nm7ld"
Oct 9 03:20:50.314556 kubelet[2750]: I1009 03:20:50.314485 2750 topology_manager.go:215] "Topology Admit Handler" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653" podNamespace="calico-system" podName="csi-node-driver-8fwf8"
Oct 9 03:20:50.314839 kubelet[2750]: E1009 03:20:50.314816 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653"
Oct 9 03:20:50.372511 kubelet[2750]: I1009 03:20:50.372428 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7783cb5-3b65-4530-8afe-0621f9daa653-kubelet-dir\") pod \"csi-node-driver-8fwf8\" (UID: \"c7783cb5-3b65-4530-8afe-0621f9daa653\") " pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:20:50.372511 kubelet[2750]: I1009 03:20:50.372503 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7783cb5-3b65-4530-8afe-0621f9daa653-socket-dir\") pod \"csi-node-driver-8fwf8\" (UID: \"c7783cb5-3b65-4530-8afe-0621f9daa653\") " pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:20:50.372749 kubelet[2750]: I1009 03:20:50.372558 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2k49\" (UniqueName: \"kubernetes.io/projected/c7783cb5-3b65-4530-8afe-0621f9daa653-kube-api-access-f2k49\") pod \"csi-node-driver-8fwf8\" (UID: \"c7783cb5-3b65-4530-8afe-0621f9daa653\") " pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:20:50.372749 kubelet[2750]: I1009 03:20:50.372694 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7783cb5-3b65-4530-8afe-0621f9daa653-varrun\") pod \"csi-node-driver-8fwf8\" (UID: \"c7783cb5-3b65-4530-8afe-0621f9daa653\") " pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:20:50.372749 kubelet[2750]: I1009 03:20:50.372728 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7783cb5-3b65-4530-8afe-0621f9daa653-registration-dir\") pod \"csi-node-driver-8fwf8\" (UID: \"c7783cb5-3b65-4530-8afe-0621f9daa653\") " pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:20:50.384799 kubelet[2750]: E1009 03:20:50.384713 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 03:20:50.384799 kubelet[2750]: W1009 03:20:50.384753 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 03:20:50.384799 kubelet[2750]: E1009 03:20:50.384772 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 03:20:50.389097 kubelet[2750]: E1009 03:20:50.389072 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 03:20:50.389097 kubelet[2750]: W1009 03:20:50.389090 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 03:20:50.389207 kubelet[2750]: E1009 03:20:50.389105 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 03:20:50.452951 containerd[1494]: time="2024-10-09T03:20:50.452910815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76548b5566-x4qcp,Uid:1f3a36a8-5896-4891-a912-13a1400b3a47,Namespace:calico-system,Attempt:0,}"
Oct 9 03:20:50.475179 kubelet[2750]: E1009 03:20:50.474060 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 03:20:50.475179 kubelet[2750]: W1009 03:20:50.474076 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 03:20:50.475179 kubelet[2750]: E1009 03:20:50.474095 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 03:20:50.475179 kubelet[2750]: E1009 03:20:50.474310 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 03:20:50.475179 kubelet[2750]: W1009 03:20:50.474320 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 03:20:50.475179 kubelet[2750]: E1009 03:20:50.474331 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 03:20:50.475179 kubelet[2750]: E1009 03:20:50.474807 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 03:20:50.475179 kubelet[2750]: W1009 03:20:50.474816 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 03:20:50.476485 kubelet[2750]: E1009 03:20:50.475570 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.476779 kubelet[2750]: E1009 03:20:50.476602 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.476779 kubelet[2750]: W1009 03:20:50.476613 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.476779 kubelet[2750]: E1009 03:20:50.476713 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.476921 kubelet[2750]: E1009 03:20:50.476911 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.476979 kubelet[2750]: W1009 03:20:50.476964 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.477151 kubelet[2750]: E1009 03:20:50.477140 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.477259 kubelet[2750]: E1009 03:20:50.477250 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.477317 kubelet[2750]: W1009 03:20:50.477302 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.477454 kubelet[2750]: E1009 03:20:50.477427 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.477777 kubelet[2750]: E1009 03:20:50.477656 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.477777 kubelet[2750]: W1009 03:20:50.477665 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.477777 kubelet[2750]: E1009 03:20:50.477750 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.478093 kubelet[2750]: E1009 03:20:50.478082 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.478157 kubelet[2750]: W1009 03:20:50.478147 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.478217 kubelet[2750]: E1009 03:20:50.478209 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.478516 kubelet[2750]: E1009 03:20:50.478505 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.478600 kubelet[2750]: W1009 03:20:50.478567 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.478756 kubelet[2750]: E1009 03:20:50.478745 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.479374 kubelet[2750]: E1009 03:20:50.479145 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.479374 kubelet[2750]: W1009 03:20:50.479155 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.480471 kubelet[2750]: E1009 03:20:50.480421 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.480984 kubelet[2750]: E1009 03:20:50.480900 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.480984 kubelet[2750]: W1009 03:20:50.480910 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.481097 kubelet[2750]: E1009 03:20:50.481014 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.481239 kubelet[2750]: E1009 03:20:50.481130 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.481239 kubelet[2750]: W1009 03:20:50.481138 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.481239 kubelet[2750]: E1009 03:20:50.481204 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.481329 kubelet[2750]: E1009 03:20:50.481313 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.481329 kubelet[2750]: W1009 03:20:50.481320 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.481559 kubelet[2750]: E1009 03:20:50.481466 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.481607 kubelet[2750]: E1009 03:20:50.481586 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.481607 kubelet[2750]: W1009 03:20:50.481592 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.481768 kubelet[2750]: E1009 03:20:50.481673 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.482179 kubelet[2750]: E1009 03:20:50.482075 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.482179 kubelet[2750]: W1009 03:20:50.482086 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.482179 kubelet[2750]: E1009 03:20:50.482161 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.482987 kubelet[2750]: E1009 03:20:50.482969 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.482987 kubelet[2750]: W1009 03:20:50.482983 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.483118 kubelet[2750]: E1009 03:20:50.483067 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.483337 kubelet[2750]: E1009 03:20:50.483307 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.483337 kubelet[2750]: W1009 03:20:50.483318 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.484026 kubelet[2750]: E1009 03:20:50.483874 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.485470 kubelet[2750]: E1009 03:20:50.484526 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.485470 kubelet[2750]: W1009 03:20:50.484542 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.487816 kubelet[2750]: E1009 03:20:50.487796 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.488672 kubelet[2750]: E1009 03:20:50.488366 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.488672 kubelet[2750]: W1009 03:20:50.488374 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.488747 containerd[1494]: time="2024-10-09T03:20:50.488086935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:20:50.488747 containerd[1494]: time="2024-10-09T03:20:50.488185061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:20:50.488747 containerd[1494]: time="2024-10-09T03:20:50.488244092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:50.488964 kubelet[2750]: E1009 03:20:50.488945 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.489369 kubelet[2750]: E1009 03:20:50.489337 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.489463 kubelet[2750]: W1009 03:20:50.489422 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.489564 kubelet[2750]: E1009 03:20:50.489554 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.489975 kubelet[2750]: E1009 03:20:50.489958 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.490176 kubelet[2750]: W1009 03:20:50.490164 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.490417 kubelet[2750]: E1009 03:20:50.490379 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.490666 kubelet[2750]: E1009 03:20:50.490533 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.490666 kubelet[2750]: W1009 03:20:50.490543 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.490763 kubelet[2750]: E1009 03:20:50.490753 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.491047 kubelet[2750]: E1009 03:20:50.491010 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.491047 kubelet[2750]: W1009 03:20:50.491020 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.491298 kubelet[2750]: E1009 03:20:50.491213 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 03:20:50.491513 kubelet[2750]: E1009 03:20:50.491502 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.491675 kubelet[2750]: W1009 03:20:50.491548 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.491816 kubelet[2750]: E1009 03:20:50.491737 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.493470 kubelet[2750]: E1009 03:20:50.493329 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.493470 kubelet[2750]: W1009 03:20:50.493341 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.493470 kubelet[2750]: E1009 03:20:50.493353 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.494401 containerd[1494]: time="2024-10-09T03:20:50.494327428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:50.510339 kubelet[2750]: E1009 03:20:50.510315 2750 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 03:20:50.512677 kubelet[2750]: W1009 03:20:50.510599 2750 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 03:20:50.512677 kubelet[2750]: E1009 03:20:50.510638 2750 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 03:20:50.519053 containerd[1494]: time="2024-10-09T03:20:50.519020734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nm7ld,Uid:ea6da3e1-fd33-4358-8ff0-ea60430199ec,Namespace:calico-system,Attempt:0,}" Oct 9 03:20:50.525579 systemd[1]: Started cri-containerd-4d6e11d273105cdcf2d8156e4acdda675ecacdc4634440f90ff32fd6826de491.scope - libcontainer container 4d6e11d273105cdcf2d8156e4acdda675ecacdc4634440f90ff32fd6826de491. Oct 9 03:20:50.563572 containerd[1494]: time="2024-10-09T03:20:50.561400962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:20:50.563572 containerd[1494]: time="2024-10-09T03:20:50.563501909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:20:50.563572 containerd[1494]: time="2024-10-09T03:20:50.563513751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:50.563928 containerd[1494]: time="2024-10-09T03:20:50.563705554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:20:50.592699 systemd[1]: Started cri-containerd-7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0.scope - libcontainer container 7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0. Oct 9 03:20:50.646566 containerd[1494]: time="2024-10-09T03:20:50.646466515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nm7ld,Uid:ea6da3e1-fd33-4358-8ff0-ea60430199ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\"" Oct 9 03:20:50.650153 containerd[1494]: time="2024-10-09T03:20:50.650128557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 03:20:50.675296 containerd[1494]: time="2024-10-09T03:20:50.675204306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76548b5566-x4qcp,Uid:1f3a36a8-5896-4891-a912-13a1400b3a47,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d6e11d273105cdcf2d8156e4acdda675ecacdc4634440f90ff32fd6826de491\"" Oct 9 03:20:51.763302 kubelet[2750]: E1009 03:20:51.762870 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653" Oct 9 03:20:52.276579 containerd[1494]: time="2024-10-09T03:20:52.276507396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:20:52.277724 containerd[1494]: time="2024-10-09T03:20:52.277672039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 03:20:52.278918 containerd[1494]: time="2024-10-09T03:20:52.278866217Z" level=info 
msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:20:52.280415 containerd[1494]: time="2024-10-09T03:20:52.280379208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:20:52.281352 containerd[1494]: time="2024-10-09T03:20:52.281191664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.631034132s" Oct 9 03:20:52.281352 containerd[1494]: time="2024-10-09T03:20:52.281229786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 03:20:52.282386 containerd[1494]: time="2024-10-09T03:20:52.282217264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 03:20:52.283189 containerd[1494]: time="2024-10-09T03:20:52.283167711Z" level=info msg="CreateContainer within sandbox \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 03:20:52.299928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789332009.mount: Deactivated successfully. 
Oct 9 03:20:52.304689 containerd[1494]: time="2024-10-09T03:20:52.304644636Z" level=info msg="CreateContainer within sandbox \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b\"" Oct 9 03:20:52.305408 containerd[1494]: time="2024-10-09T03:20:52.305373795Z" level=info msg="StartContainer for \"e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b\"" Oct 9 03:20:52.337660 systemd[1]: Started cri-containerd-e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b.scope - libcontainer container e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b. Oct 9 03:20:52.375303 containerd[1494]: time="2024-10-09T03:20:52.375170595Z" level=info msg="StartContainer for \"e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b\" returns successfully" Oct 9 03:20:52.386858 systemd[1]: cri-containerd-e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b.scope: Deactivated successfully. Oct 9 03:20:52.413172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b-rootfs.mount: Deactivated successfully. 
Oct 9 03:20:52.426542 containerd[1494]: time="2024-10-09T03:20:52.426463944Z" level=info msg="shim disconnected" id=e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b namespace=k8s.io Oct 9 03:20:52.426767 containerd[1494]: time="2024-10-09T03:20:52.426544106Z" level=warning msg="cleaning up after shim disconnected" id=e9b620e98e5e51f2240c5a8346223b66fef33c651c8663d3650fd2d2ea1eaf0b namespace=k8s.io Oct 9 03:20:52.426767 containerd[1494]: time="2024-10-09T03:20:52.426555908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 03:20:53.760015 kubelet[2750]: E1009 03:20:53.759644 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653" Oct 9 03:20:54.894353 containerd[1494]: time="2024-10-09T03:20:54.894310772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:20:54.895306 containerd[1494]: time="2024-10-09T03:20:54.895253413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 03:20:54.895810 containerd[1494]: time="2024-10-09T03:20:54.895782012Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:20:54.914172 containerd[1494]: time="2024-10-09T03:20:54.914127322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:20:54.916056 containerd[1494]: time="2024-10-09T03:20:54.915995442Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.633749004s" Oct 9 03:20:54.916056 containerd[1494]: time="2024-10-09T03:20:54.916052089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 03:20:54.917516 containerd[1494]: time="2024-10-09T03:20:54.917455631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 03:20:54.938735 containerd[1494]: time="2024-10-09T03:20:54.938686864Z" level=info msg="CreateContainer within sandbox \"4d6e11d273105cdcf2d8156e4acdda675ecacdc4634440f90ff32fd6826de491\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 03:20:54.955697 containerd[1494]: time="2024-10-09T03:20:54.955606561Z" level=info msg="CreateContainer within sandbox \"4d6e11d273105cdcf2d8156e4acdda675ecacdc4634440f90ff32fd6826de491\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d3aacf9111d80e6d8b4864330209758a1c022fb1dd527c68cecd0c64e8cec2b1\"" Oct 9 03:20:54.957187 containerd[1494]: time="2024-10-09T03:20:54.956292807Z" level=info msg="StartContainer for \"d3aacf9111d80e6d8b4864330209758a1c022fb1dd527c68cecd0c64e8cec2b1\"" Oct 9 03:20:54.992571 systemd[1]: Started cri-containerd-d3aacf9111d80e6d8b4864330209758a1c022fb1dd527c68cecd0c64e8cec2b1.scope - libcontainer container d3aacf9111d80e6d8b4864330209758a1c022fb1dd527c68cecd0c64e8cec2b1. 
Oct 9 03:20:55.034507 containerd[1494]: time="2024-10-09T03:20:55.034467990Z" level=info msg="StartContainer for \"d3aacf9111d80e6d8b4864330209758a1c022fb1dd527c68cecd0c64e8cec2b1\" returns successfully"
Oct 9 03:20:55.759484 kubelet[2750]: E1009 03:20:55.758798 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653"
Oct 9 03:20:55.865677 kubelet[2750]: I1009 03:20:55.865644 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-76548b5566-x4qcp" podStartSLOduration=1.62575508 podStartE2EDuration="5.865604171s" podCreationTimestamp="2024-10-09 03:20:50 +0000 UTC" firstStartedPulling="2024-10-09 03:20:50.676530497 +0000 UTC m=+19.035441225" lastFinishedPulling="2024-10-09 03:20:54.916379557 +0000 UTC m=+23.275290316" observedRunningTime="2024-10-09 03:20:55.863769255 +0000 UTC m=+24.222679983" watchObservedRunningTime="2024-10-09 03:20:55.865604171 +0000 UTC m=+24.224514899"
Oct 9 03:20:56.855057 kubelet[2750]: I1009 03:20:56.855019 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 03:20:57.760281 kubelet[2750]: E1009 03:20:57.758643 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653"
Oct 9 03:20:58.138174 kubelet[2750]: I1009 03:20:58.137308 2750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 03:20:59.501567 containerd[1494]: time="2024-10-09T03:20:59.500029047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:59.505524 containerd[1494]: time="2024-10-09T03:20:59.505491358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Oct 9 03:20:59.506242 containerd[1494]: time="2024-10-09T03:20:59.506201196Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:59.509826 containerd[1494]: time="2024-10-09T03:20:59.509786929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 03:20:59.510908 containerd[1494]: time="2024-10-09T03:20:59.510883626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.59340349s"
Oct 9 03:20:59.511024 containerd[1494]: time="2024-10-09T03:20:59.511007610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Oct 9 03:20:59.513281 containerd[1494]: time="2024-10-09T03:20:59.513262293Z" level=info msg="CreateContainer within sandbox \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 9 03:20:59.559479 containerd[1494]: time="2024-10-09T03:20:59.559291498Z" level=info msg="CreateContainer within sandbox \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958\""
Oct 9 03:20:59.561474 containerd[1494]: time="2024-10-09T03:20:59.561246213Z" level=info msg="StartContainer for \"8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958\""
Oct 9 03:20:59.651543 systemd[1]: Started cri-containerd-8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958.scope - libcontainer container 8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958.
Oct 9 03:20:59.691270 containerd[1494]: time="2024-10-09T03:20:59.690994831Z" level=info msg="StartContainer for \"8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958\" returns successfully"
Oct 9 03:20:59.761628 kubelet[2750]: E1009 03:20:59.761208 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653"
Oct 9 03:21:00.091835 systemd[1]: cri-containerd-8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958.scope: Deactivated successfully.
Oct 9 03:21:00.117354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958-rootfs.mount: Deactivated successfully.
Oct 9 03:21:00.134197 kubelet[2750]: I1009 03:21:00.133699 2750 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 9 03:21:00.150422 containerd[1494]: time="2024-10-09T03:21:00.150125662Z" level=info msg="shim disconnected" id=8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958 namespace=k8s.io
Oct 9 03:21:00.150422 containerd[1494]: time="2024-10-09T03:21:00.150414939Z" level=warning msg="cleaning up after shim disconnected" id=8eeb3b2c4d70408380b60ad6e20f9f1c9d60acc3b4b6c6f6d95c01a3782a3958 namespace=k8s.io
Oct 9 03:21:00.150580 containerd[1494]: time="2024-10-09T03:21:00.150424015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 03:21:00.164148 kubelet[2750]: I1009 03:21:00.162336 2750 topology_manager.go:215] "Topology Admit Handler" podUID="ba2e80de-0263-4953-b0ee-dade88e7c83a" podNamespace="kube-system" podName="coredns-76f75df574-ln8cr"
Oct 9 03:21:00.165979 kubelet[2750]: I1009 03:21:00.165962 2750 topology_manager.go:215] "Topology Admit Handler" podUID="2e50c61e-8f26-4b36-9daa-1813c9c977f4" podNamespace="kube-system" podName="coredns-76f75df574-xx9v6"
Oct 9 03:21:00.170126 kubelet[2750]: I1009 03:21:00.170056 2750 topology_manager.go:215] "Topology Admit Handler" podUID="07468988-3201-4dac-a2aa-9cce132fd342" podNamespace="calico-system" podName="calico-kube-controllers-9994f4c68-bc6n2"
Oct 9 03:21:00.171371 kubelet[2750]: W1009 03:21:00.171322 2750 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4116-0-0-d-cd8c2d08d9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4116-0-0-d-cd8c2d08d9' and this object
Oct 9 03:21:00.172491 kubelet[2750]: E1009 03:21:00.172477 2750 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4116-0-0-d-cd8c2d08d9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4116-0-0-d-cd8c2d08d9' and this object
Oct 9 03:21:00.175471 systemd[1]: Created slice kubepods-burstable-podba2e80de_0263_4953_b0ee_dade88e7c83a.slice - libcontainer container kubepods-burstable-podba2e80de_0263_4953_b0ee_dade88e7c83a.slice.
Oct 9 03:21:00.187962 systemd[1]: Created slice kubepods-burstable-pod2e50c61e_8f26_4b36_9daa_1813c9c977f4.slice - libcontainer container kubepods-burstable-pod2e50c61e_8f26_4b36_9daa_1813c9c977f4.slice.
Oct 9 03:21:00.194991 systemd[1]: Created slice kubepods-besteffort-pod07468988_3201_4dac_a2aa_9cce132fd342.slice - libcontainer container kubepods-besteffort-pod07468988_3201_4dac_a2aa_9cce132fd342.slice.
Oct 9 03:21:00.251111 kubelet[2750]: I1009 03:21:00.251037 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e50c61e-8f26-4b36-9daa-1813c9c977f4-config-volume\") pod \"coredns-76f75df574-xx9v6\" (UID: \"2e50c61e-8f26-4b36-9daa-1813c9c977f4\") " pod="kube-system/coredns-76f75df574-xx9v6"
Oct 9 03:21:00.251111 kubelet[2750]: I1009 03:21:00.251106 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba2e80de-0263-4953-b0ee-dade88e7c83a-config-volume\") pod \"coredns-76f75df574-ln8cr\" (UID: \"ba2e80de-0263-4953-b0ee-dade88e7c83a\") " pod="kube-system/coredns-76f75df574-ln8cr"
Oct 9 03:21:00.251575 kubelet[2750]: I1009 03:21:00.251127 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9sgb\" (UniqueName: \"kubernetes.io/projected/07468988-3201-4dac-a2aa-9cce132fd342-kube-api-access-w9sgb\") pod \"calico-kube-controllers-9994f4c68-bc6n2\" (UID: \"07468988-3201-4dac-a2aa-9cce132fd342\") " pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2"
Oct 9 03:21:00.251575 kubelet[2750]: I1009 03:21:00.251147 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226kn\" (UniqueName: \"kubernetes.io/projected/2e50c61e-8f26-4b36-9daa-1813c9c977f4-kube-api-access-226kn\") pod \"coredns-76f75df574-xx9v6\" (UID: \"2e50c61e-8f26-4b36-9daa-1813c9c977f4\") " pod="kube-system/coredns-76f75df574-xx9v6"
Oct 9 03:21:00.251575 kubelet[2750]: I1009 03:21:00.251169 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07468988-3201-4dac-a2aa-9cce132fd342-tigera-ca-bundle\") pod \"calico-kube-controllers-9994f4c68-bc6n2\" (UID: \"07468988-3201-4dac-a2aa-9cce132fd342\") " pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2"
Oct 9 03:21:00.251575 kubelet[2750]: I1009 03:21:00.251189 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kgw\" (UniqueName: \"kubernetes.io/projected/ba2e80de-0263-4953-b0ee-dade88e7c83a-kube-api-access-v8kgw\") pod \"coredns-76f75df574-ln8cr\" (UID: \"ba2e80de-0263-4953-b0ee-dade88e7c83a\") " pod="kube-system/coredns-76f75df574-ln8cr"
Oct 9 03:21:00.499104 containerd[1494]: time="2024-10-09T03:21:00.498879404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9994f4c68-bc6n2,Uid:07468988-3201-4dac-a2aa-9cce132fd342,Namespace:calico-system,Attempt:0,}"
Oct 9 03:21:00.627702 containerd[1494]: time="2024-10-09T03:21:00.627653721Z" level=error msg="Failed to destroy network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:00.629905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92-shm.mount: Deactivated successfully.
Oct 9 03:21:00.634555 containerd[1494]: time="2024-10-09T03:21:00.634519004Z" level=error msg="encountered an error cleaning up failed sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:00.634616 containerd[1494]: time="2024-10-09T03:21:00.634571132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9994f4c68-bc6n2,Uid:07468988-3201-4dac-a2aa-9cce132fd342,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:00.634971 kubelet[2750]: E1009 03:21:00.634942 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:00.635026 kubelet[2750]: E1009 03:21:00.634997 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2"
Oct 9 03:21:00.635026 kubelet[2750]: E1009 03:21:00.635016 2750 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2"
Oct 9 03:21:00.635111 kubelet[2750]: E1009 03:21:00.635088 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9994f4c68-bc6n2_calico-system(07468988-3201-4dac-a2aa-9cce132fd342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9994f4c68-bc6n2_calico-system(07468988-3201-4dac-a2aa-9cce132fd342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2" podUID="07468988-3201-4dac-a2aa-9cce132fd342"
Oct 9 03:21:00.866175 containerd[1494]: time="2024-10-09T03:21:00.866122614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 9 03:21:00.867621 kubelet[2750]: I1009 03:21:00.867315 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92"
Oct 9 03:21:00.868489 containerd[1494]: time="2024-10-09T03:21:00.868150407Z" level=info msg="StopPodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\""
Oct 9 03:21:00.882089 containerd[1494]: time="2024-10-09T03:21:00.882060843Z" level=info msg="Ensure that sandbox 2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92 in task-service has been cleanup successfully"
Oct 9 03:21:00.906028 containerd[1494]: time="2024-10-09T03:21:00.905963706Z" level=error msg="StopPodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" failed" error="failed to destroy network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:00.906182 kubelet[2750]: E1009 03:21:00.906156 2750 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92"
Oct 9 03:21:00.906258 kubelet[2750]: E1009 03:21:00.906225 2750 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92"}
Oct 9 03:21:00.906285 kubelet[2750]: E1009 03:21:00.906267 2750 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07468988-3201-4dac-a2aa-9cce132fd342\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 03:21:00.906341 kubelet[2750]: E1009 03:21:00.906296 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07468988-3201-4dac-a2aa-9cce132fd342\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2" podUID="07468988-3201-4dac-a2aa-9cce132fd342"
Oct 9 03:21:01.353823 kubelet[2750]: E1009 03:21:01.353777 2750 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Oct 9 03:21:01.353980 kubelet[2750]: E1009 03:21:01.353898 2750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba2e80de-0263-4953-b0ee-dade88e7c83a-config-volume podName:ba2e80de-0263-4953-b0ee-dade88e7c83a nodeName:}" failed. No retries permitted until 2024-10-09 03:21:01.853866045 +0000 UTC m=+30.212776783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ba2e80de-0263-4953-b0ee-dade88e7c83a-config-volume") pod "coredns-76f75df574-ln8cr" (UID: "ba2e80de-0263-4953-b0ee-dade88e7c83a") : failed to sync configmap cache: timed out waiting for the condition
Oct 9 03:21:01.354506 kubelet[2750]: E1009 03:21:01.353777 2750 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Oct 9 03:21:01.354506 kubelet[2750]: E1009 03:21:01.354338 2750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e50c61e-8f26-4b36-9daa-1813c9c977f4-config-volume podName:2e50c61e-8f26-4b36-9daa-1813c9c977f4 nodeName:}" failed. No retries permitted until 2024-10-09 03:21:01.854314589 +0000 UTC m=+30.213225327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2e50c61e-8f26-4b36-9daa-1813c9c977f4-config-volume") pod "coredns-76f75df574-xx9v6" (UID: "2e50c61e-8f26-4b36-9daa-1813c9c977f4") : failed to sync configmap cache: timed out waiting for the condition
Oct 9 03:21:01.764778 systemd[1]: Created slice kubepods-besteffort-podc7783cb5_3b65_4530_8afe_0621f9daa653.slice - libcontainer container kubepods-besteffort-podc7783cb5_3b65_4530_8afe_0621f9daa653.slice.
Oct 9 03:21:01.768092 containerd[1494]: time="2024-10-09T03:21:01.768054027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fwf8,Uid:c7783cb5-3b65-4530-8afe-0621f9daa653,Namespace:calico-system,Attempt:0,}"
Oct 9 03:21:01.824583 containerd[1494]: time="2024-10-09T03:21:01.824544187Z" level=error msg="Failed to destroy network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:01.824959 containerd[1494]: time="2024-10-09T03:21:01.824903876Z" level=error msg="encountered an error cleaning up failed sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:01.824959 containerd[1494]: time="2024-10-09T03:21:01.824952867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fwf8,Uid:c7783cb5-3b65-4530-8afe-0621f9daa653,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:01.826883 kubelet[2750]: E1009 03:21:01.826554 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:01.826883 kubelet[2750]: E1009 03:21:01.826605 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:21:01.826883 kubelet[2750]: E1009 03:21:01.826628 2750 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8fwf8"
Oct 9 03:21:01.827064 kubelet[2750]: E1009 03:21:01.826680 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8fwf8_calico-system(c7783cb5-3b65-4530-8afe-0621f9daa653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8fwf8_calico-system(c7783cb5-3b65-4530-8afe-0621f9daa653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653"
Oct 9 03:21:01.828060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f-shm.mount: Deactivated successfully.
Oct 9 03:21:01.869479 kubelet[2750]: I1009 03:21:01.869457 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f"
Oct 9 03:21:01.870347 containerd[1494]: time="2024-10-09T03:21:01.869934247Z" level=info msg="StopPodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\""
Oct 9 03:21:01.870347 containerd[1494]: time="2024-10-09T03:21:01.870134604Z" level=info msg="Ensure that sandbox b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f in task-service has been cleanup successfully"
Oct 9 03:21:01.906334 containerd[1494]: time="2024-10-09T03:21:01.906280781Z" level=error msg="StopPodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" failed" error="failed to destroy network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:01.906534 kubelet[2750]: E1009 03:21:01.906499 2750 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f"
Oct 9 03:21:01.906607 kubelet[2750]: E1009 03:21:01.906542 2750 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f"}
Oct 9 03:21:01.906607 kubelet[2750]: E1009 03:21:01.906572 2750 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7783cb5-3b65-4530-8afe-0621f9daa653\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 03:21:01.906607 kubelet[2750]: E1009 03:21:01.906605 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7783cb5-3b65-4530-8afe-0621f9daa653\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8fwf8" podUID="c7783cb5-3b65-4530-8afe-0621f9daa653"
Oct 9 03:21:01.986900 containerd[1494]: time="2024-10-09T03:21:01.986614390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ln8cr,Uid:ba2e80de-0263-4953-b0ee-dade88e7c83a,Namespace:kube-system,Attempt:0,}"
Oct 9 03:21:01.992107 containerd[1494]: time="2024-10-09T03:21:01.992067878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xx9v6,Uid:2e50c61e-8f26-4b36-9daa-1813c9c977f4,Namespace:kube-system,Attempt:0,}"
Oct 9 03:21:02.063261 containerd[1494]: time="2024-10-09T03:21:02.063116387Z" level=error msg="Failed to destroy network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.063959 containerd[1494]: time="2024-10-09T03:21:02.063928106Z" level=error msg="encountered an error cleaning up failed sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.064115 containerd[1494]: time="2024-10-09T03:21:02.064087366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ln8cr,Uid:ba2e80de-0263-4953-b0ee-dade88e7c83a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.064521 kubelet[2750]: E1009 03:21:02.064497 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.065710 kubelet[2750]: E1009 03:21:02.064858 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ln8cr"
Oct 9 03:21:02.065710 kubelet[2750]: E1009 03:21:02.064884 2750 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ln8cr"
Oct 9 03:21:02.065710 kubelet[2750]: E1009 03:21:02.064955 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ln8cr_kube-system(ba2e80de-0263-4953-b0ee-dade88e7c83a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ln8cr_kube-system(ba2e80de-0263-4953-b0ee-dade88e7c83a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ln8cr" podUID="ba2e80de-0263-4953-b0ee-dade88e7c83a"
Oct 9 03:21:02.085420 containerd[1494]: time="2024-10-09T03:21:02.085369697Z" level=error msg="Failed to destroy network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.085721 containerd[1494]: time="2024-10-09T03:21:02.085680814Z" level=error msg="encountered an error cleaning up failed sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.085763 containerd[1494]: time="2024-10-09T03:21:02.085723594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xx9v6,Uid:2e50c61e-8f26-4b36-9daa-1813c9c977f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.085932 kubelet[2750]: E1009 03:21:02.085902 2750 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.085986 kubelet[2750]: E1009 03:21:02.085951 2750 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xx9v6"
Oct 9 03:21:02.085986 kubelet[2750]: E1009 03:21:02.085971 2750 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xx9v6"
Oct 9 03:21:02.086051 kubelet[2750]: E1009 03:21:02.086035 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xx9v6_kube-system(2e50c61e-8f26-4b36-9daa-1813c9c977f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xx9v6_kube-system(2e50c61e-8f26-4b36-9daa-1813c9c977f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xx9v6" podUID="2e50c61e-8f26-4b36-9daa-1813c9c977f4"
Oct 9 03:21:02.779037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349-shm.mount: Deactivated successfully.
Oct 9 03:21:02.779156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88-shm.mount: Deactivated successfully.
Oct 9 03:21:02.871851 kubelet[2750]: I1009 03:21:02.871801 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349"
Oct 9 03:21:02.872841 containerd[1494]: time="2024-10-09T03:21:02.872425240Z" level=info msg="StopPodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\""
Oct 9 03:21:02.872841 containerd[1494]: time="2024-10-09T03:21:02.872616861Z" level=info msg="Ensure that sandbox 3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349 in task-service has been cleanup successfully"
Oct 9 03:21:02.875428 kubelet[2750]: I1009 03:21:02.875182 2750 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88"
Oct 9 03:21:02.876326 containerd[1494]: time="2024-10-09T03:21:02.875938911Z" level=info msg="StopPodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\""
Oct 9 03:21:02.876326 containerd[1494]: time="2024-10-09T03:21:02.876081189Z" level=info msg="Ensure that sandbox 84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88 in task-service has been cleanup successfully"
Oct 9 03:21:02.918752 containerd[1494]: time="2024-10-09T03:21:02.918716454Z" level=error msg="StopPodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" failed" error="failed to destroy network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.919103 kubelet[2750]: E1009 03:21:02.919039 2750 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88"
Oct 9 03:21:02.919237 kubelet[2750]: E1009 03:21:02.919222 2750 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88"}
Oct 9 03:21:02.919380 kubelet[2750]: E1009 03:21:02.919357 2750 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba2e80de-0263-4953-b0ee-dade88e7c83a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 03:21:02.919562 kubelet[2750]: E1009 03:21:02.919538 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba2e80de-0263-4953-b0ee-dade88e7c83a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ln8cr" podUID="ba2e80de-0263-4953-b0ee-dade88e7c83a"
Oct 9 03:21:02.920949 containerd[1494]: time="2024-10-09T03:21:02.920903185Z" level=error msg="StopPodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" failed" error="failed to destroy network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 03:21:02.921177 kubelet[2750]: E1009 03:21:02.921086 2750 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349"
Oct 9 03:21:02.921177 kubelet[2750]: E1009 03:21:02.921111 2750 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349"}
Oct 9 03:21:02.921177 kubelet[2750]: E1009 03:21:02.921136 2750 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e50c61e-8f26-4b36-9daa-1813c9c977f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 03:21:02.921177 kubelet[2750]: E1009 03:21:02.921160 2750 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e50c61e-8f26-4b36-9daa-1813c9c977f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\\\": plugin type=\\\"calico\\\" failed (delete): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xx9v6" podUID="2e50c61e-8f26-4b36-9daa-1813c9c977f4" Oct 9 03:21:06.545176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505076070.mount: Deactivated successfully. Oct 9 03:21:06.628218 containerd[1494]: time="2024-10-09T03:21:06.627315590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:06.647572 containerd[1494]: time="2024-10-09T03:21:06.647530245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 03:21:06.649137 containerd[1494]: time="2024-10-09T03:21:06.649082697Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:06.650426 containerd[1494]: time="2024-10-09T03:21:06.650375180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:06.653785 containerd[1494]: time="2024-10-09T03:21:06.653743852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.784833059s" Oct 9 03:21:06.653785 containerd[1494]: time="2024-10-09T03:21:06.653778076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 
03:21:06.721726 containerd[1494]: time="2024-10-09T03:21:06.721673943Z" level=info msg="CreateContainer within sandbox \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 03:21:06.818195 containerd[1494]: time="2024-10-09T03:21:06.818089319Z" level=info msg="CreateContainer within sandbox \"7db18acc3652d258ca365b99e192365480a797fd45efc428093df827839b7db0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f\"" Oct 9 03:21:06.824417 containerd[1494]: time="2024-10-09T03:21:06.823139727Z" level=info msg="StartContainer for \"9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f\"" Oct 9 03:21:06.963548 systemd[1]: Started cri-containerd-9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f.scope - libcontainer container 9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f. Oct 9 03:21:07.006627 containerd[1494]: time="2024-10-09T03:21:07.006585910Z" level=info msg="StartContainer for \"9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f\" returns successfully" Oct 9 03:21:07.081099 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 03:21:07.082826 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 03:21:07.958524 kubelet[2750]: I1009 03:21:07.958487 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-nm7ld" podStartSLOduration=1.939926119 podStartE2EDuration="17.944407118s" podCreationTimestamp="2024-10-09 03:20:50 +0000 UTC" firstStartedPulling="2024-10-09 03:20:50.649635093 +0000 UTC m=+19.008545822" lastFinishedPulling="2024-10-09 03:21:06.654116093 +0000 UTC m=+35.013026821" observedRunningTime="2024-10-09 03:21:07.943674459 +0000 UTC m=+36.302585198" watchObservedRunningTime="2024-10-09 03:21:07.944407118 +0000 UTC m=+36.303317857" Oct 9 03:21:08.700466 kernel: bpftool[3843]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 03:21:08.955156 systemd-networkd[1381]: vxlan.calico: Link UP Oct 9 03:21:08.955165 systemd-networkd[1381]: vxlan.calico: Gained carrier Oct 9 03:21:10.267740 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Oct 9 03:21:14.758670 containerd[1494]: time="2024-10-09T03:21:14.758623386Z" level=info msg="StopPodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\"" Oct 9 03:21:14.759480 containerd[1494]: time="2024-10-09T03:21:14.759196086Z" level=info msg="StopPodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\"" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.829 [INFO][3995] k8s.go 608: Cleaning up netns ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.829 [INFO][3995] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" iface="eth0" netns="/var/run/netns/cni-31e1e827-9056-aa05-5288-1aa647cb4c3f" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.830 [INFO][3995] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" iface="eth0" netns="/var/run/netns/cni-31e1e827-9056-aa05-5288-1aa647cb4c3f" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.830 [INFO][3995] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" iface="eth0" netns="/var/run/netns/cni-31e1e827-9056-aa05-5288-1aa647cb4c3f" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.830 [INFO][3995] k8s.go 615: Releasing IP address(es) ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.830 [INFO][3995] utils.go 188: Calico CNI releasing IP address ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.920 [INFO][4004] ipam_plugin.go 417: Releasing address using handleID ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.922 [INFO][4004] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.923 [INFO][4004] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.929 [WARNING][4004] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.929 [INFO][4004] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.930 [INFO][4004] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:14.935567 containerd[1494]: 2024-10-09 03:21:14.933 [INFO][3995] k8s.go 621: Teardown processing complete. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:14.938606 containerd[1494]: time="2024-10-09T03:21:14.937603029Z" level=info msg="TearDown network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" successfully" Oct 9 03:21:14.938606 containerd[1494]: time="2024-10-09T03:21:14.937629811Z" level=info msg="StopPodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" returns successfully" Oct 9 03:21:14.938606 containerd[1494]: time="2024-10-09T03:21:14.938282586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fwf8,Uid:c7783cb5-3b65-4530-8afe-0621f9daa653,Namespace:calico-system,Attempt:1,}" Oct 9 03:21:14.939554 systemd[1]: run-netns-cni\x2d31e1e827\x2d9056\x2daa05\x2d5288\x2d1aa647cb4c3f.mount: Deactivated successfully. 
Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.825 [INFO][3987] k8s.go 608: Cleaning up netns ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.826 [INFO][3987] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" iface="eth0" netns="/var/run/netns/cni-e70427c4-0545-baa3-9e2b-d79a58b14e72" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.826 [INFO][3987] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" iface="eth0" netns="/var/run/netns/cni-e70427c4-0545-baa3-9e2b-d79a58b14e72" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.828 [INFO][3987] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" iface="eth0" netns="/var/run/netns/cni-e70427c4-0545-baa3-9e2b-d79a58b14e72" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.829 [INFO][3987] k8s.go 615: Releasing IP address(es) ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.829 [INFO][3987] utils.go 188: Calico CNI releasing IP address ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.920 [INFO][4003] ipam_plugin.go 417: Releasing address using handleID ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.922 [INFO][4003] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.930 [INFO][4003] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.934 [WARNING][4003] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.934 [INFO][4003] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.938 [INFO][4003] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:14.945426 containerd[1494]: 2024-10-09 03:21:14.942 [INFO][3987] k8s.go 621: Teardown processing complete. 
ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:14.947097 containerd[1494]: time="2024-10-09T03:21:14.946581555Z" level=info msg="TearDown network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" successfully" Oct 9 03:21:14.947097 containerd[1494]: time="2024-10-09T03:21:14.946598647Z" level=info msg="StopPodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" returns successfully" Oct 9 03:21:14.947471 containerd[1494]: time="2024-10-09T03:21:14.947276160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9994f4c68-bc6n2,Uid:07468988-3201-4dac-a2aa-9cce132fd342,Namespace:calico-system,Attempt:1,}" Oct 9 03:21:14.949217 systemd[1]: run-netns-cni\x2de70427c4\x2d0545\x2dbaa3\x2d9e2b\x2dd79a58b14e72.mount: Deactivated successfully. Oct 9 03:21:15.072919 systemd-networkd[1381]: cali98663d255c2: Link UP Oct 9 03:21:15.073932 systemd-networkd[1381]: cali98663d255c2: Gained carrier Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.002 [INFO][4017] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0 csi-node-driver- calico-system c7783cb5-3b65-4530-8afe-0621f9daa653 724 0 2024-10-09 03:20:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4116-0-0-d-cd8c2d08d9 csi-node-driver-8fwf8 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali98663d255c2 [] []}} ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" 
WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.002 [INFO][4017] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.033 [INFO][4041] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" HandleID="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.040 [INFO][4041] ipam_plugin.go 270: Auto assigning IP ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" HandleID="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-d-cd8c2d08d9", "pod":"csi-node-driver-8fwf8", "timestamp":"2024-10-09 03:21:15.032991766 +0000 UTC"}, Hostname:"ci-4116-0-0-d-cd8c2d08d9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.040 [INFO][4041] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.040 [INFO][4041] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.040 [INFO][4041] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-d-cd8c2d08d9' Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.042 [INFO][4041] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.048 [INFO][4041] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.051 [INFO][4041] ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.053 [INFO][4041] ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.055 [INFO][4041] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.055 [INFO][4041] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.057 [INFO][4041] ipam.go 1685: Creating new handle: k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676 Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.061 [INFO][4041] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.064 [INFO][4041] ipam.go 1216: Successfully claimed IPs: [192.168.44.129/26] 
block=192.168.44.128/26 handle="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.064 [INFO][4041] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.129/26] handle="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.064 [INFO][4041] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:15.102237 containerd[1494]: 2024-10-09 03:21:15.064 [INFO][4041] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.129/26] IPv6=[] ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" HandleID="k8s-pod-network.e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.103976 containerd[1494]: 2024-10-09 03:21:15.067 [INFO][4017] k8s.go 386: Populated endpoint ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7783cb5-3b65-4530-8afe-0621f9daa653", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"", Pod:"csi-node-driver-8fwf8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali98663d255c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:15.103976 containerd[1494]: 2024-10-09 03:21:15.068 [INFO][4017] k8s.go 387: Calico CNI using IPs: [192.168.44.129/32] ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.103976 containerd[1494]: 2024-10-09 03:21:15.069 [INFO][4017] dataplane_linux.go 68: Setting the host side veth name to cali98663d255c2 ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.103976 containerd[1494]: 2024-10-09 03:21:15.073 [INFO][4017] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.103976 containerd[1494]: 2024-10-09 03:21:15.074 [INFO][4017] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7783cb5-3b65-4530-8afe-0621f9daa653", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676", Pod:"csi-node-driver-8fwf8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali98663d255c2", MAC:"1a:41:1c:54:84:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:15.103976 containerd[1494]: 2024-10-09 03:21:15.088 [INFO][4017] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676" Namespace="calico-system" 
Pod="csi-node-driver-8fwf8" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:15.125061 systemd-networkd[1381]: cali8dba60570e5: Link UP Oct 9 03:21:15.125703 systemd-networkd[1381]: cali8dba60570e5: Gained carrier Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.013 [INFO][4027] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0 calico-kube-controllers-9994f4c68- calico-system 07468988-3201-4dac-a2aa-9cce132fd342 723 0 2024-10-09 03:20:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9994f4c68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4116-0-0-d-cd8c2d08d9 calico-kube-controllers-9994f4c68-bc6n2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8dba60570e5 [] []}} ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.013 [INFO][4027] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.039 [INFO][4045] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" 
HandleID="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.046 [INFO][4045] ipam_plugin.go 270: Auto assigning IP ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" HandleID="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ae60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-d-cd8c2d08d9", "pod":"calico-kube-controllers-9994f4c68-bc6n2", "timestamp":"2024-10-09 03:21:15.039199283 +0000 UTC"}, Hostname:"ci-4116-0-0-d-cd8c2d08d9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.046 [INFO][4045] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.065 [INFO][4045] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.065 [INFO][4045] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-d-cd8c2d08d9' Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.067 [INFO][4045] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.071 [INFO][4045] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.082 [INFO][4045] ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.087 [INFO][4045] ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.100 [INFO][4045] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.100 [INFO][4045] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.101 [INFO][4045] ipam.go 1685: Creating new handle: k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1 Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.109 [INFO][4045] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.116 [INFO][4045] ipam.go 1216: Successfully claimed IPs: [192.168.44.130/26] 
block=192.168.44.128/26 handle="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.116 [INFO][4045] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.130/26] handle="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.116 [INFO][4045] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:15.146686 containerd[1494]: 2024-10-09 03:21:15.116 [INFO][4045] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.130/26] IPv6=[] ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" HandleID="k8s-pod-network.70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.148869 containerd[1494]: 2024-10-09 03:21:15.120 [INFO][4027] k8s.go 386: Populated endpoint ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0", GenerateName:"calico-kube-controllers-9994f4c68-", Namespace:"calico-system", SelfLink:"", UID:"07468988-3201-4dac-a2aa-9cce132fd342", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"9994f4c68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"", Pod:"calico-kube-controllers-9994f4c68-bc6n2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dba60570e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:15.148869 containerd[1494]: 2024-10-09 03:21:15.121 [INFO][4027] k8s.go 387: Calico CNI using IPs: [192.168.44.130/32] ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.148869 containerd[1494]: 2024-10-09 03:21:15.121 [INFO][4027] dataplane_linux.go 68: Setting the host side veth name to cali8dba60570e5 ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.148869 containerd[1494]: 2024-10-09 03:21:15.124 [INFO][4027] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" 
WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.148869 containerd[1494]: 2024-10-09 03:21:15.126 [INFO][4027] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0", GenerateName:"calico-kube-controllers-9994f4c68-", Namespace:"calico-system", SelfLink:"", UID:"07468988-3201-4dac-a2aa-9cce132fd342", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9994f4c68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1", Pod:"calico-kube-controllers-9994f4c68-bc6n2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dba60570e5", 
MAC:"42:8c:87:55:11:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:15.148869 containerd[1494]: 2024-10-09 03:21:15.140 [INFO][4027] k8s.go 500: Wrote updated endpoint to datastore ContainerID="70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1" Namespace="calico-system" Pod="calico-kube-controllers-9994f4c68-bc6n2" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:15.188632 containerd[1494]: time="2024-10-09T03:21:15.185352838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:21:15.188632 containerd[1494]: time="2024-10-09T03:21:15.188466497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:21:15.188632 containerd[1494]: time="2024-10-09T03:21:15.188477397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:15.188632 containerd[1494]: time="2024-10-09T03:21:15.188534608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:15.188827 containerd[1494]: time="2024-10-09T03:21:15.188332798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:21:15.188827 containerd[1494]: time="2024-10-09T03:21:15.188379587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:21:15.188827 containerd[1494]: time="2024-10-09T03:21:15.188393184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:15.188827 containerd[1494]: time="2024-10-09T03:21:15.188513256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:15.213584 systemd[1]: Started cri-containerd-70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1.scope - libcontainer container 70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1. Oct 9 03:21:15.215765 systemd[1]: Started cri-containerd-e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676.scope - libcontainer container e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676. Oct 9 03:21:15.240872 containerd[1494]: time="2024-10-09T03:21:15.240844483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fwf8,Uid:c7783cb5-3b65-4530-8afe-0621f9daa653,Namespace:calico-system,Attempt:1,} returns sandbox id \"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676\"" Oct 9 03:21:15.245282 containerd[1494]: time="2024-10-09T03:21:15.245094451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 03:21:15.271937 containerd[1494]: time="2024-10-09T03:21:15.271833103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9994f4c68-bc6n2,Uid:07468988-3201-4dac-a2aa-9cce132fd342,Namespace:calico-system,Attempt:1,} returns sandbox id \"70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1\"" Oct 9 03:21:16.219618 systemd-networkd[1381]: cali8dba60570e5: Gained IPv6LL Oct 9 03:21:16.760180 containerd[1494]: time="2024-10-09T03:21:16.759814805Z" level=info msg="StopPodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\"" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.821 [INFO][4183] k8s.go 608: Cleaning up netns ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:16.865145 containerd[1494]: 
2024-10-09 03:21:16.822 [INFO][4183] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" iface="eth0" netns="/var/run/netns/cni-dea2df65-972e-9525-6c8a-514868d80a64" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.822 [INFO][4183] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" iface="eth0" netns="/var/run/netns/cni-dea2df65-972e-9525-6c8a-514868d80a64" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.823 [INFO][4183] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" iface="eth0" netns="/var/run/netns/cni-dea2df65-972e-9525-6c8a-514868d80a64" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.823 [INFO][4183] k8s.go 615: Releasing IP address(es) ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.823 [INFO][4183] utils.go 188: Calico CNI releasing IP address ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.851 [INFO][4193] ipam_plugin.go 417: Releasing address using handleID ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.851 [INFO][4193] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.851 [INFO][4193] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.856 [WARNING][4193] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.856 [INFO][4193] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.858 [INFO][4193] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:16.865145 containerd[1494]: 2024-10-09 03:21:16.861 [INFO][4183] k8s.go 621: Teardown processing complete. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:16.868503 containerd[1494]: time="2024-10-09T03:21:16.866493338Z" level=info msg="TearDown network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" successfully" Oct 9 03:21:16.868503 containerd[1494]: time="2024-10-09T03:21:16.866522795Z" level=info msg="StopPodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" returns successfully" Oct 9 03:21:16.868153 systemd[1]: run-netns-cni\x2ddea2df65\x2d972e\x2d9525\x2d6c8a\x2d514868d80a64.mount: Deactivated successfully. 
Oct 9 03:21:16.868911 containerd[1494]: time="2024-10-09T03:21:16.868722008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ln8cr,Uid:ba2e80de-0263-4953-b0ee-dade88e7c83a,Namespace:kube-system,Attempt:1,}" Oct 9 03:21:16.965298 containerd[1494]: time="2024-10-09T03:21:16.964728621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:16.966544 containerd[1494]: time="2024-10-09T03:21:16.966500618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 03:21:16.968074 containerd[1494]: time="2024-10-09T03:21:16.968042992Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:16.970862 containerd[1494]: time="2024-10-09T03:21:16.970834190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:16.971489 containerd[1494]: time="2024-10-09T03:21:16.971461805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.726342997s" Oct 9 03:21:16.971543 containerd[1494]: time="2024-10-09T03:21:16.971487645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 03:21:16.974743 containerd[1494]: time="2024-10-09T03:21:16.974715057Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 03:21:16.976130 containerd[1494]: time="2024-10-09T03:21:16.975996946Z" level=info msg="CreateContainer within sandbox \"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 03:21:17.001578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1008697066.mount: Deactivated successfully. Oct 9 03:21:17.004265 systemd-networkd[1381]: cali8a95534e6fe: Link UP Oct 9 03:21:17.005701 systemd-networkd[1381]: cali8a95534e6fe: Gained carrier Oct 9 03:21:17.014360 containerd[1494]: time="2024-10-09T03:21:17.012195686Z" level=info msg="CreateContainer within sandbox \"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1a0971720dcd46617dfe4679d8b3b7ce4447f1f9ed8b0229a19402ae0ab03758\"" Oct 9 03:21:17.014681 containerd[1494]: time="2024-10-09T03:21:17.014659788Z" level=info msg="StartContainer for \"1a0971720dcd46617dfe4679d8b3b7ce4447f1f9ed8b0229a19402ae0ab03758\"" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.929 [INFO][4200] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0 coredns-76f75df574- kube-system ba2e80de-0263-4953-b0ee-dade88e7c83a 737 0 2024-10-09 03:20:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-d-cd8c2d08d9 coredns-76f75df574-ln8cr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a95534e6fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" 
WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.929 [INFO][4200] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.954 [INFO][4214] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" HandleID="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.962 [INFO][4214] ipam_plugin.go 270: Auto assigning IP ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" HandleID="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-d-cd8c2d08d9", "pod":"coredns-76f75df574-ln8cr", "timestamp":"2024-10-09 03:21:16.954568649 +0000 UTC"}, Hostname:"ci-4116-0-0-d-cd8c2d08d9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.962 [INFO][4214] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.962 [INFO][4214] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.962 [INFO][4214] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-d-cd8c2d08d9' Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.963 [INFO][4214] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.968 [INFO][4214] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.974 [INFO][4214] ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.978 [INFO][4214] ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.980 [INFO][4214] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.980 [INFO][4214] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.981 [INFO][4214] ipam.go 1685: Creating new handle: k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793 Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.986 [INFO][4214] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.990 [INFO][4214] ipam.go 1216: Successfully claimed IPs: [192.168.44.131/26] 
block=192.168.44.128/26 handle="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.990 [INFO][4214] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.131/26] handle="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.990 [INFO][4214] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:17.032461 containerd[1494]: 2024-10-09 03:21:16.990 [INFO][4214] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.131/26] IPv6=[] ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" HandleID="k8s-pod-network.e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.034674 containerd[1494]: 2024-10-09 03:21:16.995 [INFO][4200] k8s.go 386: Populated endpoint ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ba2e80de-0263-4953-b0ee-dade88e7c83a", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"", Pod:"coredns-76f75df574-ln8cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a95534e6fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:17.034674 containerd[1494]: 2024-10-09 03:21:16.995 [INFO][4200] k8s.go 387: Calico CNI using IPs: [192.168.44.131/32] ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.034674 containerd[1494]: 2024-10-09 03:21:16.995 [INFO][4200] dataplane_linux.go 68: Setting the host side veth name to cali8a95534e6fe ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.034674 containerd[1494]: 2024-10-09 03:21:17.004 [INFO][4200] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" 
Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.034674 containerd[1494]: 2024-10-09 03:21:17.005 [INFO][4200] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ba2e80de-0263-4953-b0ee-dade88e7c83a", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793", Pod:"coredns-76f75df574-ln8cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a95534e6fe", MAC:"6a:86:66:37:3b:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:17.034674 containerd[1494]: 2024-10-09 03:21:17.023 [INFO][4200] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793" Namespace="kube-system" Pod="coredns-76f75df574-ln8cr" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:17.074530 containerd[1494]: time="2024-10-09T03:21:17.074290937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:21:17.074530 containerd[1494]: time="2024-10-09T03:21:17.074343538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:21:17.074530 containerd[1494]: time="2024-10-09T03:21:17.074356314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:17.074530 containerd[1494]: time="2024-10-09T03:21:17.074457168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:17.089568 systemd[1]: Started cri-containerd-1a0971720dcd46617dfe4679d8b3b7ce4447f1f9ed8b0229a19402ae0ab03758.scope - libcontainer container 1a0971720dcd46617dfe4679d8b3b7ce4447f1f9ed8b0229a19402ae0ab03758. Oct 9 03:21:17.114015 systemd[1]: Started cri-containerd-e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793.scope - libcontainer container e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793. 
Oct 9 03:21:17.115607 systemd-networkd[1381]: cali98663d255c2: Gained IPv6LL Oct 9 03:21:17.169821 containerd[1494]: time="2024-10-09T03:21:17.169778044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ln8cr,Uid:ba2e80de-0263-4953-b0ee-dade88e7c83a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793\"" Oct 9 03:21:17.170239 containerd[1494]: time="2024-10-09T03:21:17.170173578Z" level=info msg="StartContainer for \"1a0971720dcd46617dfe4679d8b3b7ce4447f1f9ed8b0229a19402ae0ab03758\" returns successfully" Oct 9 03:21:17.174495 containerd[1494]: time="2024-10-09T03:21:17.174462208Z" level=info msg="CreateContainer within sandbox \"e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 03:21:17.187745 containerd[1494]: time="2024-10-09T03:21:17.187532096Z" level=info msg="CreateContainer within sandbox \"e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f36d0e454d8fb368be88dc0c957425bcdcdd035075793ed1065322e9f1208c14\"" Oct 9 03:21:17.188349 containerd[1494]: time="2024-10-09T03:21:17.188309860Z" level=info msg="StartContainer for \"f36d0e454d8fb368be88dc0c957425bcdcdd035075793ed1065322e9f1208c14\"" Oct 9 03:21:17.214607 systemd[1]: Started cri-containerd-f36d0e454d8fb368be88dc0c957425bcdcdd035075793ed1065322e9f1208c14.scope - libcontainer container f36d0e454d8fb368be88dc0c957425bcdcdd035075793ed1065322e9f1208c14. 
Oct 9 03:21:17.247196 containerd[1494]: time="2024-10-09T03:21:17.247159719Z" level=info msg="StartContainer for \"f36d0e454d8fb368be88dc0c957425bcdcdd035075793ed1065322e9f1208c14\" returns successfully" Oct 9 03:21:17.760305 containerd[1494]: time="2024-10-09T03:21:17.760190279Z" level=info msg="StopPodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\"" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.810 [INFO][4357] k8s.go 608: Cleaning up netns ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.810 [INFO][4357] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" iface="eth0" netns="/var/run/netns/cni-621fbf3d-d586-84ec-0f41-9ba322210abd" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.811 [INFO][4357] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" iface="eth0" netns="/var/run/netns/cni-621fbf3d-d586-84ec-0f41-9ba322210abd" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.811 [INFO][4357] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" iface="eth0" netns="/var/run/netns/cni-621fbf3d-d586-84ec-0f41-9ba322210abd" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.811 [INFO][4357] k8s.go 615: Releasing IP address(es) ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.811 [INFO][4357] utils.go 188: Calico CNI releasing IP address ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.838 [INFO][4363] ipam_plugin.go 417: Releasing address using handleID ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.838 [INFO][4363] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.838 [INFO][4363] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.842 [WARNING][4363] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.842 [INFO][4363] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.844 [INFO][4363] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:17.849121 containerd[1494]: 2024-10-09 03:21:17.846 [INFO][4357] k8s.go 621: Teardown processing complete. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:17.850761 containerd[1494]: time="2024-10-09T03:21:17.849319998Z" level=info msg="TearDown network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" successfully" Oct 9 03:21:17.850761 containerd[1494]: time="2024-10-09T03:21:17.849343234Z" level=info msg="StopPodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" returns successfully" Oct 9 03:21:17.850761 containerd[1494]: time="2024-10-09T03:21:17.850080218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xx9v6,Uid:2e50c61e-8f26-4b36-9daa-1813c9c977f4,Namespace:kube-system,Attempt:1,}" Oct 9 03:21:17.969258 systemd-networkd[1381]: calid4dc5267e65: Link UP Oct 9 03:21:17.970251 systemd-networkd[1381]: calid4dc5267e65: Gained carrier Oct 9 03:21:17.978238 kubelet[2750]: I1009 03:21:17.976799 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ln8cr" podStartSLOduration=33.976757116 
podStartE2EDuration="33.976757116s" podCreationTimestamp="2024-10-09 03:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 03:21:17.974479293 +0000 UTC m=+46.333390021" watchObservedRunningTime="2024-10-09 03:21:17.976757116 +0000 UTC m=+46.335667843" Oct 9 03:21:18.000318 systemd[1]: run-netns-cni\x2d621fbf3d\x2dd586\x2d84ec\x2d0f41\x2d9ba322210abd.mount: Deactivated successfully. Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.893 [INFO][4369] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0 coredns-76f75df574- kube-system 2e50c61e-8f26-4b36-9daa-1813c9c977f4 751 0 2024-10-09 03:20:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-d-cd8c2d08d9 coredns-76f75df574-xx9v6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4dc5267e65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.894 [INFO][4369] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.926 [INFO][4381] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" 
HandleID="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.935 [INFO][4381] ipam_plugin.go 270: Auto assigning IP ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" HandleID="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318400), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-d-cd8c2d08d9", "pod":"coredns-76f75df574-xx9v6", "timestamp":"2024-10-09 03:21:17.926940445 +0000 UTC"}, Hostname:"ci-4116-0-0-d-cd8c2d08d9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.935 [INFO][4381] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.935 [INFO][4381] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.935 [INFO][4381] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-d-cd8c2d08d9' Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.936 [INFO][4381] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.940 [INFO][4381] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.943 [INFO][4381] ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.944 [INFO][4381] ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.946 [INFO][4381] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.946 [INFO][4381] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.948 [INFO][4381] ipam.go 1685: Creating new handle: k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.953 [INFO][4381] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.960 [INFO][4381] ipam.go 1216: Successfully claimed IPs: [192.168.44.132/26] 
block=192.168.44.128/26 handle="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.962 [INFO][4381] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.132/26] handle="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.962 [INFO][4381] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:18.011426 containerd[1494]: 2024-10-09 03:21:17.962 [INFO][4381] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.132/26] IPv6=[] ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" HandleID="k8s-pod-network.a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.012015 containerd[1494]: 2024-10-09 03:21:17.965 [INFO][4369] k8s.go 386: Populated endpoint ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e50c61e-8f26-4b36-9daa-1813c9c977f4", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"", Pod:"coredns-76f75df574-xx9v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4dc5267e65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:18.012015 containerd[1494]: 2024-10-09 03:21:17.966 [INFO][4369] k8s.go 387: Calico CNI using IPs: [192.168.44.132/32] ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.012015 containerd[1494]: 2024-10-09 03:21:17.966 [INFO][4369] dataplane_linux.go 68: Setting the host side veth name to calid4dc5267e65 ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.012015 containerd[1494]: 2024-10-09 03:21:17.970 [INFO][4369] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" 
Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.012015 containerd[1494]: 2024-10-09 03:21:17.972 [INFO][4369] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e50c61e-8f26-4b36-9daa-1813c9c977f4", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d", Pod:"coredns-76f75df574-xx9v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4dc5267e65", MAC:"e2:84:11:7b:11:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:18.012015 containerd[1494]: 2024-10-09 03:21:17.995 [INFO][4369] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d" Namespace="kube-system" Pod="coredns-76f75df574-xx9v6" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:18.063246 containerd[1494]: time="2024-10-09T03:21:18.063136545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:21:18.063376 containerd[1494]: time="2024-10-09T03:21:18.063289580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:21:18.063565 containerd[1494]: time="2024-10-09T03:21:18.063483124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:18.064657 containerd[1494]: time="2024-10-09T03:21:18.064376640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:18.094353 systemd[1]: Started cri-containerd-a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d.scope - libcontainer container a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d. 
Oct 9 03:21:18.135769 containerd[1494]: time="2024-10-09T03:21:18.135622437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xx9v6,Uid:2e50c61e-8f26-4b36-9daa-1813c9c977f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d\"" Oct 9 03:21:18.138212 containerd[1494]: time="2024-10-09T03:21:18.138180137Z" level=info msg="CreateContainer within sandbox \"a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 03:21:18.151288 containerd[1494]: time="2024-10-09T03:21:18.151191964Z" level=info msg="CreateContainer within sandbox \"a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36ec40171881c99318eaf6b245c887dfde66bc7755930b30fc9fd5b1e29e1caa\"" Oct 9 03:21:18.153471 containerd[1494]: time="2024-10-09T03:21:18.153217557Z" level=info msg="StartContainer for \"36ec40171881c99318eaf6b245c887dfde66bc7755930b30fc9fd5b1e29e1caa\"" Oct 9 03:21:18.183571 systemd[1]: Started cri-containerd-36ec40171881c99318eaf6b245c887dfde66bc7755930b30fc9fd5b1e29e1caa.scope - libcontainer container 36ec40171881c99318eaf6b245c887dfde66bc7755930b30fc9fd5b1e29e1caa. 
Oct 9 03:21:18.214835 containerd[1494]: time="2024-10-09T03:21:18.214654081Z" level=info msg="StartContainer for \"36ec40171881c99318eaf6b245c887dfde66bc7755930b30fc9fd5b1e29e1caa\" returns successfully" Oct 9 03:21:18.908535 systemd-networkd[1381]: cali8a95534e6fe: Gained IPv6LL Oct 9 03:21:19.164289 systemd-networkd[1381]: calid4dc5267e65: Gained IPv6LL Oct 9 03:21:19.530233 containerd[1494]: time="2024-10-09T03:21:19.529794774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:19.530886 containerd[1494]: time="2024-10-09T03:21:19.530773914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 03:21:19.531611 containerd[1494]: time="2024-10-09T03:21:19.531570141Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:19.533821 containerd[1494]: time="2024-10-09T03:21:19.533759736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:19.534388 containerd[1494]: time="2024-10-09T03:21:19.534350276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.559605701s" Oct 9 03:21:19.534889 containerd[1494]: time="2024-10-09T03:21:19.534392667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 03:21:19.535162 containerd[1494]: time="2024-10-09T03:21:19.535127976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 03:21:19.554526 containerd[1494]: time="2024-10-09T03:21:19.554424082Z" level=info msg="CreateContainer within sandbox \"70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 03:21:19.568402 containerd[1494]: time="2024-10-09T03:21:19.568287925Z" level=info msg="CreateContainer within sandbox \"70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b\"" Oct 9 03:21:19.569981 containerd[1494]: time="2024-10-09T03:21:19.569945584Z" level=info msg="StartContainer for \"fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b\"" Oct 9 03:21:19.612618 systemd[1]: Started cri-containerd-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b.scope - libcontainer container fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b. 
Oct 9 03:21:19.681048 containerd[1494]: time="2024-10-09T03:21:19.681009093Z" level=info msg="StartContainer for \"fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b\" returns successfully" Oct 9 03:21:19.988692 kubelet[2750]: I1009 03:21:19.987924 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xx9v6" podStartSLOduration=35.987867157 podStartE2EDuration="35.987867157s" podCreationTimestamp="2024-10-09 03:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 03:21:18.983733165 +0000 UTC m=+47.342643913" watchObservedRunningTime="2024-10-09 03:21:19.987867157 +0000 UTC m=+48.346777885" Oct 9 03:21:20.025169 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.XHpKak.mount: Deactivated successfully. Oct 9 03:21:20.074184 kubelet[2750]: I1009 03:21:20.072625 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9994f4c68-bc6n2" podStartSLOduration=25.813883109 podStartE2EDuration="30.072585976s" podCreationTimestamp="2024-10-09 03:20:50 +0000 UTC" firstStartedPulling="2024-10-09 03:21:15.27603104 +0000 UTC m=+43.634941768" lastFinishedPulling="2024-10-09 03:21:19.534733906 +0000 UTC m=+47.893644635" observedRunningTime="2024-10-09 03:21:19.98975983 +0000 UTC m=+48.348670558" watchObservedRunningTime="2024-10-09 03:21:20.072585976 +0000 UTC m=+48.431496705" Oct 9 03:21:21.467563 containerd[1494]: time="2024-10-09T03:21:21.467484239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:21.468345 containerd[1494]: time="2024-10-09T03:21:21.468243302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes 
read=12907822" Oct 9 03:21:21.469133 containerd[1494]: time="2024-10-09T03:21:21.469088431Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:21.470820 containerd[1494]: time="2024-10-09T03:21:21.470783397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:21.471509 containerd[1494]: time="2024-10-09T03:21:21.471257451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.936090057s" Oct 9 03:21:21.471509 containerd[1494]: time="2024-10-09T03:21:21.471282919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 03:21:21.474519 containerd[1494]: time="2024-10-09T03:21:21.474368655Z" level=info msg="CreateContainer within sandbox \"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 03:21:21.491788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319394448.mount: Deactivated successfully. 
Oct 9 03:21:21.493987 containerd[1494]: time="2024-10-09T03:21:21.493945688Z" level=info msg="CreateContainer within sandbox \"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6bd83978574a0cd77d5c328c33fe6f0f5322d6157650d3dffca2032d3cdfc1e5\"" Oct 9 03:21:21.494638 containerd[1494]: time="2024-10-09T03:21:21.494568718Z" level=info msg="StartContainer for \"6bd83978574a0cd77d5c328c33fe6f0f5322d6157650d3dffca2032d3cdfc1e5\"" Oct 9 03:21:21.544700 systemd[1]: Started cri-containerd-6bd83978574a0cd77d5c328c33fe6f0f5322d6157650d3dffca2032d3cdfc1e5.scope - libcontainer container 6bd83978574a0cd77d5c328c33fe6f0f5322d6157650d3dffca2032d3cdfc1e5. Oct 9 03:21:21.578082 containerd[1494]: time="2024-10-09T03:21:21.577907293Z" level=info msg="StartContainer for \"6bd83978574a0cd77d5c328c33fe6f0f5322d6157650d3dffca2032d3cdfc1e5\" returns successfully" Oct 9 03:21:21.898207 kubelet[2750]: I1009 03:21:21.898118 2750 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 03:21:21.901566 kubelet[2750]: I1009 03:21:21.901551 2750 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 03:21:30.777047 kubelet[2750]: I1009 03:21:30.776692 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8fwf8" podStartSLOduration=34.54862832 podStartE2EDuration="40.776642211s" podCreationTimestamp="2024-10-09 03:20:50 +0000 UTC" firstStartedPulling="2024-10-09 03:21:15.243505766 +0000 UTC m=+43.602416493" lastFinishedPulling="2024-10-09 03:21:21.471519655 +0000 UTC m=+49.830430384" observedRunningTime="2024-10-09 03:21:21.992407992 +0000 UTC m=+50.351318721" watchObservedRunningTime="2024-10-09 03:21:30.776642211 +0000 UTC 
m=+59.135552959" Oct 9 03:21:31.749074 containerd[1494]: time="2024-10-09T03:21:31.748760997Z" level=info msg="StopPodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\"" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.785 [WARNING][4651] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e50c61e-8f26-4b36-9daa-1813c9c977f4", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d", Pod:"coredns-76f75df574-xx9v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4dc5267e65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.785 [INFO][4651] k8s.go 608: Cleaning up netns ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.785 [INFO][4651] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" iface="eth0" netns="" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.785 [INFO][4651] k8s.go 615: Releasing IP address(es) ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.785 [INFO][4651] utils.go 188: Calico CNI releasing IP address ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.812 [INFO][4659] ipam_plugin.go 417: Releasing address using handleID ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.812 [INFO][4659] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.813 [INFO][4659] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.818 [WARNING][4659] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.818 [INFO][4659] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.819 [INFO][4659] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:31.825177 containerd[1494]: 2024-10-09 03:21:31.822 [INFO][4651] k8s.go 621: Teardown processing complete. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.825657 containerd[1494]: time="2024-10-09T03:21:31.825609241Z" level=info msg="TearDown network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" successfully" Oct 9 03:21:31.825657 containerd[1494]: time="2024-10-09T03:21:31.825644659Z" level=info msg="StopPodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" returns successfully" Oct 9 03:21:31.826310 containerd[1494]: time="2024-10-09T03:21:31.826283492Z" level=info msg="RemovePodSandbox for \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\"" Oct 9 03:21:31.831758 containerd[1494]: time="2024-10-09T03:21:31.831722930Z" level=info msg="Forcibly stopping sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\"" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.864 [WARNING][4677] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e50c61e-8f26-4b36-9daa-1813c9c977f4", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"a642a27db444b4de7536159737bd965c64e79f4ecbfa2c314c2e751eba20799d", Pod:"coredns-76f75df574-xx9v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4dc5267e65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.864 [INFO][4677] k8s.go 
608: Cleaning up netns ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.864 [INFO][4677] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" iface="eth0" netns="" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.864 [INFO][4677] k8s.go 615: Releasing IP address(es) ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.864 [INFO][4677] utils.go 188: Calico CNI releasing IP address ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.885 [INFO][4684] ipam_plugin.go 417: Releasing address using handleID ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.885 [INFO][4684] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.885 [INFO][4684] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.890 [WARNING][4684] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.890 [INFO][4684] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" HandleID="k8s-pod-network.3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--xx9v6-eth0" Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.891 [INFO][4684] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:31.909714 containerd[1494]: 2024-10-09 03:21:31.894 [INFO][4677] k8s.go 621: Teardown processing complete. ContainerID="3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349" Oct 9 03:21:31.911016 containerd[1494]: time="2024-10-09T03:21:31.909762114Z" level=info msg="TearDown network for sandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" successfully" Oct 9 03:21:31.913710 containerd[1494]: time="2024-10-09T03:21:31.913651984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 03:21:31.913873 containerd[1494]: time="2024-10-09T03:21:31.913735786Z" level=info msg="RemovePodSandbox \"3355150776999e2f0f69a34b3019cb9dd625ea3abb3ed0284d9956823d38c349\" returns successfully" Oct 9 03:21:31.914530 containerd[1494]: time="2024-10-09T03:21:31.914206336Z" level=info msg="StopPodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\"" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.949 [WARNING][4702] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0", GenerateName:"calico-kube-controllers-9994f4c68-", Namespace:"calico-system", SelfLink:"", UID:"07468988-3201-4dac-a2aa-9cce132fd342", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9994f4c68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1", Pod:"calico-kube-controllers-9994f4c68-bc6n2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dba60570e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.950 [INFO][4702] k8s.go 608: Cleaning up netns ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.950 [INFO][4702] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" iface="eth0" netns="" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.950 [INFO][4702] k8s.go 615: Releasing IP address(es) ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.950 [INFO][4702] utils.go 188: Calico CNI releasing IP address ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.971 [INFO][4709] ipam_plugin.go 417: Releasing address using handleID ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.971 [INFO][4709] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.971 [INFO][4709] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.977 [WARNING][4709] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.977 [INFO][4709] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.978 [INFO][4709] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:31.988971 containerd[1494]: 2024-10-09 03:21:31.984 [INFO][4702] k8s.go 621: Teardown processing complete. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:31.989484 containerd[1494]: time="2024-10-09T03:21:31.989396254Z" level=info msg="TearDown network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" successfully" Oct 9 03:21:31.990275 containerd[1494]: time="2024-10-09T03:21:31.989425830Z" level=info msg="StopPodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" returns successfully" Oct 9 03:21:31.990599 containerd[1494]: time="2024-10-09T03:21:31.990452767Z" level=info msg="RemovePodSandbox for \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\"" Oct 9 03:21:31.990599 containerd[1494]: time="2024-10-09T03:21:31.990479098Z" level=info msg="Forcibly stopping sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\"" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.034 [WARNING][4727] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0", GenerateName:"calico-kube-controllers-9994f4c68-", Namespace:"calico-system", SelfLink:"", UID:"07468988-3201-4dac-a2aa-9cce132fd342", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9994f4c68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"70547890f52309bf7e3e419ba14118c98bd4b5555461a57a2163f1c5e825e4d1", Pod:"calico-kube-controllers-9994f4c68-bc6n2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8dba60570e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.034 [INFO][4727] k8s.go 608: Cleaning up netns ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.034 [INFO][4727] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" iface="eth0" netns="" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.034 [INFO][4727] k8s.go 615: Releasing IP address(es) ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.034 [INFO][4727] utils.go 188: Calico CNI releasing IP address ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.054 [INFO][4733] ipam_plugin.go 417: Releasing address using handleID ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.054 [INFO][4733] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.055 [INFO][4733] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.060 [WARNING][4733] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.060 [INFO][4733] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" HandleID="k8s-pod-network.2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--kube--controllers--9994f4c68--bc6n2-eth0" Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.062 [INFO][4733] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:32.066612 containerd[1494]: 2024-10-09 03:21:32.064 [INFO][4727] k8s.go 621: Teardown processing complete. ContainerID="2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92" Oct 9 03:21:32.067597 containerd[1494]: time="2024-10-09T03:21:32.066597761Z" level=info msg="TearDown network for sandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" successfully" Oct 9 03:21:32.075681 containerd[1494]: time="2024-10-09T03:21:32.075604642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 03:21:32.075681 containerd[1494]: time="2024-10-09T03:21:32.075704994Z" level=info msg="RemovePodSandbox \"2786c4fdc85d71f3da0468d2152812323d2ecf1708db6175ee8ee0732cb13a92\" returns successfully" Oct 9 03:21:32.076131 containerd[1494]: time="2024-10-09T03:21:32.076101273Z" level=info msg="StopPodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\"" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.114 [WARNING][4752] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ba2e80de-0263-4953-b0ee-dade88e7c83a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793", Pod:"coredns-76f75df574-ln8cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a95534e6fe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.114 [INFO][4752] k8s.go 608: Cleaning up netns ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.114 [INFO][4752] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" iface="eth0" netns="" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.114 [INFO][4752] k8s.go 615: Releasing IP address(es) ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.114 [INFO][4752] utils.go 188: Calico CNI releasing IP address ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.137 [INFO][4759] ipam_plugin.go 417: Releasing address using handleID ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.137 [INFO][4759] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.137 [INFO][4759] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.142 [WARNING][4759] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.143 [INFO][4759] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.144 [INFO][4759] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:32.148735 containerd[1494]: 2024-10-09 03:21:32.146 [INFO][4752] k8s.go 621: Teardown processing complete. 
ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.149119 containerd[1494]: time="2024-10-09T03:21:32.148792093Z" level=info msg="TearDown network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" successfully" Oct 9 03:21:32.149119 containerd[1494]: time="2024-10-09T03:21:32.148825808Z" level=info msg="StopPodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" returns successfully" Oct 9 03:21:32.149539 containerd[1494]: time="2024-10-09T03:21:32.149492724Z" level=info msg="RemovePodSandbox for \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\"" Oct 9 03:21:32.149539 containerd[1494]: time="2024-10-09T03:21:32.149528814Z" level=info msg="Forcibly stopping sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\"" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.188 [WARNING][4777] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ba2e80de-0263-4953-b0ee-dade88e7c83a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"e27384b9f9bc82570abd6d08494690fa12e0a3cee72560d32ed12a75cfb40793", Pod:"coredns-76f75df574-ln8cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a95534e6fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.189 [INFO][4777] k8s.go 
608: Cleaning up netns ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.189 [INFO][4777] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" iface="eth0" netns="" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.189 [INFO][4777] k8s.go 615: Releasing IP address(es) ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.189 [INFO][4777] utils.go 188: Calico CNI releasing IP address ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.214 [INFO][4784] ipam_plugin.go 417: Releasing address using handleID ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.214 [INFO][4784] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.214 [INFO][4784] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.219 [WARNING][4784] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.219 [INFO][4784] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" HandleID="k8s-pod-network.84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-coredns--76f75df574--ln8cr-eth0" Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.221 [INFO][4784] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:32.226061 containerd[1494]: 2024-10-09 03:21:32.223 [INFO][4777] k8s.go 621: Teardown processing complete. ContainerID="84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88" Oct 9 03:21:32.226581 containerd[1494]: time="2024-10-09T03:21:32.226196516Z" level=info msg="TearDown network for sandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" successfully" Oct 9 03:21:32.230230 containerd[1494]: time="2024-10-09T03:21:32.230140336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 03:21:32.230230 containerd[1494]: time="2024-10-09T03:21:32.230209338Z" level=info msg="RemovePodSandbox \"84c4efb17f4a75d4a70db17ce33648783e24c43bfedd141f1753d095e30beb88\" returns successfully" Oct 9 03:21:32.230646 containerd[1494]: time="2024-10-09T03:21:32.230610927Z" level=info msg="StopPodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\"" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.265 [WARNING][4803] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7783cb5-3b65-4530-8afe-0621f9daa653", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676", Pod:"csi-node-driver-8fwf8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali98663d255c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.266 [INFO][4803] k8s.go 608: Cleaning up netns ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.266 [INFO][4803] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" iface="eth0" netns="" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.266 [INFO][4803] k8s.go 615: Releasing IP address(es) ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.266 [INFO][4803] utils.go 188: Calico CNI releasing IP address ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.284 [INFO][4809] ipam_plugin.go 417: Releasing address using handleID ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.284 [INFO][4809] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.284 [INFO][4809] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.289 [WARNING][4809] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.289 [INFO][4809] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.291 [INFO][4809] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:32.295797 containerd[1494]: 2024-10-09 03:21:32.293 [INFO][4803] k8s.go 621: Teardown processing complete. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.296721 containerd[1494]: time="2024-10-09T03:21:32.295836467Z" level=info msg="TearDown network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" successfully" Oct 9 03:21:32.296721 containerd[1494]: time="2024-10-09T03:21:32.295858819Z" level=info msg="StopPodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" returns successfully" Oct 9 03:21:32.296721 containerd[1494]: time="2024-10-09T03:21:32.296248485Z" level=info msg="RemovePodSandbox for \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\"" Oct 9 03:21:32.296721 containerd[1494]: time="2024-10-09T03:21:32.296269756Z" level=info msg="Forcibly stopping sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\"" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.330 [WARNING][4827] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7783cb5-3b65-4530-8afe-0621f9daa653", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"e56ba39def3e6208b4b314dbf7a8782c12807a2f04c0cc662646ecbbe5bfb676", Pod:"csi-node-driver-8fwf8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali98663d255c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.330 [INFO][4827] k8s.go 608: Cleaning up netns ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.330 [INFO][4827] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" iface="eth0" netns="" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.330 [INFO][4827] k8s.go 615: Releasing IP address(es) ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.330 [INFO][4827] utils.go 188: Calico CNI releasing IP address ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.349 [INFO][4833] ipam_plugin.go 417: Releasing address using handleID ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.349 [INFO][4833] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.349 [INFO][4833] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.356 [WARNING][4833] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.356 [INFO][4833] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" HandleID="k8s-pod-network.b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-csi--node--driver--8fwf8-eth0" Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.357 [INFO][4833] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:32.362333 containerd[1494]: 2024-10-09 03:21:32.359 [INFO][4827] k8s.go 621: Teardown processing complete. ContainerID="b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f" Oct 9 03:21:32.362741 containerd[1494]: time="2024-10-09T03:21:32.362377133Z" level=info msg="TearDown network for sandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" successfully" Oct 9 03:21:32.365647 containerd[1494]: time="2024-10-09T03:21:32.365604753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 03:21:32.365723 containerd[1494]: time="2024-10-09T03:21:32.365689956Z" level=info msg="RemovePodSandbox \"b97ef5ca2ebe08bfe5f572b6426b58eac1f736fc44368161859f827df44cf14f\" returns successfully" Oct 9 03:21:34.911756 kubelet[2750]: I1009 03:21:34.911711 2750 topology_manager.go:215] "Topology Admit Handler" podUID="8a6e90dd-bd9d-43e3-bac3-c5191c9de756" podNamespace="calico-apiserver" podName="calico-apiserver-7c5c7c948-97swm" Oct 9 03:21:34.913050 kubelet[2750]: I1009 03:21:34.913017 2750 topology_manager.go:215] "Topology Admit Handler" podUID="1e46d77f-ec77-4eef-ac45-bf8ee1099bed" podNamespace="calico-apiserver" podName="calico-apiserver-7c5c7c948-z2427" Oct 9 03:21:34.927715 systemd[1]: Created slice kubepods-besteffort-pod8a6e90dd_bd9d_43e3_bac3_c5191c9de756.slice - libcontainer container kubepods-besteffort-pod8a6e90dd_bd9d_43e3_bac3_c5191c9de756.slice. Oct 9 03:21:34.940840 systemd[1]: Created slice kubepods-besteffort-pod1e46d77f_ec77_4eef_ac45_bf8ee1099bed.slice - libcontainer container kubepods-besteffort-pod1e46d77f_ec77_4eef_ac45_bf8ee1099bed.slice. 
Oct 9 03:21:35.004680 kubelet[2750]: I1009 03:21:35.004644 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2hfx\" (UniqueName: \"kubernetes.io/projected/8a6e90dd-bd9d-43e3-bac3-c5191c9de756-kube-api-access-z2hfx\") pod \"calico-apiserver-7c5c7c948-97swm\" (UID: \"8a6e90dd-bd9d-43e3-bac3-c5191c9de756\") " pod="calico-apiserver/calico-apiserver-7c5c7c948-97swm" Oct 9 03:21:35.005835 kubelet[2750]: I1009 03:21:35.005814 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a6e90dd-bd9d-43e3-bac3-c5191c9de756-calico-apiserver-certs\") pod \"calico-apiserver-7c5c7c948-97swm\" (UID: \"8a6e90dd-bd9d-43e3-bac3-c5191c9de756\") " pod="calico-apiserver/calico-apiserver-7c5c7c948-97swm" Oct 9 03:21:35.005895 kubelet[2750]: I1009 03:21:35.005864 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w48x\" (UniqueName: \"kubernetes.io/projected/1e46d77f-ec77-4eef-ac45-bf8ee1099bed-kube-api-access-5w48x\") pod \"calico-apiserver-7c5c7c948-z2427\" (UID: \"1e46d77f-ec77-4eef-ac45-bf8ee1099bed\") " pod="calico-apiserver/calico-apiserver-7c5c7c948-z2427" Oct 9 03:21:35.005925 kubelet[2750]: I1009 03:21:35.005904 2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e46d77f-ec77-4eef-ac45-bf8ee1099bed-calico-apiserver-certs\") pod \"calico-apiserver-7c5c7c948-z2427\" (UID: \"1e46d77f-ec77-4eef-ac45-bf8ee1099bed\") " pod="calico-apiserver/calico-apiserver-7c5c7c948-z2427" Oct 9 03:21:35.106741 kubelet[2750]: E1009 03:21:35.106381 2750 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 03:21:35.106741 kubelet[2750]: E1009 03:21:35.106486 2750 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e46d77f-ec77-4eef-ac45-bf8ee1099bed-calico-apiserver-certs podName:1e46d77f-ec77-4eef-ac45-bf8ee1099bed nodeName:}" failed. No retries permitted until 2024-10-09 03:21:35.606469025 +0000 UTC m=+63.965379763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1e46d77f-ec77-4eef-ac45-bf8ee1099bed-calico-apiserver-certs") pod "calico-apiserver-7c5c7c948-z2427" (UID: "1e46d77f-ec77-4eef-ac45-bf8ee1099bed") : secret "calico-apiserver-certs" not found Oct 9 03:21:35.106741 kubelet[2750]: E1009 03:21:35.106636 2750 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 03:21:35.106741 kubelet[2750]: E1009 03:21:35.106666 2750 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6e90dd-bd9d-43e3-bac3-c5191c9de756-calico-apiserver-certs podName:8a6e90dd-bd9d-43e3-bac3-c5191c9de756 nodeName:}" failed. No retries permitted until 2024-10-09 03:21:35.606656624 +0000 UTC m=+63.965567362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8a6e90dd-bd9d-43e3-bac3-c5191c9de756-calico-apiserver-certs") pod "calico-apiserver-7c5c7c948-97swm" (UID: "8a6e90dd-bd9d-43e3-bac3-c5191c9de756") : secret "calico-apiserver-certs" not found Oct 9 03:21:35.836758 containerd[1494]: time="2024-10-09T03:21:35.836650111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c7c948-97swm,Uid:8a6e90dd-bd9d-43e3-bac3-c5191c9de756,Namespace:calico-apiserver,Attempt:0,}" Oct 9 03:21:35.847323 containerd[1494]: time="2024-10-09T03:21:35.845778610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c7c948-z2427,Uid:1e46d77f-ec77-4eef-ac45-bf8ee1099bed,Namespace:calico-apiserver,Attempt:0,}" Oct 9 03:21:35.996854 systemd-networkd[1381]: calife084ca7ed7: Link UP Oct 9 03:21:35.997406 systemd-networkd[1381]: calife084ca7ed7: Gained carrier Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.894 [INFO][4867] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0 calico-apiserver-7c5c7c948- calico-apiserver 8a6e90dd-bd9d-43e3-bac3-c5191c9de756 875 0 2024-10-09 03:21:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c5c7c948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-d-cd8c2d08d9 calico-apiserver-7c5c7c948-97swm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calife084ca7ed7 [] []}} ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-" Oct 9 
03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.894 [INFO][4867] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.936 [INFO][4890] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" HandleID="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.948 [INFO][4890] ipam_plugin.go 270: Auto assigning IP ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" HandleID="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050b6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-d-cd8c2d08d9", "pod":"calico-apiserver-7c5c7c948-97swm", "timestamp":"2024-10-09 03:21:35.936886777 +0000 UTC"}, Hostname:"ci-4116-0-0-d-cd8c2d08d9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.948 [INFO][4890] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.948 [INFO][4890] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.948 [INFO][4890] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-d-cd8c2d08d9' Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.951 [INFO][4890] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.955 [INFO][4890] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.959 [INFO][4890] ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.960 [INFO][4890] ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.962 [INFO][4890] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.962 [INFO][4890] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.963 [INFO][4890] ipam.go 1685: Creating new handle: k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.967 [INFO][4890] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.978 [INFO][4890] ipam.go 1216: Successfully claimed IPs: [192.168.44.133/26] 
block=192.168.44.128/26 handle="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.978 [INFO][4890] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.133/26] handle="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.978 [INFO][4890] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 03:21:36.024959 containerd[1494]: 2024-10-09 03:21:35.978 [INFO][4890] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.133/26] IPv6=[] ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" HandleID="k8s-pod-network.8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.026779 containerd[1494]: 2024-10-09 03:21:35.986 [INFO][4867] k8s.go 386: Populated endpoint ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0", GenerateName:"calico-apiserver-7c5c7c948-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a6e90dd-bd9d-43e3-bac3-c5191c9de756", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c7c948", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"", Pod:"calico-apiserver-7c5c7c948-97swm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife084ca7ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:36.026779 containerd[1494]: 2024-10-09 03:21:35.987 [INFO][4867] k8s.go 387: Calico CNI using IPs: [192.168.44.133/32] ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.026779 containerd[1494]: 2024-10-09 03:21:35.987 [INFO][4867] dataplane_linux.go 68: Setting the host side veth name to calife084ca7ed7 ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.026779 containerd[1494]: 2024-10-09 03:21:35.997 [INFO][4867] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.026779 containerd[1494]: 2024-10-09 
03:21:35.998 [INFO][4867] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0", GenerateName:"calico-apiserver-7c5c7c948-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a6e90dd-bd9d-43e3-bac3-c5191c9de756", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c7c948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e", Pod:"calico-apiserver-7c5c7c948-97swm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife084ca7ed7", MAC:"9a:c4:a2:2a:b5:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:36.026779 containerd[1494]: 2024-10-09 03:21:36.009 [INFO][4867] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-97swm" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--97swm-eth0" Oct 9 03:21:36.073297 containerd[1494]: time="2024-10-09T03:21:36.073214750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:21:36.073787 containerd[1494]: time="2024-10-09T03:21:36.073747378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:21:36.074115 containerd[1494]: time="2024-10-09T03:21:36.073808545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:36.074115 containerd[1494]: time="2024-10-09T03:21:36.073990453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:36.100054 systemd-networkd[1381]: caliccca89113fc: Link UP Oct 9 03:21:36.103757 systemd-networkd[1381]: caliccca89113fc: Gained carrier Oct 9 03:21:36.104593 systemd[1]: Started cri-containerd-8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e.scope - libcontainer container 8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e. 
Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.903 [INFO][4875] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0 calico-apiserver-7c5c7c948- calico-apiserver 1e46d77f-ec77-4eef-ac45-bf8ee1099bed 877 0 2024-10-09 03:21:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c5c7c948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-d-cd8c2d08d9 calico-apiserver-7c5c7c948-z2427 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccca89113fc [] []}} ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.903 [INFO][4875] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.948 [INFO][4895] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" HandleID="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.956 [INFO][4895] ipam_plugin.go 270: Auto assigning IP ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" 
HandleID="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-d-cd8c2d08d9", "pod":"calico-apiserver-7c5c7c948-z2427", "timestamp":"2024-10-09 03:21:35.948322376 +0000 UTC"}, Hostname:"ci-4116-0-0-d-cd8c2d08d9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.956 [INFO][4895] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.981 [INFO][4895] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.981 [INFO][4895] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-d-cd8c2d08d9' Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:35.985 [INFO][4895] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.001 [INFO][4895] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.019 [INFO][4895] ipam.go 489: Trying affinity for 192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.032 [INFO][4895] ipam.go 155: Attempting to load block cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.048 [INFO][4895] ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.44.128/26 host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.048 [INFO][4895] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.128/26 handle="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.051 [INFO][4895] ipam.go 1685: Creating new handle: k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62 Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.058 [INFO][4895] ipam.go 1203: Writing block in order to claim IPs block=192.168.44.128/26 handle="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.074 [INFO][4895] ipam.go 1216: Successfully claimed IPs: [192.168.44.134/26] block=192.168.44.128/26 handle="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.074 [INFO][4895] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.134/26] handle="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" host="ci-4116-0-0-d-cd8c2d08d9" Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.074 [INFO][4895] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 03:21:36.125620 containerd[1494]: 2024-10-09 03:21:36.074 [INFO][4895] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.44.134/26] IPv6=[] ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" HandleID="k8s-pod-network.df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Workload="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.126857 containerd[1494]: 2024-10-09 03:21:36.084 [INFO][4875] k8s.go 386: Populated endpoint ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0", GenerateName:"calico-apiserver-7c5c7c948-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e46d77f-ec77-4eef-ac45-bf8ee1099bed", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c7c948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"", Pod:"calico-apiserver-7c5c7c948-z2427", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccca89113fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:36.126857 containerd[1494]: 2024-10-09 03:21:36.085 [INFO][4875] k8s.go 387: Calico CNI using IPs: [192.168.44.134/32] ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.126857 containerd[1494]: 2024-10-09 03:21:36.085 [INFO][4875] dataplane_linux.go 68: Setting the host side veth name to caliccca89113fc ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.126857 containerd[1494]: 2024-10-09 03:21:36.105 [INFO][4875] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.126857 containerd[1494]: 2024-10-09 03:21:36.109 [INFO][4875] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0", GenerateName:"calico-apiserver-7c5c7c948-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e46d77f-ec77-4eef-ac45-bf8ee1099bed", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 3, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c7c948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-d-cd8c2d08d9", ContainerID:"df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62", Pod:"calico-apiserver-7c5c7c948-z2427", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccca89113fc", MAC:"f2:0f:29:e7:92:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 03:21:36.126857 containerd[1494]: 2024-10-09 03:21:36.117 [INFO][4875] k8s.go 500: Wrote updated endpoint to datastore ContainerID="df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c7c948-z2427" WorkloadEndpoint="ci--4116--0--0--d--cd8c2d08d9-k8s-calico--apiserver--7c5c7c948--z2427-eth0" Oct 9 03:21:36.156254 containerd[1494]: time="2024-10-09T03:21:36.156176484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 03:21:36.156254 containerd[1494]: time="2024-10-09T03:21:36.156244253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 03:21:36.156254 containerd[1494]: time="2024-10-09T03:21:36.156258721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:36.156622 containerd[1494]: time="2024-10-09T03:21:36.156335847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 03:21:36.198825 containerd[1494]: time="2024-10-09T03:21:36.198607245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c7c948-97swm,Uid:8a6e90dd-bd9d-43e3-bac3-c5191c9de756,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e\"" Oct 9 03:21:36.199611 systemd[1]: Started cri-containerd-df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62.scope - libcontainer container df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62. 
Oct 9 03:21:36.202205 containerd[1494]: time="2024-10-09T03:21:36.201784907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 03:21:36.237364 containerd[1494]: time="2024-10-09T03:21:36.237281158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c7c948-z2427,Uid:1e46d77f-ec77-4eef-ac45-bf8ee1099bed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62\"" Oct 9 03:21:37.915640 systemd-networkd[1381]: caliccca89113fc: Gained IPv6LL Oct 9 03:21:37.979977 systemd-networkd[1381]: calife084ca7ed7: Gained IPv6LL Oct 9 03:21:39.042512 containerd[1494]: time="2024-10-09T03:21:39.042452956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:39.043621 containerd[1494]: time="2024-10-09T03:21:39.043460127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 03:21:39.044729 containerd[1494]: time="2024-10-09T03:21:39.044675356Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:39.046599 containerd[1494]: time="2024-10-09T03:21:39.046559692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:39.047466 containerd[1494]: time="2024-10-09T03:21:39.047070377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.845252297s" Oct 9 03:21:39.047466 containerd[1494]: time="2024-10-09T03:21:39.047094983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 03:21:39.048367 containerd[1494]: time="2024-10-09T03:21:39.047918274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 03:21:39.049624 containerd[1494]: time="2024-10-09T03:21:39.049601356Z" level=info msg="CreateContainer within sandbox \"8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 03:21:39.060282 containerd[1494]: time="2024-10-09T03:21:39.059516086Z" level=info msg="CreateContainer within sandbox \"8955319b937b6535e999430e94cf99c3a214cc1c203ac35238f3263a6ba6659e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"035ac7c703aa8aa5781ec044e1ec230660510c0a45fdd5cee4803c710012982a\"" Oct 9 03:21:39.066741 containerd[1494]: time="2024-10-09T03:21:39.066611677Z" level=info msg="StartContainer for \"035ac7c703aa8aa5781ec044e1ec230660510c0a45fdd5cee4803c710012982a\"" Oct 9 03:21:39.067048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195954637.mount: Deactivated successfully. Oct 9 03:21:39.103569 systemd[1]: Started cri-containerd-035ac7c703aa8aa5781ec044e1ec230660510c0a45fdd5cee4803c710012982a.scope - libcontainer container 035ac7c703aa8aa5781ec044e1ec230660510c0a45fdd5cee4803c710012982a. 
Oct 9 03:21:39.145765 containerd[1494]: time="2024-10-09T03:21:39.145708714Z" level=info msg="StartContainer for \"035ac7c703aa8aa5781ec044e1ec230660510c0a45fdd5cee4803c710012982a\" returns successfully" Oct 9 03:21:39.439531 containerd[1494]: time="2024-10-09T03:21:39.438822541Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 03:21:39.441090 containerd[1494]: time="2024-10-09T03:21:39.440330078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 03:21:39.442385 containerd[1494]: time="2024-10-09T03:21:39.442135595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 394.194986ms" Oct 9 03:21:39.442385 containerd[1494]: time="2024-10-09T03:21:39.442158998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 03:21:39.444656 containerd[1494]: time="2024-10-09T03:21:39.444546756Z" level=info msg="CreateContainer within sandbox \"df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 03:21:39.460079 containerd[1494]: time="2024-10-09T03:21:39.460042926Z" level=info msg="CreateContainer within sandbox \"df44d9f4870c0fd9a3458fbf534fae411c404331284c493b27fae7f3f8502c62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"33b688e5c941c639409e4b295c8fdbb98b9779dbd653549d5d75c65584f97b95\"" Oct 9 03:21:39.461506 containerd[1494]: 
time="2024-10-09T03:21:39.460663150Z" level=info msg="StartContainer for \"33b688e5c941c639409e4b295c8fdbb98b9779dbd653549d5d75c65584f97b95\"" Oct 9 03:21:39.489576 systemd[1]: Started cri-containerd-33b688e5c941c639409e4b295c8fdbb98b9779dbd653549d5d75c65584f97b95.scope - libcontainer container 33b688e5c941c639409e4b295c8fdbb98b9779dbd653549d5d75c65584f97b95. Oct 9 03:21:39.545323 containerd[1494]: time="2024-10-09T03:21:39.545236026Z" level=info msg="StartContainer for \"33b688e5c941c639409e4b295c8fdbb98b9779dbd653549d5d75c65584f97b95\" returns successfully" Oct 9 03:21:40.051234 kubelet[2750]: I1009 03:21:40.051208 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c5c7c948-97swm" podStartSLOduration=3.204243909 podStartE2EDuration="6.051175822s" podCreationTimestamp="2024-10-09 03:21:34 +0000 UTC" firstStartedPulling="2024-10-09 03:21:36.200831224 +0000 UTC m=+64.559741953" lastFinishedPulling="2024-10-09 03:21:39.047763138 +0000 UTC m=+67.406673866" observedRunningTime="2024-10-09 03:21:40.050958568 +0000 UTC m=+68.409869296" watchObservedRunningTime="2024-10-09 03:21:40.051175822 +0000 UTC m=+68.410086550" Oct 9 03:21:40.053323 kubelet[2750]: I1009 03:21:40.052680 2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c5c7c948-z2427" podStartSLOduration=2.849956379 podStartE2EDuration="6.052654353s" podCreationTimestamp="2024-10-09 03:21:34 +0000 UTC" firstStartedPulling="2024-10-09 03:21:36.239977741 +0000 UTC m=+64.598888469" lastFinishedPulling="2024-10-09 03:21:39.442675715 +0000 UTC m=+67.801586443" observedRunningTime="2024-10-09 03:21:40.040121527 +0000 UTC m=+68.399032255" watchObservedRunningTime="2024-10-09 03:21:40.052654353 +0000 UTC m=+68.411565081" Oct 9 03:22:00.724736 systemd[1]: run-containerd-runc-k8s.io-9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f-runc.P7v3Yl.mount: Deactivated successfully. 
Oct 9 03:22:03.518292 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.vL7Y7N.mount: Deactivated successfully. Oct 9 03:23:13.890708 systemd[1]: Started sshd@7-188.245.48.63:22-139.178.68.195:60306.service - OpenSSH per-connection server daemon (139.178.68.195:60306). Oct 9 03:23:14.906551 sshd[5353]: Accepted publickey for core from 139.178.68.195 port 60306 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:14.909358 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:14.916384 systemd-logind[1475]: New session 8 of user core. Oct 9 03:23:14.923584 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 03:23:15.982856 sshd[5353]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:15.988121 systemd[1]: sshd@7-188.245.48.63:22-139.178.68.195:60306.service: Deactivated successfully. Oct 9 03:23:15.990675 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 03:23:15.991464 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Oct 9 03:23:15.992586 systemd-logind[1475]: Removed session 8. Oct 9 03:23:21.152232 systemd[1]: Started sshd@8-188.245.48.63:22-139.178.68.195:35256.service - OpenSSH per-connection server daemon (139.178.68.195:35256). Oct 9 03:23:22.155674 sshd[5375]: Accepted publickey for core from 139.178.68.195 port 35256 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:22.161659 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:22.167250 systemd-logind[1475]: New session 9 of user core. Oct 9 03:23:22.170594 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 03:23:22.918855 sshd[5375]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:22.923275 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. 
Oct 9 03:23:22.923625 systemd[1]: sshd@8-188.245.48.63:22-139.178.68.195:35256.service: Deactivated successfully. Oct 9 03:23:22.925865 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 03:23:22.927390 systemd-logind[1475]: Removed session 9. Oct 9 03:23:28.092836 systemd[1]: Started sshd@9-188.245.48.63:22-139.178.68.195:35260.service - OpenSSH per-connection server daemon (139.178.68.195:35260). Oct 9 03:23:29.081005 sshd[5394]: Accepted publickey for core from 139.178.68.195 port 35260 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:29.082491 sshd[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:29.086485 systemd-logind[1475]: New session 10 of user core. Oct 9 03:23:29.092567 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 03:23:29.824332 sshd[5394]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:29.827747 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Oct 9 03:23:29.828613 systemd[1]: sshd@9-188.245.48.63:22-139.178.68.195:35260.service: Deactivated successfully. Oct 9 03:23:29.830991 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 03:23:29.831916 systemd-logind[1475]: Removed session 10. Oct 9 03:23:34.995521 systemd[1]: Started sshd@10-188.245.48.63:22-139.178.68.195:34986.service - OpenSSH per-connection server daemon (139.178.68.195:34986). Oct 9 03:23:36.009794 sshd[5454]: Accepted publickey for core from 139.178.68.195 port 34986 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:36.011474 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:36.016359 systemd-logind[1475]: New session 11 of user core. Oct 9 03:23:36.020564 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 9 03:23:36.785704 sshd[5454]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:36.790294 systemd[1]: sshd@10-188.245.48.63:22-139.178.68.195:34986.service: Deactivated successfully. Oct 9 03:23:36.793633 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 03:23:36.794782 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Oct 9 03:23:36.796509 systemd-logind[1475]: Removed session 11. Oct 9 03:23:41.956883 systemd[1]: Started sshd@11-188.245.48.63:22-139.178.68.195:42024.service - OpenSSH per-connection server daemon (139.178.68.195:42024). Oct 9 03:23:42.949222 sshd[5473]: Accepted publickey for core from 139.178.68.195 port 42024 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:42.951313 sshd[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:42.955901 systemd-logind[1475]: New session 12 of user core. Oct 9 03:23:42.961676 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 03:23:43.692608 sshd[5473]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:43.696499 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. Oct 9 03:23:43.697225 systemd[1]: sshd@11-188.245.48.63:22-139.178.68.195:42024.service: Deactivated successfully. Oct 9 03:23:43.699197 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 03:23:43.700194 systemd-logind[1475]: Removed session 12. Oct 9 03:23:48.865371 systemd[1]: Started sshd@12-188.245.48.63:22-139.178.68.195:42038.service - OpenSSH per-connection server daemon (139.178.68.195:42038). Oct 9 03:23:49.856857 sshd[5514]: Accepted publickey for core from 139.178.68.195 port 42038 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:49.859046 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:49.867113 systemd-logind[1475]: New session 13 of user core. 
Oct 9 03:23:49.869776 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 03:23:50.614263 sshd[5514]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:50.617154 systemd[1]: sshd@12-188.245.48.63:22-139.178.68.195:42038.service: Deactivated successfully. Oct 9 03:23:50.619251 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 03:23:50.620674 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Oct 9 03:23:50.621983 systemd-logind[1475]: Removed session 13. Oct 9 03:23:55.788185 systemd[1]: Started sshd@13-188.245.48.63:22-139.178.68.195:58960.service - OpenSSH per-connection server daemon (139.178.68.195:58960). Oct 9 03:23:56.786171 sshd[5534]: Accepted publickey for core from 139.178.68.195 port 58960 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:23:56.787816 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:23:56.792179 systemd-logind[1475]: New session 14 of user core. Oct 9 03:23:56.798565 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 03:23:57.531604 sshd[5534]: pam_unix(sshd:session): session closed for user core Oct 9 03:23:57.535863 systemd[1]: sshd@13-188.245.48.63:22-139.178.68.195:58960.service: Deactivated successfully. Oct 9 03:23:57.539193 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 03:23:57.540323 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Oct 9 03:23:57.541820 systemd-logind[1475]: Removed session 14. Oct 9 03:24:00.712378 systemd[1]: run-containerd-runc-k8s.io-9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f-runc.NF2Qm8.mount: Deactivated successfully. Oct 9 03:24:02.709700 systemd[1]: Started sshd@14-188.245.48.63:22-139.178.68.195:43594.service - OpenSSH per-connection server daemon (139.178.68.195:43594). 
Oct 9 03:24:03.494871 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.cjwW93.mount: Deactivated successfully. Oct 9 03:24:03.705463 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 43594 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:03.707466 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:03.717168 systemd-logind[1475]: New session 15 of user core. Oct 9 03:24:03.723638 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 03:24:04.466126 sshd[5575]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:04.470389 systemd[1]: sshd@14-188.245.48.63:22-139.178.68.195:43594.service: Deactivated successfully. Oct 9 03:24:04.472823 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 03:24:04.473965 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Oct 9 03:24:04.475217 systemd-logind[1475]: Removed session 15. Oct 9 03:24:09.637456 systemd[1]: Started sshd@15-188.245.48.63:22-139.178.68.195:43598.service - OpenSSH per-connection server daemon (139.178.68.195:43598). Oct 9 03:24:10.627400 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 43598 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:10.628881 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:10.633004 systemd-logind[1475]: New session 16 of user core. Oct 9 03:24:10.639631 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 03:24:11.370694 sshd[5615]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:11.374083 systemd[1]: sshd@15-188.245.48.63:22-139.178.68.195:43598.service: Deactivated successfully. Oct 9 03:24:11.376078 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 03:24:11.377320 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. 
Oct 9 03:24:11.378677 systemd-logind[1475]: Removed session 16. Oct 9 03:24:16.542585 systemd[1]: Started sshd@16-188.245.48.63:22-139.178.68.195:42622.service - OpenSSH per-connection server daemon (139.178.68.195:42622). Oct 9 03:24:17.537338 sshd[5633]: Accepted publickey for core from 139.178.68.195 port 42622 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:17.539392 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:17.544417 systemd-logind[1475]: New session 17 of user core. Oct 9 03:24:17.549596 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 03:24:18.299963 sshd[5633]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:18.304122 systemd[1]: sshd@16-188.245.48.63:22-139.178.68.195:42622.service: Deactivated successfully. Oct 9 03:24:18.306416 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 03:24:18.307230 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. Oct 9 03:24:18.308938 systemd-logind[1475]: Removed session 17. Oct 9 03:24:23.469345 systemd[1]: Started sshd@17-188.245.48.63:22-139.178.68.195:53074.service - OpenSSH per-connection server daemon (139.178.68.195:53074). Oct 9 03:24:24.469101 sshd[5664]: Accepted publickey for core from 139.178.68.195 port 53074 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:24.471251 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:24.477280 systemd-logind[1475]: New session 18 of user core. Oct 9 03:24:24.481593 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 03:24:25.233018 sshd[5664]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:25.237262 systemd[1]: sshd@17-188.245.48.63:22-139.178.68.195:53074.service: Deactivated successfully. Oct 9 03:24:25.239665 systemd[1]: session-18.scope: Deactivated successfully. 
Oct 9 03:24:25.240284 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. Oct 9 03:24:25.241307 systemd-logind[1475]: Removed session 18. Oct 9 03:24:30.405600 systemd[1]: Started sshd@18-188.245.48.63:22-139.178.68.195:53076.service - OpenSSH per-connection server daemon (139.178.68.195:53076). Oct 9 03:24:31.415188 sshd[5683]: Accepted publickey for core from 139.178.68.195 port 53076 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:31.417805 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:31.422196 systemd-logind[1475]: New session 19 of user core. Oct 9 03:24:31.425574 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 03:24:32.180335 sshd[5683]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:32.183015 systemd[1]: sshd@18-188.245.48.63:22-139.178.68.195:53076.service: Deactivated successfully. Oct 9 03:24:32.185372 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 03:24:32.186823 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. Oct 9 03:24:32.188650 systemd-logind[1475]: Removed session 19. Oct 9 03:24:33.502913 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.CzJ5w2.mount: Deactivated successfully. Oct 9 03:24:37.352634 systemd[1]: Started sshd@19-188.245.48.63:22-139.178.68.195:60124.service - OpenSSH per-connection server daemon (139.178.68.195:60124). Oct 9 03:24:38.352239 sshd[5740]: Accepted publickey for core from 139.178.68.195 port 60124 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:38.354410 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:38.359131 systemd-logind[1475]: New session 20 of user core. Oct 9 03:24:38.362604 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 9 03:24:39.109774 sshd[5740]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:39.113192 systemd[1]: sshd@19-188.245.48.63:22-139.178.68.195:60124.service: Deactivated successfully. Oct 9 03:24:39.115217 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 03:24:39.115916 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit. Oct 9 03:24:39.117274 systemd-logind[1475]: Removed session 20. Oct 9 03:24:44.284646 systemd[1]: Started sshd@20-188.245.48.63:22-139.178.68.195:42650.service - OpenSSH per-connection server daemon (139.178.68.195:42650). Oct 9 03:24:44.467870 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.qxTp8o.mount: Deactivated successfully. Oct 9 03:24:45.269815 sshd[5759]: Accepted publickey for core from 139.178.68.195 port 42650 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:45.272979 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:45.277327 systemd-logind[1475]: New session 21 of user core. Oct 9 03:24:45.282582 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 03:24:46.030963 sshd[5759]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:46.034948 systemd[1]: sshd@20-188.245.48.63:22-139.178.68.195:42650.service: Deactivated successfully. Oct 9 03:24:46.038342 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 03:24:46.039390 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit. Oct 9 03:24:46.040604 systemd-logind[1475]: Removed session 21. Oct 9 03:24:51.205510 systemd[1]: Started sshd@21-188.245.48.63:22-139.178.68.195:54112.service - OpenSSH per-connection server daemon (139.178.68.195:54112). 
Oct 9 03:24:52.227501 sshd[5801]: Accepted publickey for core from 139.178.68.195 port 54112 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:52.228914 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:52.233254 systemd-logind[1475]: New session 22 of user core. Oct 9 03:24:52.238600 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 03:24:52.992882 sshd[5801]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:52.996779 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit. Oct 9 03:24:52.997681 systemd[1]: sshd@21-188.245.48.63:22-139.178.68.195:54112.service: Deactivated successfully. Oct 9 03:24:53.000350 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 03:24:53.001401 systemd-logind[1475]: Removed session 22. Oct 9 03:24:58.165681 systemd[1]: Started sshd@22-188.245.48.63:22-139.178.68.195:54122.service - OpenSSH per-connection server daemon (139.178.68.195:54122). Oct 9 03:24:59.173504 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 54122 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:24:59.175204 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:24:59.180599 systemd-logind[1475]: New session 23 of user core. Oct 9 03:24:59.184583 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 03:24:59.948273 sshd[5815]: pam_unix(sshd:session): session closed for user core Oct 9 03:24:59.952223 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. Oct 9 03:24:59.953563 systemd[1]: sshd@22-188.245.48.63:22-139.178.68.195:54122.service: Deactivated successfully. Oct 9 03:24:59.955773 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 03:24:59.956986 systemd-logind[1475]: Removed session 23. 
Oct 9 03:25:00.711113 systemd[1]: run-containerd-runc-k8s.io-9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f-runc.tMUVTI.mount: Deactivated successfully. Oct 9 03:25:05.119519 systemd[1]: Started sshd@23-188.245.48.63:22-139.178.68.195:49808.service - OpenSSH per-connection server daemon (139.178.68.195:49808). Oct 9 03:25:06.113549 sshd[5879]: Accepted publickey for core from 139.178.68.195 port 49808 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:06.115902 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:06.120783 systemd-logind[1475]: New session 24 of user core. Oct 9 03:25:06.125576 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 03:25:06.877327 sshd[5879]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:06.882139 systemd[1]: sshd@23-188.245.48.63:22-139.178.68.195:49808.service: Deactivated successfully. Oct 9 03:25:06.884904 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 03:25:06.885910 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. Oct 9 03:25:06.887130 systemd-logind[1475]: Removed session 24. Oct 9 03:25:12.057844 systemd[1]: Started sshd@24-188.245.48.63:22-139.178.68.195:48876.service - OpenSSH per-connection server daemon (139.178.68.195:48876). Oct 9 03:25:13.088524 sshd[5898]: Accepted publickey for core from 139.178.68.195 port 48876 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:13.090213 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:13.095147 systemd-logind[1475]: New session 25 of user core. Oct 9 03:25:13.100566 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 03:25:13.870634 sshd[5898]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:13.874004 systemd[1]: sshd@24-188.245.48.63:22-139.178.68.195:48876.service: Deactivated successfully. 
Oct 9 03:25:13.876302 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 03:25:13.877936 systemd-logind[1475]: Session 25 logged out. Waiting for processes to exit. Oct 9 03:25:13.879236 systemd-logind[1475]: Removed session 25. Oct 9 03:25:19.047735 systemd[1]: Started sshd@25-188.245.48.63:22-139.178.68.195:48880.service - OpenSSH per-connection server daemon (139.178.68.195:48880). Oct 9 03:25:20.043846 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 48880 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:20.045477 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:20.050630 systemd-logind[1475]: New session 26 of user core. Oct 9 03:25:20.058568 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 03:25:20.793611 sshd[5914]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:20.796559 systemd[1]: sshd@25-188.245.48.63:22-139.178.68.195:48880.service: Deactivated successfully. Oct 9 03:25:20.799152 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 03:25:20.800512 systemd-logind[1475]: Session 26 logged out. Waiting for processes to exit. Oct 9 03:25:20.801759 systemd-logind[1475]: Removed session 26. Oct 9 03:25:25.968062 systemd[1]: Started sshd@26-188.245.48.63:22-139.178.68.195:48810.service - OpenSSH per-connection server daemon (139.178.68.195:48810). Oct 9 03:25:26.969639 sshd[5933]: Accepted publickey for core from 139.178.68.195 port 48810 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:26.970354 sshd[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:26.978221 systemd-logind[1475]: New session 27 of user core. Oct 9 03:25:26.982610 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 9 03:25:27.712987 sshd[5933]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:27.716523 systemd-logind[1475]: Session 27 logged out. Waiting for processes to exit. Oct 9 03:25:27.717386 systemd[1]: sshd@26-188.245.48.63:22-139.178.68.195:48810.service: Deactivated successfully. Oct 9 03:25:27.719409 systemd[1]: session-27.scope: Deactivated successfully. Oct 9 03:25:27.720455 systemd-logind[1475]: Removed session 27. Oct 9 03:25:32.893060 systemd[1]: Started sshd@27-188.245.48.63:22-139.178.68.195:36654.service - OpenSSH per-connection server daemon (139.178.68.195:36654). Oct 9 03:25:33.902017 sshd[5979]: Accepted publickey for core from 139.178.68.195 port 36654 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:33.904016 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:33.908218 systemd-logind[1475]: New session 28 of user core. Oct 9 03:25:33.913628 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 9 03:25:34.677267 sshd[5979]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:34.682477 systemd[1]: sshd@27-188.245.48.63:22-139.178.68.195:36654.service: Deactivated successfully. Oct 9 03:25:34.684729 systemd[1]: session-28.scope: Deactivated successfully. Oct 9 03:25:34.685766 systemd-logind[1475]: Session 28 logged out. Waiting for processes to exit. Oct 9 03:25:34.687655 systemd-logind[1475]: Removed session 28. Oct 9 03:25:39.847472 systemd[1]: Started sshd@28-188.245.48.63:22-139.178.68.195:36660.service - OpenSSH per-connection server daemon (139.178.68.195:36660). Oct 9 03:25:40.838958 sshd[6012]: Accepted publickey for core from 139.178.68.195 port 36660 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:40.840490 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:40.844177 systemd-logind[1475]: New session 29 of user core. 
Oct 9 03:25:40.849571 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 9 03:25:41.611636 sshd[6012]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:41.614714 systemd[1]: sshd@28-188.245.48.63:22-139.178.68.195:36660.service: Deactivated successfully. Oct 9 03:25:41.616832 systemd[1]: session-29.scope: Deactivated successfully. Oct 9 03:25:41.618349 systemd-logind[1475]: Session 29 logged out. Waiting for processes to exit. Oct 9 03:25:41.619491 systemd-logind[1475]: Removed session 29. Oct 9 03:25:46.788686 systemd[1]: Started sshd@29-188.245.48.63:22-139.178.68.195:44166.service - OpenSSH per-connection server daemon (139.178.68.195:44166). Oct 9 03:25:47.795296 sshd[6052]: Accepted publickey for core from 139.178.68.195 port 44166 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:47.797224 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:47.802765 systemd-logind[1475]: New session 30 of user core. Oct 9 03:25:47.808632 systemd[1]: Started session-30.scope - Session 30 of User core. Oct 9 03:25:48.563844 sshd[6052]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:48.566427 systemd[1]: sshd@29-188.245.48.63:22-139.178.68.195:44166.service: Deactivated successfully. Oct 9 03:25:48.568855 systemd[1]: session-30.scope: Deactivated successfully. Oct 9 03:25:48.570543 systemd-logind[1475]: Session 30 logged out. Waiting for processes to exit. Oct 9 03:25:48.572356 systemd-logind[1475]: Removed session 30. Oct 9 03:25:53.746952 systemd[1]: Started sshd@30-188.245.48.63:22-139.178.68.195:33576.service - OpenSSH per-connection server daemon (139.178.68.195:33576). 
Oct 9 03:25:54.735384 sshd[6071]: Accepted publickey for core from 139.178.68.195 port 33576 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:25:54.736985 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:25:54.741312 systemd-logind[1475]: New session 31 of user core. Oct 9 03:25:54.745565 systemd[1]: Started session-31.scope - Session 31 of User core. Oct 9 03:25:55.483354 sshd[6071]: pam_unix(sshd:session): session closed for user core Oct 9 03:25:55.488871 systemd-logind[1475]: Session 31 logged out. Waiting for processes to exit. Oct 9 03:25:55.489562 systemd[1]: sshd@30-188.245.48.63:22-139.178.68.195:33576.service: Deactivated successfully. Oct 9 03:25:55.492238 systemd[1]: session-31.scope: Deactivated successfully. Oct 9 03:25:55.493598 systemd-logind[1475]: Removed session 31. Oct 9 03:26:00.658130 systemd[1]: Started sshd@31-188.245.48.63:22-139.178.68.195:33584.service - OpenSSH per-connection server daemon (139.178.68.195:33584). Oct 9 03:26:01.665541 sshd[6097]: Accepted publickey for core from 139.178.68.195 port 33584 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:01.668225 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:01.673349 systemd-logind[1475]: New session 32 of user core. Oct 9 03:26:01.678620 systemd[1]: Started session-32.scope - Session 32 of User core. Oct 9 03:26:02.463130 sshd[6097]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:02.466004 systemd[1]: sshd@31-188.245.48.63:22-139.178.68.195:33584.service: Deactivated successfully. Oct 9 03:26:02.468422 systemd[1]: session-32.scope: Deactivated successfully. Oct 9 03:26:02.470007 systemd-logind[1475]: Session 32 logged out. Waiting for processes to exit. Oct 9 03:26:02.471941 systemd-logind[1475]: Removed session 32. 
Oct 9 03:26:03.495516 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.8sooOS.mount: Deactivated successfully. Oct 9 03:26:07.636293 systemd[1]: Started sshd@32-188.245.48.63:22-139.178.68.195:47176.service - OpenSSH per-connection server daemon (139.178.68.195:47176). Oct 9 03:26:08.635326 sshd[6159]: Accepted publickey for core from 139.178.68.195 port 47176 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:08.637010 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:08.642148 systemd-logind[1475]: New session 33 of user core. Oct 9 03:26:08.648597 systemd[1]: Started session-33.scope - Session 33 of User core. Oct 9 03:26:09.386258 sshd[6159]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:09.390412 systemd[1]: sshd@32-188.245.48.63:22-139.178.68.195:47176.service: Deactivated successfully. Oct 9 03:26:09.392730 systemd[1]: session-33.scope: Deactivated successfully. Oct 9 03:26:09.393907 systemd-logind[1475]: Session 33 logged out. Waiting for processes to exit. Oct 9 03:26:09.395388 systemd-logind[1475]: Removed session 33. Oct 9 03:26:14.566698 systemd[1]: Started sshd@33-188.245.48.63:22-139.178.68.195:52782.service - OpenSSH per-connection server daemon (139.178.68.195:52782). Oct 9 03:26:15.559711 sshd[6178]: Accepted publickey for core from 139.178.68.195 port 52782 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:15.561259 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:15.565316 systemd-logind[1475]: New session 34 of user core. Oct 9 03:26:15.571567 systemd[1]: Started session-34.scope - Session 34 of User core. Oct 9 03:26:16.316223 sshd[6178]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:16.320227 systemd-logind[1475]: Session 34 logged out. Waiting for processes to exit. 
Oct 9 03:26:16.321166 systemd[1]: sshd@33-188.245.48.63:22-139.178.68.195:52782.service: Deactivated successfully. Oct 9 03:26:16.323360 systemd[1]: session-34.scope: Deactivated successfully. Oct 9 03:26:16.324384 systemd-logind[1475]: Removed session 34. Oct 9 03:26:21.487573 systemd[1]: Started sshd@34-188.245.48.63:22-139.178.68.195:39158.service - OpenSSH per-connection server daemon (139.178.68.195:39158). Oct 9 03:26:22.481460 sshd[6194]: Accepted publickey for core from 139.178.68.195 port 39158 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:22.483035 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:22.486928 systemd-logind[1475]: New session 35 of user core. Oct 9 03:26:22.491566 systemd[1]: Started session-35.scope - Session 35 of User core. Oct 9 03:26:23.254397 sshd[6194]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:23.260860 systemd[1]: sshd@34-188.245.48.63:22-139.178.68.195:39158.service: Deactivated successfully. Oct 9 03:26:23.263182 systemd[1]: session-35.scope: Deactivated successfully. Oct 9 03:26:23.265044 systemd-logind[1475]: Session 35 logged out. Waiting for processes to exit. Oct 9 03:26:23.266422 systemd-logind[1475]: Removed session 35. Oct 9 03:26:28.428666 systemd[1]: Started sshd@35-188.245.48.63:22-139.178.68.195:39168.service - OpenSSH per-connection server daemon (139.178.68.195:39168). Oct 9 03:26:29.433312 sshd[6213]: Accepted publickey for core from 139.178.68.195 port 39168 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:29.435071 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:29.440036 systemd-logind[1475]: New session 36 of user core. Oct 9 03:26:29.444612 systemd[1]: Started session-36.scope - Session 36 of User core. 
Oct 9 03:26:30.195466 sshd[6213]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:30.199193 systemd[1]: sshd@35-188.245.48.63:22-139.178.68.195:39168.service: Deactivated successfully. Oct 9 03:26:30.201984 systemd[1]: session-36.scope: Deactivated successfully. Oct 9 03:26:30.204168 systemd-logind[1475]: Session 36 logged out. Waiting for processes to exit. Oct 9 03:26:30.205362 systemd-logind[1475]: Removed session 36. Oct 9 03:26:35.375738 systemd[1]: Started sshd@36-188.245.48.63:22-139.178.68.195:47708.service - OpenSSH per-connection server daemon (139.178.68.195:47708). Oct 9 03:26:36.397357 sshd[6281]: Accepted publickey for core from 139.178.68.195 port 47708 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:36.399383 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:36.405228 systemd-logind[1475]: New session 37 of user core. Oct 9 03:26:36.413594 systemd[1]: Started session-37.scope - Session 37 of User core. Oct 9 03:26:37.170903 sshd[6281]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:37.176136 systemd-logind[1475]: Session 37 logged out. Waiting for processes to exit. Oct 9 03:26:37.176642 systemd[1]: sshd@36-188.245.48.63:22-139.178.68.195:47708.service: Deactivated successfully. Oct 9 03:26:37.179117 systemd[1]: session-37.scope: Deactivated successfully. Oct 9 03:26:37.180299 systemd-logind[1475]: Removed session 37. Oct 9 03:26:42.347017 systemd[1]: Started sshd@37-188.245.48.63:22-139.178.68.195:49636.service - OpenSSH per-connection server daemon (139.178.68.195:49636). Oct 9 03:26:43.357073 sshd[6295]: Accepted publickey for core from 139.178.68.195 port 49636 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:43.358638 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:43.363179 systemd-logind[1475]: New session 38 of user core. 
Oct 9 03:26:43.372554 systemd[1]: Started session-38.scope - Session 38 of User core. Oct 9 03:26:44.115816 sshd[6295]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:44.120541 systemd[1]: sshd@37-188.245.48.63:22-139.178.68.195:49636.service: Deactivated successfully. Oct 9 03:26:44.122600 systemd[1]: session-38.scope: Deactivated successfully. Oct 9 03:26:44.123307 systemd-logind[1475]: Session 38 logged out. Waiting for processes to exit. Oct 9 03:26:44.124380 systemd-logind[1475]: Removed session 38. Oct 9 03:26:49.293703 systemd[1]: Started sshd@38-188.245.48.63:22-139.178.68.195:49646.service - OpenSSH per-connection server daemon (139.178.68.195:49646). Oct 9 03:26:50.286976 sshd[6335]: Accepted publickey for core from 139.178.68.195 port 49646 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:50.288884 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:50.293423 systemd-logind[1475]: New session 39 of user core. Oct 9 03:26:50.297558 systemd[1]: Started session-39.scope - Session 39 of User core. Oct 9 03:26:51.045679 sshd[6335]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:51.048733 systemd[1]: sshd@38-188.245.48.63:22-139.178.68.195:49646.service: Deactivated successfully. Oct 9 03:26:51.051336 systemd[1]: session-39.scope: Deactivated successfully. Oct 9 03:26:51.053295 systemd-logind[1475]: Session 39 logged out. Waiting for processes to exit. Oct 9 03:26:51.055236 systemd-logind[1475]: Removed session 39. Oct 9 03:26:56.218672 systemd[1]: Started sshd@39-188.245.48.63:22-139.178.68.195:46134.service - OpenSSH per-connection server daemon (139.178.68.195:46134). 
Oct 9 03:26:57.216224 sshd[6356]: Accepted publickey for core from 139.178.68.195 port 46134 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:26:57.218122 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:26:57.223160 systemd-logind[1475]: New session 40 of user core. Oct 9 03:26:57.227555 systemd[1]: Started session-40.scope - Session 40 of User core. Oct 9 03:26:57.967030 sshd[6356]: pam_unix(sshd:session): session closed for user core Oct 9 03:26:57.971344 systemd[1]: sshd@39-188.245.48.63:22-139.178.68.195:46134.service: Deactivated successfully. Oct 9 03:26:57.974123 systemd[1]: session-40.scope: Deactivated successfully. Oct 9 03:26:57.974857 systemd-logind[1475]: Session 40 logged out. Waiting for processes to exit. Oct 9 03:26:57.976567 systemd-logind[1475]: Removed session 40. Oct 9 03:27:03.138382 systemd[1]: Started sshd@40-188.245.48.63:22-139.178.68.195:49704.service - OpenSSH per-connection server daemon (139.178.68.195:49704). Oct 9 03:27:04.132792 sshd[6392]: Accepted publickey for core from 139.178.68.195 port 49704 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:04.135083 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:04.139816 systemd-logind[1475]: New session 41 of user core. Oct 9 03:27:04.146669 systemd[1]: Started session-41.scope - Session 41 of User core. Oct 9 03:27:04.886129 sshd[6392]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:04.890503 systemd-logind[1475]: Session 41 logged out. Waiting for processes to exit. Oct 9 03:27:04.891369 systemd[1]: sshd@40-188.245.48.63:22-139.178.68.195:49704.service: Deactivated successfully. Oct 9 03:27:04.893968 systemd[1]: session-41.scope: Deactivated successfully. Oct 9 03:27:04.894986 systemd-logind[1475]: Removed session 41. 
Oct 9 03:27:10.079675 systemd[1]: Started sshd@41-188.245.48.63:22-139.178.68.195:49720.service - OpenSSH per-connection server daemon (139.178.68.195:49720). Oct 9 03:27:11.113934 sshd[6430]: Accepted publickey for core from 139.178.68.195 port 49720 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:11.115592 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:11.120528 systemd-logind[1475]: New session 42 of user core. Oct 9 03:27:11.127544 systemd[1]: Started session-42.scope - Session 42 of User core. Oct 9 03:27:11.887054 sshd[6430]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:11.891582 systemd-logind[1475]: Session 42 logged out. Waiting for processes to exit. Oct 9 03:27:11.892112 systemd[1]: sshd@41-188.245.48.63:22-139.178.68.195:49720.service: Deactivated successfully. Oct 9 03:27:11.895090 systemd[1]: session-42.scope: Deactivated successfully. Oct 9 03:27:11.896209 systemd-logind[1475]: Removed session 42. Oct 9 03:27:17.078663 systemd[1]: Started sshd@42-188.245.48.63:22-139.178.68.195:55354.service - OpenSSH per-connection server daemon (139.178.68.195:55354). Oct 9 03:27:18.174249 sshd[6452]: Accepted publickey for core from 139.178.68.195 port 55354 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:18.176019 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:18.180885 systemd-logind[1475]: New session 43 of user core. Oct 9 03:27:18.187615 systemd[1]: Started session-43.scope - Session 43 of User core. Oct 9 03:27:19.001525 sshd[6452]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:19.006211 systemd[1]: sshd@42-188.245.48.63:22-139.178.68.195:55354.service: Deactivated successfully. Oct 9 03:27:19.008226 systemd[1]: session-43.scope: Deactivated successfully. Oct 9 03:27:19.008967 systemd-logind[1475]: Session 43 logged out. Waiting for processes to exit. 
Oct 9 03:27:19.010489 systemd-logind[1475]: Removed session 43. Oct 9 03:27:19.173691 systemd[1]: Started sshd@43-188.245.48.63:22-139.178.68.195:55356.service - OpenSSH per-connection server daemon (139.178.68.195:55356). Oct 9 03:27:20.171374 sshd[6467]: Accepted publickey for core from 139.178.68.195 port 55356 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:20.173084 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:20.177419 systemd-logind[1475]: New session 44 of user core. Oct 9 03:27:20.184584 systemd[1]: Started session-44.scope - Session 44 of User core. Oct 9 03:27:20.993610 sshd[6467]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:20.999465 systemd[1]: sshd@43-188.245.48.63:22-139.178.68.195:55356.service: Deactivated successfully. Oct 9 03:27:21.001829 systemd[1]: session-44.scope: Deactivated successfully. Oct 9 03:27:21.002648 systemd-logind[1475]: Session 44 logged out. Waiting for processes to exit. Oct 9 03:27:21.003783 systemd-logind[1475]: Removed session 44. Oct 9 03:27:21.171933 systemd[1]: Started sshd@44-188.245.48.63:22-139.178.68.195:58322.service - OpenSSH per-connection server daemon (139.178.68.195:58322). Oct 9 03:27:22.167928 sshd[6478]: Accepted publickey for core from 139.178.68.195 port 58322 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:22.170172 sshd[6478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:22.175576 systemd-logind[1475]: New session 45 of user core. Oct 9 03:27:22.182578 systemd[1]: Started session-45.scope - Session 45 of User core. Oct 9 03:27:22.936847 sshd[6478]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:22.945612 systemd[1]: sshd@44-188.245.48.63:22-139.178.68.195:58322.service: Deactivated successfully. Oct 9 03:27:22.948350 systemd[1]: session-45.scope: Deactivated successfully. 
Oct 9 03:27:22.949624 systemd-logind[1475]: Session 45 logged out. Waiting for processes to exit. Oct 9 03:27:22.950909 systemd-logind[1475]: Removed session 45. Oct 9 03:27:28.108670 systemd[1]: Started sshd@45-188.245.48.63:22-139.178.68.195:58334.service - OpenSSH per-connection server daemon (139.178.68.195:58334). Oct 9 03:27:29.100816 sshd[6497]: Accepted publickey for core from 139.178.68.195 port 58334 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:29.103218 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:29.108558 systemd-logind[1475]: New session 46 of user core. Oct 9 03:27:29.113617 systemd[1]: Started session-46.scope - Session 46 of User core. Oct 9 03:27:29.866236 sshd[6497]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:29.869039 systemd[1]: sshd@45-188.245.48.63:22-139.178.68.195:58334.service: Deactivated successfully. Oct 9 03:27:29.871865 systemd[1]: session-46.scope: Deactivated successfully. Oct 9 03:27:29.873982 systemd-logind[1475]: Session 46 logged out. Waiting for processes to exit. Oct 9 03:27:29.875287 systemd-logind[1475]: Removed session 46. Oct 9 03:27:35.042734 systemd[1]: Started sshd@46-188.245.48.63:22-139.178.68.195:57088.service - OpenSSH per-connection server daemon (139.178.68.195:57088). Oct 9 03:27:36.048478 sshd[6567]: Accepted publickey for core from 139.178.68.195 port 57088 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:36.050867 sshd[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:36.055320 systemd-logind[1475]: New session 47 of user core. Oct 9 03:27:36.058555 systemd[1]: Started session-47.scope - Session 47 of User core. Oct 9 03:27:36.818866 sshd[6567]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:36.823124 systemd[1]: sshd@46-188.245.48.63:22-139.178.68.195:57088.service: Deactivated successfully. 
Oct 9 03:27:36.825992 systemd[1]: session-47.scope: Deactivated successfully. Oct 9 03:27:36.826716 systemd-logind[1475]: Session 47 logged out. Waiting for processes to exit. Oct 9 03:27:36.827715 systemd-logind[1475]: Removed session 47. Oct 9 03:27:41.997986 systemd[1]: Started sshd@47-188.245.48.63:22-139.178.68.195:36322.service - OpenSSH per-connection server daemon (139.178.68.195:36322). Oct 9 03:27:43.016174 sshd[6588]: Accepted publickey for core from 139.178.68.195 port 36322 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:43.017801 sshd[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:43.022530 systemd-logind[1475]: New session 48 of user core. Oct 9 03:27:43.029602 systemd[1]: Started session-48.scope - Session 48 of User core. Oct 9 03:27:43.793145 sshd[6588]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:43.798339 systemd-logind[1475]: Session 48 logged out. Waiting for processes to exit. Oct 9 03:27:43.798998 systemd[1]: sshd@47-188.245.48.63:22-139.178.68.195:36322.service: Deactivated successfully. Oct 9 03:27:43.800970 systemd[1]: session-48.scope: Deactivated successfully. Oct 9 03:27:43.802387 systemd-logind[1475]: Removed session 48. Oct 9 03:27:45.708654 update_engine[1477]: I20241009 03:27:45.708586 1477 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 9 03:27:45.708654 update_engine[1477]: I20241009 03:27:45.708647 1477 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 9 03:27:45.709942 update_engine[1477]: I20241009 03:27:45.709910 1477 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 9 03:27:45.710695 update_engine[1477]: I20241009 03:27:45.710667 1477 omaha_request_params.cc:62] Current group set to alpha Oct 9 03:27:45.710919 update_engine[1477]: I20241009 03:27:45.710800 1477 update_attempter.cc:499] Already updated boot flags. 
Skipping. Oct 9 03:27:45.710919 update_engine[1477]: I20241009 03:27:45.710816 1477 update_attempter.cc:643] Scheduling an action processor start. Oct 9 03:27:45.710919 update_engine[1477]: I20241009 03:27:45.710834 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 03:27:45.710919 update_engine[1477]: I20241009 03:27:45.710873 1477 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 9 03:27:45.711015 update_engine[1477]: I20241009 03:27:45.710937 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 03:27:45.711015 update_engine[1477]: I20241009 03:27:45.710948 1477 omaha_request_action.cc:272] Request: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: Oct 9 03:27:45.711015 update_engine[1477]: I20241009 03:27:45.710956 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 03:27:45.725089 update_engine[1477]: I20241009 03:27:45.724810 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 03:27:45.725215 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 9 03:27:45.725745 update_engine[1477]: I20241009 03:27:45.725106 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 03:27:45.726736 update_engine[1477]: E20241009 03:27:45.725881 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 03:27:45.726736 update_engine[1477]: I20241009 03:27:45.725960 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 9 03:27:48.967977 systemd[1]: Started sshd@48-188.245.48.63:22-139.178.68.195:36332.service - OpenSSH per-connection server daemon (139.178.68.195:36332). Oct 9 03:27:49.978833 sshd[6629]: Accepted publickey for core from 139.178.68.195 port 36332 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:49.981525 sshd[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:49.985809 systemd-logind[1475]: New session 49 of user core. Oct 9 03:27:49.993553 systemd[1]: Started session-49.scope - Session 49 of User core. Oct 9 03:27:50.819877 sshd[6629]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:50.825281 systemd-logind[1475]: Session 49 logged out. Waiting for processes to exit. Oct 9 03:27:50.826018 systemd[1]: sshd@48-188.245.48.63:22-139.178.68.195:36332.service: Deactivated successfully. Oct 9 03:27:50.827880 systemd[1]: session-49.scope: Deactivated successfully. Oct 9 03:27:50.828669 systemd-logind[1475]: Removed session 49. Oct 9 03:27:55.682052 update_engine[1477]: I20241009 03:27:55.681977 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 03:27:55.682478 update_engine[1477]: I20241009 03:27:55.682200 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 03:27:55.682478 update_engine[1477]: I20241009 03:27:55.682386 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 03:27:55.682998 update_engine[1477]: E20241009 03:27:55.682966 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 03:27:55.683047 update_engine[1477]: I20241009 03:27:55.683013 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 9 03:27:55.989403 systemd[1]: Started sshd@49-188.245.48.63:22-139.178.68.195:46170.service - OpenSSH per-connection server daemon (139.178.68.195:46170). Oct 9 03:27:56.990606 sshd[6643]: Accepted publickey for core from 139.178.68.195 port 46170 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:27:56.992598 sshd[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:27:56.998591 systemd-logind[1475]: New session 50 of user core. Oct 9 03:27:57.003613 systemd[1]: Started session-50.scope - Session 50 of User core. Oct 9 03:27:57.749108 sshd[6643]: pam_unix(sshd:session): session closed for user core Oct 9 03:27:57.753551 systemd[1]: sshd@49-188.245.48.63:22-139.178.68.195:46170.service: Deactivated successfully. Oct 9 03:27:57.756280 systemd[1]: session-50.scope: Deactivated successfully. Oct 9 03:27:57.757483 systemd-logind[1475]: Session 50 logged out. Waiting for processes to exit. Oct 9 03:27:57.759854 systemd-logind[1475]: Removed session 50. Oct 9 03:28:02.927767 systemd[1]: Started sshd@50-188.245.48.63:22-139.178.68.195:58634.service - OpenSSH per-connection server daemon (139.178.68.195:58634). Oct 9 03:28:03.949606 sshd[6683]: Accepted publickey for core from 139.178.68.195 port 58634 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:28:03.952318 sshd[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:28:03.957801 systemd-logind[1475]: New session 51 of user core. Oct 9 03:28:03.962558 systemd[1]: Started session-51.scope - Session 51 of User core. 
Oct 9 03:28:04.800652 sshd[6683]: pam_unix(sshd:session): session closed for user core Oct 9 03:28:04.805410 systemd[1]: sshd@50-188.245.48.63:22-139.178.68.195:58634.service: Deactivated successfully. Oct 9 03:28:04.807878 systemd[1]: session-51.scope: Deactivated successfully. Oct 9 03:28:04.808943 systemd-logind[1475]: Session 51 logged out. Waiting for processes to exit. Oct 9 03:28:04.810546 systemd-logind[1475]: Removed session 51. Oct 9 03:28:05.678495 update_engine[1477]: I20241009 03:28:05.678388 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 03:28:05.678898 update_engine[1477]: I20241009 03:28:05.678771 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 03:28:05.679074 update_engine[1477]: I20241009 03:28:05.679037 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 03:28:05.679733 update_engine[1477]: E20241009 03:28:05.679688 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 03:28:05.679793 update_engine[1477]: I20241009 03:28:05.679752 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 9 03:28:09.982877 systemd[1]: Started sshd@51-188.245.48.63:22-139.178.68.195:58642.service - OpenSSH per-connection server daemon (139.178.68.195:58642). Oct 9 03:28:10.999515 sshd[6721]: Accepted publickey for core from 139.178.68.195 port 58642 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:28:11.001940 sshd[6721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:28:11.006852 systemd-logind[1475]: New session 52 of user core. Oct 9 03:28:11.011572 systemd[1]: Started session-52.scope - Session 52 of User core. Oct 9 03:28:11.804255 sshd[6721]: pam_unix(sshd:session): session closed for user core Oct 9 03:28:11.808391 systemd[1]: sshd@51-188.245.48.63:22-139.178.68.195:58642.service: Deactivated successfully. 
Oct 9 03:28:11.810657 systemd[1]: session-52.scope: Deactivated successfully. Oct 9 03:28:11.811860 systemd-logind[1475]: Session 52 logged out. Waiting for processes to exit. Oct 9 03:28:11.813013 systemd-logind[1475]: Removed session 52. Oct 9 03:28:15.678120 update_engine[1477]: I20241009 03:28:15.678049 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 03:28:15.678606 update_engine[1477]: I20241009 03:28:15.678277 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 03:28:15.678606 update_engine[1477]: I20241009 03:28:15.678496 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 03:28:15.679186 update_engine[1477]: E20241009 03:28:15.679157 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 03:28:15.679246 update_engine[1477]: I20241009 03:28:15.679203 1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 03:28:15.679246 update_engine[1477]: I20241009 03:28:15.679212 1477 omaha_request_action.cc:617] Omaha request response: Oct 9 03:28:15.679313 update_engine[1477]: E20241009 03:28:15.679286 1477 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 9 03:28:15.679341 update_engine[1477]: I20241009 03:28:15.679312 1477 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 9 03:28:15.679341 update_engine[1477]: I20241009 03:28:15.679319 1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 03:28:15.679341 update_engine[1477]: I20241009 03:28:15.679324 1477 update_attempter.cc:306] Processing Done. Oct 9 03:28:15.679537 update_engine[1477]: E20241009 03:28:15.679340 1477 update_attempter.cc:619] Update failed. 
Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680500 1477 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680518 1477 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680526 1477 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680591 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680611 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680617 1477 omaha_request_action.cc:272] Request: Oct 9 03:28:15.681510 update_engine[1477]: Oct 9 03:28:15.681510 update_engine[1477]: Oct 9 03:28:15.681510 update_engine[1477]: Oct 9 03:28:15.681510 update_engine[1477]: Oct 9 03:28:15.681510 update_engine[1477]: Oct 9 03:28:15.681510 update_engine[1477]: Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680624 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680748 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 03:28:15.681510 update_engine[1477]: I20241009 03:28:15.680902 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 03:28:15.681916 update_engine[1477]: E20241009 03:28:15.681824 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681860 1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681869 1477 omaha_request_action.cc:617] Omaha request response: Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681875 1477 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681882 1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681890 1477 update_attempter.cc:306] Processing Done. Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681895 1477 update_attempter.cc:310] Error event sent. Oct 9 03:28:15.681916 update_engine[1477]: I20241009 03:28:15.681905 1477 update_check_scheduler.cc:74] Next update check in 42m1s Oct 9 03:28:15.682090 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 9 03:28:15.682338 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 9 03:28:16.986719 systemd[1]: Started sshd@52-188.245.48.63:22-139.178.68.195:35344.service - OpenSSH per-connection server daemon (139.178.68.195:35344). Oct 9 03:28:18.037584 sshd[6736]: Accepted publickey for core from 139.178.68.195 port 35344 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:28:18.039383 sshd[6736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:28:18.044659 systemd-logind[1475]: New session 53 of user core. 
Oct 9 03:28:18.049565 systemd[1]: Started session-53.scope - Session 53 of User core. Oct 9 03:28:18.834519 sshd[6736]: pam_unix(sshd:session): session closed for user core Oct 9 03:28:18.837627 systemd[1]: sshd@52-188.245.48.63:22-139.178.68.195:35344.service: Deactivated successfully. Oct 9 03:28:18.840045 systemd[1]: session-53.scope: Deactivated successfully. Oct 9 03:28:18.841372 systemd-logind[1475]: Session 53 logged out. Waiting for processes to exit. Oct 9 03:28:18.842607 systemd-logind[1475]: Removed session 53. Oct 9 03:28:24.013497 systemd[1]: Started sshd@53-188.245.48.63:22-139.178.68.195:32916.service - OpenSSH per-connection server daemon (139.178.68.195:32916). Oct 9 03:28:25.043860 sshd[6754]: Accepted publickey for core from 139.178.68.195 port 32916 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:28:25.045700 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:28:25.051561 systemd-logind[1475]: New session 54 of user core. Oct 9 03:28:25.057551 systemd[1]: Started session-54.scope - Session 54 of User core. Oct 9 03:28:25.800531 sshd[6754]: pam_unix(sshd:session): session closed for user core Oct 9 03:28:25.804685 systemd[1]: sshd@53-188.245.48.63:22-139.178.68.195:32916.service: Deactivated successfully. Oct 9 03:28:25.806953 systemd[1]: session-54.scope: Deactivated successfully. Oct 9 03:28:25.807714 systemd-logind[1475]: Session 54 logged out. Waiting for processes to exit. Oct 9 03:28:25.808834 systemd-logind[1475]: Removed session 54. Oct 9 03:28:30.979377 systemd[1]: Started sshd@54-188.245.48.63:22-139.178.68.195:58738.service - OpenSSH per-connection server daemon (139.178.68.195:58738). 
Oct 9 03:28:31.983737 sshd[6794]: Accepted publickey for core from 139.178.68.195 port 58738 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:28:31.987829 sshd[6794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:28:31.992852 systemd-logind[1475]: New session 55 of user core.
Oct 9 03:28:31.998607 systemd[1]: Started session-55.scope - Session 55 of User core.
Oct 9 03:28:32.779498 sshd[6794]: pam_unix(sshd:session): session closed for user core
Oct 9 03:28:32.784087 systemd[1]: sshd@54-188.245.48.63:22-139.178.68.195:58738.service: Deactivated successfully.
Oct 9 03:28:32.786417 systemd[1]: session-55.scope: Deactivated successfully.
Oct 9 03:28:32.787582 systemd-logind[1475]: Session 55 logged out. Waiting for processes to exit.
Oct 9 03:28:32.788826 systemd-logind[1475]: Removed session 55.
Oct 9 03:28:37.956551 systemd[1]: Started sshd@55-188.245.48.63:22-139.178.68.195:58740.service - OpenSSH per-connection server daemon (139.178.68.195:58740).
Oct 9 03:28:38.980201 sshd[6827]: Accepted publickey for core from 139.178.68.195 port 58740 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:28:38.981705 sshd[6827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:28:38.986012 systemd-logind[1475]: New session 56 of user core.
Oct 9 03:28:38.991575 systemd[1]: Started session-56.scope - Session 56 of User core.
Oct 9 03:28:39.764939 sshd[6827]: pam_unix(sshd:session): session closed for user core
Oct 9 03:28:39.767881 systemd[1]: sshd@55-188.245.48.63:22-139.178.68.195:58740.service: Deactivated successfully.
Oct 9 03:28:39.769811 systemd[1]: session-56.scope: Deactivated successfully.
Oct 9 03:28:39.771251 systemd-logind[1475]: Session 56 logged out. Waiting for processes to exit.
Oct 9 03:28:39.772807 systemd-logind[1475]: Removed session 56.
Oct 9 03:28:44.948880 systemd[1]: Started sshd@56-188.245.48.63:22-139.178.68.195:47610.service - OpenSSH per-connection server daemon (139.178.68.195:47610).
Oct 9 03:28:45.957603 sshd[6865]: Accepted publickey for core from 139.178.68.195 port 47610 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:28:45.960325 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:28:45.969330 systemd-logind[1475]: New session 57 of user core.
Oct 9 03:28:45.976672 systemd[1]: Started session-57.scope - Session 57 of User core.
Oct 9 03:28:46.711410 sshd[6865]: pam_unix(sshd:session): session closed for user core
Oct 9 03:28:46.715319 systemd-logind[1475]: Session 57 logged out. Waiting for processes to exit.
Oct 9 03:28:46.716090 systemd[1]: sshd@56-188.245.48.63:22-139.178.68.195:47610.service: Deactivated successfully.
Oct 9 03:28:46.718081 systemd[1]: session-57.scope: Deactivated successfully.
Oct 9 03:28:46.718952 systemd-logind[1475]: Removed session 57.
Oct 9 03:28:51.889814 systemd[1]: Started sshd@57-188.245.48.63:22-139.178.68.195:53374.service - OpenSSH per-connection server daemon (139.178.68.195:53374).
Oct 9 03:28:52.883325 sshd[6885]: Accepted publickey for core from 139.178.68.195 port 53374 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:28:52.885257 sshd[6885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:28:52.889805 systemd-logind[1475]: New session 58 of user core.
Oct 9 03:28:52.895612 systemd[1]: Started session-58.scope - Session 58 of User core.
Oct 9 03:28:53.635948 sshd[6885]: pam_unix(sshd:session): session closed for user core
Oct 9 03:28:53.639868 systemd[1]: sshd@57-188.245.48.63:22-139.178.68.195:53374.service: Deactivated successfully.
Oct 9 03:28:53.641740 systemd[1]: session-58.scope: Deactivated successfully.
Oct 9 03:28:53.642344 systemd-logind[1475]: Session 58 logged out. Waiting for processes to exit.
Oct 9 03:28:53.643247 systemd-logind[1475]: Removed session 58.
Oct 9 03:28:58.812757 systemd[1]: Started sshd@58-188.245.48.63:22-139.178.68.195:53376.service - OpenSSH per-connection server daemon (139.178.68.195:53376).
Oct 9 03:28:59.808209 sshd[6898]: Accepted publickey for core from 139.178.68.195 port 53376 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:28:59.810286 sshd[6898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:28:59.815601 systemd-logind[1475]: New session 59 of user core.
Oct 9 03:28:59.821578 systemd[1]: Started session-59.scope - Session 59 of User core.
Oct 9 03:29:00.556796 sshd[6898]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:00.561545 systemd[1]: sshd@58-188.245.48.63:22-139.178.68.195:53376.service: Deactivated successfully.
Oct 9 03:29:00.564090 systemd[1]: session-59.scope: Deactivated successfully.
Oct 9 03:29:00.565077 systemd-logind[1475]: Session 59 logged out. Waiting for processes to exit.
Oct 9 03:29:00.566338 systemd-logind[1475]: Removed session 59.
Oct 9 03:29:05.728367 systemd[1]: Started sshd@59-188.245.48.63:22-139.178.68.195:39740.service - OpenSSH per-connection server daemon (139.178.68.195:39740).
Oct 9 03:29:06.725111 sshd[6971]: Accepted publickey for core from 139.178.68.195 port 39740 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:06.727879 sshd[6971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:06.736355 systemd-logind[1475]: New session 60 of user core.
Oct 9 03:29:06.741675 systemd[1]: Started session-60.scope - Session 60 of User core.
Oct 9 03:29:07.493078 sshd[6971]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:07.497749 systemd[1]: sshd@59-188.245.48.63:22-139.178.68.195:39740.service: Deactivated successfully.
Oct 9 03:29:07.501589 systemd[1]: session-60.scope: Deactivated successfully.
Oct 9 03:29:07.503850 systemd-logind[1475]: Session 60 logged out. Waiting for processes to exit.
Oct 9 03:29:07.505324 systemd-logind[1475]: Removed session 60.
Oct 9 03:29:12.674635 systemd[1]: Started sshd@60-188.245.48.63:22-139.178.68.195:44284.service - OpenSSH per-connection server daemon (139.178.68.195:44284).
Oct 9 03:29:13.715012 sshd[6989]: Accepted publickey for core from 139.178.68.195 port 44284 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:13.716719 sshd[6989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:13.721296 systemd-logind[1475]: New session 61 of user core.
Oct 9 03:29:13.725596 systemd[1]: Started session-61.scope - Session 61 of User core.
Oct 9 03:29:14.488886 sshd[6989]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:14.492804 systemd-logind[1475]: Session 61 logged out. Waiting for processes to exit.
Oct 9 03:29:14.493719 systemd[1]: sshd@60-188.245.48.63:22-139.178.68.195:44284.service: Deactivated successfully.
Oct 9 03:29:14.496854 systemd[1]: session-61.scope: Deactivated successfully.
Oct 9 03:29:14.498617 systemd-logind[1475]: Removed session 61.
Oct 9 03:29:19.660223 systemd[1]: Started sshd@61-188.245.48.63:22-139.178.68.195:44300.service - OpenSSH per-connection server daemon (139.178.68.195:44300).
Oct 9 03:29:20.665933 sshd[7004]: Accepted publickey for core from 139.178.68.195 port 44300 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:20.667896 sshd[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:20.673915 systemd-logind[1475]: New session 62 of user core.
Oct 9 03:29:20.678555 systemd[1]: Started session-62.scope - Session 62 of User core.
Oct 9 03:29:21.414722 sshd[7004]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:21.417780 systemd[1]: sshd@61-188.245.48.63:22-139.178.68.195:44300.service: Deactivated successfully.
Oct 9 03:29:21.420130 systemd[1]: session-62.scope: Deactivated successfully.
Oct 9 03:29:21.422233 systemd-logind[1475]: Session 62 logged out. Waiting for processes to exit.
Oct 9 03:29:21.423713 systemd-logind[1475]: Removed session 62.
Oct 9 03:29:26.591691 systemd[1]: Started sshd@62-188.245.48.63:22-139.178.68.195:33542.service - OpenSSH per-connection server daemon (139.178.68.195:33542).
Oct 9 03:29:27.579387 sshd[7022]: Accepted publickey for core from 139.178.68.195 port 33542 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:27.581112 sshd[7022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:27.585491 systemd-logind[1475]: New session 63 of user core.
Oct 9 03:29:27.590589 systemd[1]: Started session-63.scope - Session 63 of User core.
Oct 9 03:29:28.326871 sshd[7022]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:28.330782 systemd[1]: sshd@62-188.245.48.63:22-139.178.68.195:33542.service: Deactivated successfully.
Oct 9 03:29:28.332975 systemd[1]: session-63.scope: Deactivated successfully.
Oct 9 03:29:28.333915 systemd-logind[1475]: Session 63 logged out. Waiting for processes to exit.
Oct 9 03:29:28.334940 systemd-logind[1475]: Removed session 63.
Oct 9 03:29:30.709379 systemd[1]: run-containerd-runc-k8s.io-9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f-runc.qJ8A4N.mount: Deactivated successfully.
Oct 9 03:29:33.505759 systemd[1]: Started sshd@63-188.245.48.63:22-139.178.68.195:49038.service - OpenSSH per-connection server daemon (139.178.68.195:49038).
Oct 9 03:29:34.498031 sshd[7088]: Accepted publickey for core from 139.178.68.195 port 49038 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:34.500625 sshd[7088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:34.505538 systemd-logind[1475]: New session 64 of user core.
Oct 9 03:29:34.509585 systemd[1]: Started session-64.scope - Session 64 of User core.
Oct 9 03:29:35.252070 sshd[7088]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:35.256461 systemd[1]: sshd@63-188.245.48.63:22-139.178.68.195:49038.service: Deactivated successfully.
Oct 9 03:29:35.259055 systemd[1]: session-64.scope: Deactivated successfully.
Oct 9 03:29:35.260337 systemd-logind[1475]: Session 64 logged out. Waiting for processes to exit.
Oct 9 03:29:35.261662 systemd-logind[1475]: Removed session 64.
Oct 9 03:29:40.431743 systemd[1]: Started sshd@64-188.245.48.63:22-139.178.68.195:49052.service - OpenSSH per-connection server daemon (139.178.68.195:49052).
Oct 9 03:29:41.422564 sshd[7104]: Accepted publickey for core from 139.178.68.195 port 49052 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:41.424700 sshd[7104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:41.430501 systemd-logind[1475]: New session 65 of user core.
Oct 9 03:29:41.435642 systemd[1]: Started session-65.scope - Session 65 of User core.
Oct 9 03:29:42.175677 sshd[7104]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:42.180116 systemd[1]: sshd@64-188.245.48.63:22-139.178.68.195:49052.service: Deactivated successfully.
Oct 9 03:29:42.183243 systemd[1]: session-65.scope: Deactivated successfully.
Oct 9 03:29:42.184251 systemd-logind[1475]: Session 65 logged out. Waiting for processes to exit.
Oct 9 03:29:42.185503 systemd-logind[1475]: Removed session 65.
Oct 9 03:29:47.351684 systemd[1]: Started sshd@65-188.245.48.63:22-139.178.68.195:57730.service - OpenSSH per-connection server daemon (139.178.68.195:57730).
Oct 9 03:29:48.338486 sshd[7143]: Accepted publickey for core from 139.178.68.195 port 57730 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:48.340162 sshd[7143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:48.345210 systemd-logind[1475]: New session 66 of user core.
Oct 9 03:29:48.348592 systemd[1]: Started session-66.scope - Session 66 of User core.
Oct 9 03:29:49.088738 sshd[7143]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:49.093666 systemd[1]: sshd@65-188.245.48.63:22-139.178.68.195:57730.service: Deactivated successfully.
Oct 9 03:29:49.095672 systemd[1]: session-66.scope: Deactivated successfully.
Oct 9 03:29:49.096393 systemd-logind[1475]: Session 66 logged out. Waiting for processes to exit.
Oct 9 03:29:49.097588 systemd-logind[1475]: Removed session 66.
Oct 9 03:29:54.269767 systemd[1]: Started sshd@66-188.245.48.63:22-139.178.68.195:60492.service - OpenSSH per-connection server daemon (139.178.68.195:60492).
Oct 9 03:29:55.279930 sshd[7160]: Accepted publickey for core from 139.178.68.195 port 60492 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:29:55.281729 sshd[7160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:29:55.287120 systemd-logind[1475]: New session 67 of user core.
Oct 9 03:29:55.293608 systemd[1]: Started session-67.scope - Session 67 of User core.
Oct 9 03:29:56.025556 sshd[7160]: pam_unix(sshd:session): session closed for user core
Oct 9 03:29:56.029371 systemd-logind[1475]: Session 67 logged out. Waiting for processes to exit.
Oct 9 03:29:56.030181 systemd[1]: sshd@66-188.245.48.63:22-139.178.68.195:60492.service: Deactivated successfully.
Oct 9 03:29:56.032599 systemd[1]: session-67.scope: Deactivated successfully.
Oct 9 03:29:56.033690 systemd-logind[1475]: Removed session 67.
Oct 9 03:30:00.712611 systemd[1]: run-containerd-runc-k8s.io-9595fba1afde3eafe848764a995ba9ab87c856ee6896191a6bb31859a60c225f-runc.F8cbyZ.mount: Deactivated successfully.
Oct 9 03:30:01.201744 systemd[1]: Started sshd@67-188.245.48.63:22-139.178.68.195:53754.service - OpenSSH per-connection server daemon (139.178.68.195:53754).
Oct 9 03:30:02.201734 sshd[7199]: Accepted publickey for core from 139.178.68.195 port 53754 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:02.203383 sshd[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:02.208101 systemd-logind[1475]: New session 68 of user core.
Oct 9 03:30:02.213574 systemd[1]: Started session-68.scope - Session 68 of User core.
Oct 9 03:30:02.951710 sshd[7199]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:02.955778 systemd[1]: sshd@67-188.245.48.63:22-139.178.68.195:53754.service: Deactivated successfully.
Oct 9 03:30:02.958086 systemd[1]: session-68.scope: Deactivated successfully.
Oct 9 03:30:02.960176 systemd-logind[1475]: Session 68 logged out. Waiting for processes to exit.
Oct 9 03:30:02.962086 systemd-logind[1475]: Removed session 68.
Oct 9 03:30:05.498401 systemd[1]: Started sshd@68-188.245.48.63:22-80.64.30.139:22728.service - OpenSSH per-connection server daemon (80.64.30.139:22728).
Oct 9 03:30:06.235920 sshd[7237]: Connection closed by authenticating user root 80.64.30.139 port 22728 [preauth]
Oct 9 03:30:06.238090 systemd[1]: sshd@68-188.245.48.63:22-80.64.30.139:22728.service: Deactivated successfully.
Oct 9 03:30:08.129663 systemd[1]: Started sshd@69-188.245.48.63:22-139.178.68.195:53768.service - OpenSSH per-connection server daemon (139.178.68.195:53768).
Oct 9 03:30:09.113458 sshd[7242]: Accepted publickey for core from 139.178.68.195 port 53768 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:09.115474 sshd[7242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:09.120352 systemd-logind[1475]: New session 69 of user core.
Oct 9 03:30:09.124568 systemd[1]: Started session-69.scope - Session 69 of User core.
Oct 9 03:30:09.908468 sshd[7242]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:09.912709 systemd[1]: sshd@69-188.245.48.63:22-139.178.68.195:53768.service: Deactivated successfully.
Oct 9 03:30:09.915025 systemd[1]: session-69.scope: Deactivated successfully.
Oct 9 03:30:09.916314 systemd-logind[1475]: Session 69 logged out. Waiting for processes to exit.
Oct 9 03:30:09.917417 systemd-logind[1475]: Removed session 69.
Oct 9 03:30:15.079741 systemd[1]: Started sshd@70-188.245.48.63:22-139.178.68.195:47478.service - OpenSSH per-connection server daemon (139.178.68.195:47478).
Oct 9 03:30:16.079486 sshd[7260]: Accepted publickey for core from 139.178.68.195 port 47478 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:16.081268 sshd[7260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:16.085757 systemd-logind[1475]: New session 70 of user core.
Oct 9 03:30:16.091562 systemd[1]: Started session-70.scope - Session 70 of User core.
Oct 9 03:30:16.843489 sshd[7260]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:16.847508 systemd[1]: sshd@70-188.245.48.63:22-139.178.68.195:47478.service: Deactivated successfully.
Oct 9 03:30:16.849850 systemd[1]: session-70.scope: Deactivated successfully.
Oct 9 03:30:16.850562 systemd-logind[1475]: Session 70 logged out. Waiting for processes to exit.
Oct 9 03:30:16.851617 systemd-logind[1475]: Removed session 70.
Oct 9 03:30:22.022792 systemd[1]: Started sshd@71-188.245.48.63:22-139.178.68.195:46304.service - OpenSSH per-connection server daemon (139.178.68.195:46304).
Oct 9 03:30:23.005776 sshd[7276]: Accepted publickey for core from 139.178.68.195 port 46304 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:23.007922 sshd[7276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:23.012161 systemd-logind[1475]: New session 71 of user core.
Oct 9 03:30:23.023600 systemd[1]: Started session-71.scope - Session 71 of User core.
Oct 9 03:30:23.766089 sshd[7276]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:23.769790 systemd-logind[1475]: Session 71 logged out. Waiting for processes to exit.
Oct 9 03:30:23.770701 systemd[1]: sshd@71-188.245.48.63:22-139.178.68.195:46304.service: Deactivated successfully.
Oct 9 03:30:23.772965 systemd[1]: session-71.scope: Deactivated successfully.
Oct 9 03:30:23.774677 systemd-logind[1475]: Removed session 71.
Oct 9 03:30:28.945908 systemd[1]: Started sshd@72-188.245.48.63:22-139.178.68.195:46310.service - OpenSSH per-connection server daemon (139.178.68.195:46310).
Oct 9 03:30:29.949807 sshd[7295]: Accepted publickey for core from 139.178.68.195 port 46310 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:29.951336 sshd[7295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:29.955622 systemd-logind[1475]: New session 72 of user core.
Oct 9 03:30:29.959567 systemd[1]: Started session-72.scope - Session 72 of User core.
Oct 9 03:30:30.711595 sshd[7295]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:30.716996 systemd[1]: sshd@72-188.245.48.63:22-139.178.68.195:46310.service: Deactivated successfully.
Oct 9 03:30:30.720202 systemd[1]: session-72.scope: Deactivated successfully.
Oct 9 03:30:30.721117 systemd-logind[1475]: Session 72 logged out. Waiting for processes to exit.
Oct 9 03:30:30.722193 systemd-logind[1475]: Removed session 72.
Oct 9 03:30:35.891747 systemd[1]: Started sshd@73-188.245.48.63:22-139.178.68.195:47634.service - OpenSSH per-connection server daemon (139.178.68.195:47634).
Oct 9 03:30:36.893823 sshd[7370]: Accepted publickey for core from 139.178.68.195 port 47634 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:36.895577 sshd[7370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:36.899293 systemd-logind[1475]: New session 73 of user core.
Oct 9 03:30:36.905560 systemd[1]: Started session-73.scope - Session 73 of User core.
Oct 9 03:30:37.680557 sshd[7370]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:37.684517 systemd[1]: sshd@73-188.245.48.63:22-139.178.68.195:47634.service: Deactivated successfully.
Oct 9 03:30:37.686733 systemd[1]: session-73.scope: Deactivated successfully.
Oct 9 03:30:37.687406 systemd-logind[1475]: Session 73 logged out. Waiting for processes to exit.
Oct 9 03:30:37.688410 systemd-logind[1475]: Removed session 73.
Oct 9 03:30:42.859897 systemd[1]: Started sshd@74-188.245.48.63:22-139.178.68.195:49902.service - OpenSSH per-connection server daemon (139.178.68.195:49902).
Oct 9 03:30:43.898082 sshd[7384]: Accepted publickey for core from 139.178.68.195 port 49902 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:43.901191 sshd[7384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:43.905493 systemd-logind[1475]: New session 74 of user core.
Oct 9 03:30:43.913604 systemd[1]: Started session-74.scope - Session 74 of User core.
Oct 9 03:30:44.432764 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.mDWLsX.mount: Deactivated successfully.
Oct 9 03:30:44.787487 sshd[7384]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:44.793676 systemd[1]: sshd@74-188.245.48.63:22-139.178.68.195:49902.service: Deactivated successfully.
Oct 9 03:30:44.797207 systemd[1]: session-74.scope: Deactivated successfully.
Oct 9 03:30:44.798604 systemd-logind[1475]: Session 74 logged out. Waiting for processes to exit.
Oct 9 03:30:44.799834 systemd-logind[1475]: Removed session 74.
Oct 9 03:30:49.959225 systemd[1]: Started sshd@75-188.245.48.63:22-139.178.68.195:49906.service - OpenSSH per-connection server daemon (139.178.68.195:49906).
Oct 9 03:30:50.954162 sshd[7422]: Accepted publickey for core from 139.178.68.195 port 49906 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:50.955783 sshd[7422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:50.960369 systemd-logind[1475]: New session 75 of user core.
Oct 9 03:30:50.966618 systemd[1]: Started session-75.scope - Session 75 of User core.
Oct 9 03:30:51.712396 sshd[7422]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:51.717176 systemd-logind[1475]: Session 75 logged out. Waiting for processes to exit.
Oct 9 03:30:51.717891 systemd[1]: sshd@75-188.245.48.63:22-139.178.68.195:49906.service: Deactivated successfully.
Oct 9 03:30:51.720756 systemd[1]: session-75.scope: Deactivated successfully.
Oct 9 03:30:51.721905 systemd-logind[1475]: Removed session 75.
Oct 9 03:30:56.883456 systemd[1]: Started sshd@76-188.245.48.63:22-139.178.68.195:51492.service - OpenSSH per-connection server daemon (139.178.68.195:51492).
Oct 9 03:30:57.882463 sshd[7435]: Accepted publickey for core from 139.178.68.195 port 51492 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:30:57.884076 sshd[7435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:30:57.887869 systemd-logind[1475]: New session 76 of user core.
Oct 9 03:30:57.894553 systemd[1]: Started session-76.scope - Session 76 of User core.
Oct 9 03:30:58.638691 sshd[7435]: pam_unix(sshd:session): session closed for user core
Oct 9 03:30:58.642724 systemd[1]: sshd@76-188.245.48.63:22-139.178.68.195:51492.service: Deactivated successfully.
Oct 9 03:30:58.644818 systemd[1]: session-76.scope: Deactivated successfully.
Oct 9 03:30:58.645680 systemd-logind[1475]: Session 76 logged out. Waiting for processes to exit.
Oct 9 03:30:58.646858 systemd-logind[1475]: Removed session 76.
Oct 9 03:31:03.822634 systemd[1]: Started sshd@77-188.245.48.63:22-139.178.68.195:40108.service - OpenSSH per-connection server daemon (139.178.68.195:40108).
Oct 9 03:31:04.874769 sshd[7495]: Accepted publickey for core from 139.178.68.195 port 40108 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:04.876552 sshd[7495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:04.880863 systemd-logind[1475]: New session 77 of user core.
Oct 9 03:31:04.891615 systemd[1]: Started session-77.scope - Session 77 of User core.
Oct 9 03:31:05.725209 sshd[7495]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:05.728038 systemd[1]: sshd@77-188.245.48.63:22-139.178.68.195:40108.service: Deactivated successfully.
Oct 9 03:31:05.729758 systemd[1]: session-77.scope: Deactivated successfully.
Oct 9 03:31:05.731248 systemd-logind[1475]: Session 77 logged out. Waiting for processes to exit.
Oct 9 03:31:05.732911 systemd-logind[1475]: Removed session 77.
Oct 9 03:31:10.903671 systemd[1]: Started sshd@78-188.245.48.63:22-139.178.68.195:46778.service - OpenSSH per-connection server daemon (139.178.68.195:46778).
Oct 9 03:31:11.901711 sshd[7513]: Accepted publickey for core from 139.178.68.195 port 46778 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:11.903694 sshd[7513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:11.909107 systemd-logind[1475]: New session 78 of user core.
Oct 9 03:31:11.911582 systemd[1]: Started session-78.scope - Session 78 of User core.
Oct 9 03:31:12.668939 sshd[7513]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:12.672738 systemd[1]: sshd@78-188.245.48.63:22-139.178.68.195:46778.service: Deactivated successfully.
Oct 9 03:31:12.675988 systemd[1]: session-78.scope: Deactivated successfully.
Oct 9 03:31:12.677916 systemd-logind[1475]: Session 78 logged out. Waiting for processes to exit.
Oct 9 03:31:12.679330 systemd-logind[1475]: Removed session 78.
Oct 9 03:31:17.839185 systemd[1]: Started sshd@79-188.245.48.63:22-139.178.68.195:46792.service - OpenSSH per-connection server daemon (139.178.68.195:46792).
Oct 9 03:31:18.838683 sshd[7529]: Accepted publickey for core from 139.178.68.195 port 46792 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:18.840369 sshd[7529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:18.845220 systemd-logind[1475]: New session 79 of user core.
Oct 9 03:31:18.850583 systemd[1]: Started session-79.scope - Session 79 of User core.
Oct 9 03:31:19.615071 sshd[7529]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:19.618451 systemd[1]: sshd@79-188.245.48.63:22-139.178.68.195:46792.service: Deactivated successfully.
Oct 9 03:31:19.620875 systemd[1]: session-79.scope: Deactivated successfully.
Oct 9 03:31:19.622742 systemd-logind[1475]: Session 79 logged out. Waiting for processes to exit.
Oct 9 03:31:19.624022 systemd-logind[1475]: Removed session 79.
Oct 9 03:31:24.788508 systemd[1]: Started sshd@80-188.245.48.63:22-139.178.68.195:45046.service - OpenSSH per-connection server daemon (139.178.68.195:45046).
Oct 9 03:31:25.787478 sshd[7548]: Accepted publickey for core from 139.178.68.195 port 45046 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:25.788918 sshd[7548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:25.794594 systemd-logind[1475]: New session 80 of user core.
Oct 9 03:31:25.798553 systemd[1]: Started session-80.scope - Session 80 of User core.
Oct 9 03:31:26.537947 sshd[7548]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:26.541948 systemd[1]: sshd@80-188.245.48.63:22-139.178.68.195:45046.service: Deactivated successfully.
Oct 9 03:31:26.544691 systemd[1]: session-80.scope: Deactivated successfully.
Oct 9 03:31:26.546312 systemd-logind[1475]: Session 80 logged out. Waiting for processes to exit.
Oct 9 03:31:26.547848 systemd-logind[1475]: Removed session 80.
Oct 9 03:31:26.714661 systemd[1]: Started sshd@81-188.245.48.63:22-139.178.68.195:45052.service - OpenSSH per-connection server daemon (139.178.68.195:45052).
Oct 9 03:31:27.702551 sshd[7563]: Accepted publickey for core from 139.178.68.195 port 45052 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:27.704169 sshd[7563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:27.709297 systemd-logind[1475]: New session 81 of user core.
Oct 9 03:31:27.714633 systemd[1]: Started session-81.scope - Session 81 of User core.
Oct 9 03:31:28.593587 sshd[7563]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:28.600837 systemd[1]: sshd@81-188.245.48.63:22-139.178.68.195:45052.service: Deactivated successfully.
Oct 9 03:31:28.603345 systemd[1]: session-81.scope: Deactivated successfully.
Oct 9 03:31:28.604863 systemd-logind[1475]: Session 81 logged out. Waiting for processes to exit.
Oct 9 03:31:28.606110 systemd-logind[1475]: Removed session 81.
Oct 9 03:31:28.774767 systemd[1]: Started sshd@82-188.245.48.63:22-139.178.68.195:45064.service - OpenSSH per-connection server daemon (139.178.68.195:45064).
Oct 9 03:31:29.782802 sshd[7579]: Accepted publickey for core from 139.178.68.195 port 45064 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:29.784617 sshd[7579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:29.789406 systemd-logind[1475]: New session 82 of user core.
Oct 9 03:31:29.795614 systemd[1]: Started session-82.scope - Session 82 of User core.
Oct 9 03:31:32.175555 sshd[7579]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:32.180704 systemd[1]: sshd@82-188.245.48.63:22-139.178.68.195:45064.service: Deactivated successfully.
Oct 9 03:31:32.182892 systemd[1]: session-82.scope: Deactivated successfully.
Oct 9 03:31:32.184684 systemd-logind[1475]: Session 82 logged out. Waiting for processes to exit.
Oct 9 03:31:32.186508 systemd-logind[1475]: Removed session 82.
Oct 9 03:31:32.352401 systemd[1]: Started sshd@83-188.245.48.63:22-139.178.68.195:60104.service - OpenSSH per-connection server daemon (139.178.68.195:60104).
Oct 9 03:31:33.440231 sshd[7628]: Accepted publickey for core from 139.178.68.195 port 60104 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:33.443284 sshd[7628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:33.448022 systemd-logind[1475]: New session 83 of user core.
Oct 9 03:31:33.453600 systemd[1]: Started session-83.scope - Session 83 of User core.
Oct 9 03:31:33.542592 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.8oRYdo.mount: Deactivated successfully.
Oct 9 03:31:34.777210 sshd[7628]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:34.780521 systemd[1]: sshd@83-188.245.48.63:22-139.178.68.195:60104.service: Deactivated successfully.
Oct 9 03:31:34.782948 systemd[1]: session-83.scope: Deactivated successfully.
Oct 9 03:31:34.785231 systemd-logind[1475]: Session 83 logged out. Waiting for processes to exit.
Oct 9 03:31:34.786572 systemd-logind[1475]: Removed session 83.
Oct 9 03:31:34.953864 systemd[1]: Started sshd@84-188.245.48.63:22-139.178.68.195:60110.service - OpenSSH per-connection server daemon (139.178.68.195:60110).
Oct 9 03:31:35.966492 sshd[7657]: Accepted publickey for core from 139.178.68.195 port 60110 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:35.968191 sshd[7657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:35.973082 systemd-logind[1475]: New session 84 of user core.
Oct 9 03:31:35.980566 systemd[1]: Started session-84.scope - Session 84 of User core.
Oct 9 03:31:36.729105 sshd[7657]: pam_unix(sshd:session): session closed for user core
Oct 9 03:31:36.734699 systemd[1]: sshd@84-188.245.48.63:22-139.178.68.195:60110.service: Deactivated successfully.
Oct 9 03:31:36.738216 systemd[1]: session-84.scope: Deactivated successfully.
Oct 9 03:31:36.739140 systemd-logind[1475]: Session 84 logged out. Waiting for processes to exit.
Oct 9 03:31:36.740287 systemd-logind[1475]: Removed session 84.
Oct 9 03:31:41.903704 systemd[1]: Started sshd@85-188.245.48.63:22-139.178.68.195:43992.service - OpenSSH per-connection server daemon (139.178.68.195:43992).
Oct 9 03:31:42.889938 sshd[7675]: Accepted publickey for core from 139.178.68.195 port 43992 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:31:42.891497 sshd[7675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:31:42.896494 systemd-logind[1475]: New session 85 of user core.
Oct 9 03:31:42.903615 systemd[1]: Started session-85.scope - Session 85 of User core. Oct 9 03:31:43.638060 sshd[7675]: pam_unix(sshd:session): session closed for user core Oct 9 03:31:43.641595 systemd-logind[1475]: Session 85 logged out. Waiting for processes to exit. Oct 9 03:31:43.642290 systemd[1]: sshd@85-188.245.48.63:22-139.178.68.195:43992.service: Deactivated successfully. Oct 9 03:31:43.644257 systemd[1]: session-85.scope: Deactivated successfully. Oct 9 03:31:43.645163 systemd-logind[1475]: Removed session 85. Oct 9 03:31:48.810674 systemd[1]: Started sshd@86-188.245.48.63:22-139.178.68.195:44008.service - OpenSSH per-connection server daemon (139.178.68.195:44008). Oct 9 03:31:49.804875 sshd[7711]: Accepted publickey for core from 139.178.68.195 port 44008 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:31:49.805787 sshd[7711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:31:49.809962 systemd-logind[1475]: New session 86 of user core. Oct 9 03:31:49.814585 systemd[1]: Started session-86.scope - Session 86 of User core. Oct 9 03:31:50.555549 sshd[7711]: pam_unix(sshd:session): session closed for user core Oct 9 03:31:50.558730 systemd[1]: sshd@86-188.245.48.63:22-139.178.68.195:44008.service: Deactivated successfully. Oct 9 03:31:50.561911 systemd[1]: session-86.scope: Deactivated successfully. Oct 9 03:31:50.562865 systemd-logind[1475]: Session 86 logged out. Waiting for processes to exit. Oct 9 03:31:50.563959 systemd-logind[1475]: Removed session 86. Oct 9 03:31:55.734860 systemd[1]: Started sshd@87-188.245.48.63:22-139.178.68.195:50378.service - OpenSSH per-connection server daemon (139.178.68.195:50378). 
Oct 9 03:31:56.783049 sshd[7734]: Accepted publickey for core from 139.178.68.195 port 50378 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:31:56.787555 sshd[7734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:31:56.798477 systemd-logind[1475]: New session 87 of user core. Oct 9 03:31:56.805742 systemd[1]: Started session-87.scope - Session 87 of User core. Oct 9 03:31:57.580066 sshd[7734]: pam_unix(sshd:session): session closed for user core Oct 9 03:31:57.588239 systemd[1]: sshd@87-188.245.48.63:22-139.178.68.195:50378.service: Deactivated successfully. Oct 9 03:31:57.594322 systemd[1]: session-87.scope: Deactivated successfully. Oct 9 03:31:57.596410 systemd-logind[1475]: Session 87 logged out. Waiting for processes to exit. Oct 9 03:31:57.598872 systemd-logind[1475]: Removed session 87. Oct 9 03:32:02.756945 systemd[1]: Started sshd@88-188.245.48.63:22-139.178.68.195:47890.service - OpenSSH per-connection server daemon (139.178.68.195:47890). Oct 9 03:32:03.498138 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.2mP3Gc.mount: Deactivated successfully. Oct 9 03:32:03.744297 sshd[7774]: Accepted publickey for core from 139.178.68.195 port 47890 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:03.746108 sshd[7774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:03.751099 systemd-logind[1475]: New session 88 of user core. Oct 9 03:32:03.758689 systemd[1]: Started session-88.scope - Session 88 of User core. Oct 9 03:32:04.503934 sshd[7774]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:04.509137 systemd[1]: sshd@88-188.245.48.63:22-139.178.68.195:47890.service: Deactivated successfully. Oct 9 03:32:04.511357 systemd[1]: session-88.scope: Deactivated successfully. Oct 9 03:32:04.512300 systemd-logind[1475]: Session 88 logged out. Waiting for processes to exit. 
Oct 9 03:32:04.513855 systemd-logind[1475]: Removed session 88. Oct 9 03:32:09.685700 systemd[1]: Started sshd@89-188.245.48.63:22-139.178.68.195:47898.service - OpenSSH per-connection server daemon (139.178.68.195:47898). Oct 9 03:32:10.726728 sshd[7820]: Accepted publickey for core from 139.178.68.195 port 47898 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:10.728657 sshd[7820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:10.733225 systemd-logind[1475]: New session 89 of user core. Oct 9 03:32:10.738574 systemd[1]: Started session-89.scope - Session 89 of User core. Oct 9 03:32:11.526984 sshd[7820]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:11.530991 systemd-logind[1475]: Session 89 logged out. Waiting for processes to exit. Oct 9 03:32:11.531914 systemd[1]: sshd@89-188.245.48.63:22-139.178.68.195:47898.service: Deactivated successfully. Oct 9 03:32:11.534290 systemd[1]: session-89.scope: Deactivated successfully. Oct 9 03:32:11.535806 systemd-logind[1475]: Removed session 89. Oct 9 03:32:16.695768 systemd[1]: Started sshd@90-188.245.48.63:22-139.178.68.195:35552.service - OpenSSH per-connection server daemon (139.178.68.195:35552). Oct 9 03:32:17.701401 sshd[7842]: Accepted publickey for core from 139.178.68.195 port 35552 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:17.703126 sshd[7842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:17.707532 systemd-logind[1475]: New session 90 of user core. Oct 9 03:32:17.711562 systemd[1]: Started session-90.scope - Session 90 of User core. Oct 9 03:32:18.458287 sshd[7842]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:18.461189 systemd[1]: sshd@90-188.245.48.63:22-139.178.68.195:35552.service: Deactivated successfully. Oct 9 03:32:18.463204 systemd[1]: session-90.scope: Deactivated successfully. 
Oct 9 03:32:18.464751 systemd-logind[1475]: Session 90 logged out. Waiting for processes to exit. Oct 9 03:32:18.465895 systemd-logind[1475]: Removed session 90. Oct 9 03:32:23.633988 systemd[1]: Started sshd@91-188.245.48.63:22-139.178.68.195:60886.service - OpenSSH per-connection server daemon (139.178.68.195:60886). Oct 9 03:32:24.646835 sshd[7860]: Accepted publickey for core from 139.178.68.195 port 60886 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:24.649672 sshd[7860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:24.654130 systemd-logind[1475]: New session 91 of user core. Oct 9 03:32:24.660573 systemd[1]: Started session-91.scope - Session 91 of User core. Oct 9 03:32:25.395545 sshd[7860]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:25.400205 systemd-logind[1475]: Session 91 logged out. Waiting for processes to exit. Oct 9 03:32:25.400534 systemd[1]: sshd@91-188.245.48.63:22-139.178.68.195:60886.service: Deactivated successfully. Oct 9 03:32:25.403660 systemd[1]: session-91.scope: Deactivated successfully. Oct 9 03:32:25.404639 systemd-logind[1475]: Removed session 91. Oct 9 03:32:30.572660 systemd[1]: Started sshd@92-188.245.48.63:22-139.178.68.195:60888.service - OpenSSH per-connection server daemon (139.178.68.195:60888). Oct 9 03:32:31.584761 sshd[7873]: Accepted publickey for core from 139.178.68.195 port 60888 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:31.586399 sshd[7873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:31.591137 systemd-logind[1475]: New session 92 of user core. Oct 9 03:32:31.600601 systemd[1]: Started session-92.scope - Session 92 of User core. Oct 9 03:32:32.364465 sshd[7873]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:32.367815 systemd[1]: sshd@92-188.245.48.63:22-139.178.68.195:60888.service: Deactivated successfully. 
Oct 9 03:32:32.370157 systemd[1]: session-92.scope: Deactivated successfully. Oct 9 03:32:32.373017 systemd-logind[1475]: Session 92 logged out. Waiting for processes to exit. Oct 9 03:32:32.374152 systemd-logind[1475]: Removed session 92. Oct 9 03:32:37.547641 systemd[1]: Started sshd@93-188.245.48.63:22-139.178.68.195:52210.service - OpenSSH per-connection server daemon (139.178.68.195:52210). Oct 9 03:32:38.539870 sshd[7936]: Accepted publickey for core from 139.178.68.195 port 52210 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:38.541644 sshd[7936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:38.545722 systemd-logind[1475]: New session 93 of user core. Oct 9 03:32:38.550612 systemd[1]: Started session-93.scope - Session 93 of User core. Oct 9 03:32:39.302205 sshd[7936]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:39.306780 systemd[1]: sshd@93-188.245.48.63:22-139.178.68.195:52210.service: Deactivated successfully. Oct 9 03:32:39.309532 systemd[1]: session-93.scope: Deactivated successfully. Oct 9 03:32:39.310372 systemd-logind[1475]: Session 93 logged out. Waiting for processes to exit. Oct 9 03:32:39.312048 systemd-logind[1475]: Removed session 93. Oct 9 03:32:44.477503 systemd[1]: Started sshd@94-188.245.48.63:22-139.178.68.195:52674.service - OpenSSH per-connection server daemon (139.178.68.195:52674). Oct 9 03:32:45.465736 sshd[7973]: Accepted publickey for core from 139.178.68.195 port 52674 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:45.467454 sshd[7973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:45.472337 systemd-logind[1475]: New session 94 of user core. Oct 9 03:32:45.475605 systemd[1]: Started session-94.scope - Session 94 of User core. 
Oct 9 03:32:46.210152 sshd[7973]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:46.213010 systemd[1]: sshd@94-188.245.48.63:22-139.178.68.195:52674.service: Deactivated successfully. Oct 9 03:32:46.215099 systemd[1]: session-94.scope: Deactivated successfully. Oct 9 03:32:46.217068 systemd-logind[1475]: Session 94 logged out. Waiting for processes to exit. Oct 9 03:32:46.218204 systemd-logind[1475]: Removed session 94. Oct 9 03:32:51.381372 systemd[1]: Started sshd@95-188.245.48.63:22-139.178.68.195:43468.service - OpenSSH per-connection server daemon (139.178.68.195:43468). Oct 9 03:32:52.379352 sshd[7988]: Accepted publickey for core from 139.178.68.195 port 43468 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:52.382497 sshd[7988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:52.390019 systemd-logind[1475]: New session 95 of user core. Oct 9 03:32:52.396693 systemd[1]: Started session-95.scope - Session 95 of User core. Oct 9 03:32:53.141888 sshd[7988]: pam_unix(sshd:session): session closed for user core Oct 9 03:32:53.150345 systemd[1]: sshd@95-188.245.48.63:22-139.178.68.195:43468.service: Deactivated successfully. Oct 9 03:32:53.155343 systemd[1]: session-95.scope: Deactivated successfully. Oct 9 03:32:53.156462 systemd-logind[1475]: Session 95 logged out. Waiting for processes to exit. Oct 9 03:32:53.158861 systemd-logind[1475]: Removed session 95. Oct 9 03:32:58.315691 systemd[1]: Started sshd@96-188.245.48.63:22-139.178.68.195:43480.service - OpenSSH per-connection server daemon (139.178.68.195:43480). Oct 9 03:32:59.301375 sshd[8006]: Accepted publickey for core from 139.178.68.195 port 43480 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:32:59.303541 sshd[8006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:32:59.308339 systemd-logind[1475]: New session 96 of user core. 
Oct 9 03:32:59.312583 systemd[1]: Started session-96.scope - Session 96 of User core. Oct 9 03:33:00.048310 sshd[8006]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:00.050922 systemd[1]: sshd@96-188.245.48.63:22-139.178.68.195:43480.service: Deactivated successfully. Oct 9 03:33:00.052957 systemd[1]: session-96.scope: Deactivated successfully. Oct 9 03:33:00.054620 systemd-logind[1475]: Session 96 logged out. Waiting for processes to exit. Oct 9 03:33:00.055595 systemd-logind[1475]: Removed session 96. Oct 9 03:33:05.223684 systemd[1]: Started sshd@97-188.245.48.63:22-139.178.68.195:50288.service - OpenSSH per-connection server daemon (139.178.68.195:50288). Oct 9 03:33:06.220920 sshd[8068]: Accepted publickey for core from 139.178.68.195 port 50288 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:06.222876 sshd[8068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:06.227471 systemd-logind[1475]: New session 97 of user core. Oct 9 03:33:06.231582 systemd[1]: Started session-97.scope - Session 97 of User core. Oct 9 03:33:06.974251 sshd[8068]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:06.978321 systemd-logind[1475]: Session 97 logged out. Waiting for processes to exit. Oct 9 03:33:06.979214 systemd[1]: sshd@97-188.245.48.63:22-139.178.68.195:50288.service: Deactivated successfully. Oct 9 03:33:06.981329 systemd[1]: session-97.scope: Deactivated successfully. Oct 9 03:33:06.982573 systemd-logind[1475]: Removed session 97. Oct 9 03:33:12.159015 systemd[1]: Started sshd@98-188.245.48.63:22-139.178.68.195:37780.service - OpenSSH per-connection server daemon (139.178.68.195:37780). 
Oct 9 03:33:13.181019 sshd[8081]: Accepted publickey for core from 139.178.68.195 port 37780 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:13.182775 sshd[8081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:13.187731 systemd-logind[1475]: New session 98 of user core. Oct 9 03:33:13.192597 systemd[1]: Started session-98.scope - Session 98 of User core. Oct 9 03:33:13.932367 sshd[8081]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:13.935474 systemd[1]: sshd@98-188.245.48.63:22-139.178.68.195:37780.service: Deactivated successfully. Oct 9 03:33:13.937576 systemd[1]: session-98.scope: Deactivated successfully. Oct 9 03:33:13.939126 systemd-logind[1475]: Session 98 logged out. Waiting for processes to exit. Oct 9 03:33:13.941172 systemd-logind[1475]: Removed session 98. Oct 9 03:33:19.107699 systemd[1]: Started sshd@99-188.245.48.63:22-139.178.68.195:37794.service - OpenSSH per-connection server daemon (139.178.68.195:37794). Oct 9 03:33:20.091064 sshd[8101]: Accepted publickey for core from 139.178.68.195 port 37794 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:20.092852 sshd[8101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:20.097870 systemd-logind[1475]: New session 99 of user core. Oct 9 03:33:20.104596 systemd[1]: Started session-99.scope - Session 99 of User core. Oct 9 03:33:20.834902 sshd[8101]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:20.839040 systemd[1]: sshd@99-188.245.48.63:22-139.178.68.195:37794.service: Deactivated successfully. Oct 9 03:33:20.841893 systemd[1]: session-99.scope: Deactivated successfully. Oct 9 03:33:20.842792 systemd-logind[1475]: Session 99 logged out. Waiting for processes to exit. Oct 9 03:33:20.843996 systemd-logind[1475]: Removed session 99. 
Oct 9 03:33:26.011675 systemd[1]: Started sshd@100-188.245.48.63:22-139.178.68.195:42952.service - OpenSSH per-connection server daemon (139.178.68.195:42952). Oct 9 03:33:27.007012 sshd[8119]: Accepted publickey for core from 139.178.68.195 port 42952 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:27.009633 sshd[8119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:27.016912 systemd-logind[1475]: New session 100 of user core. Oct 9 03:33:27.024653 systemd[1]: Started session-100.scope - Session 100 of User core. Oct 9 03:33:27.762653 sshd[8119]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:27.767968 systemd[1]: sshd@100-188.245.48.63:22-139.178.68.195:42952.service: Deactivated successfully. Oct 9 03:33:27.770289 systemd[1]: session-100.scope: Deactivated successfully. Oct 9 03:33:27.771339 systemd-logind[1475]: Session 100 logged out. Waiting for processes to exit. Oct 9 03:33:27.772489 systemd-logind[1475]: Removed session 100. Oct 9 03:33:32.944702 systemd[1]: Started sshd@101-188.245.48.63:22-139.178.68.195:36842.service - OpenSSH per-connection server daemon (139.178.68.195:36842). Oct 9 03:33:33.497607 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.v1N4dw.mount: Deactivated successfully. Oct 9 03:33:33.934870 sshd[8156]: Accepted publickey for core from 139.178.68.195 port 36842 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:33.936588 sshd[8156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:33.941066 systemd-logind[1475]: New session 101 of user core. Oct 9 03:33:33.947604 systemd[1]: Started session-101.scope - Session 101 of User core. Oct 9 03:33:34.685154 sshd[8156]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:34.688713 systemd-logind[1475]: Session 101 logged out. Waiting for processes to exit. 
Oct 9 03:33:34.689547 systemd[1]: sshd@101-188.245.48.63:22-139.178.68.195:36842.service: Deactivated successfully. Oct 9 03:33:34.691473 systemd[1]: session-101.scope: Deactivated successfully. Oct 9 03:33:34.692403 systemd-logind[1475]: Removed session 101. Oct 9 03:33:39.860816 systemd[1]: Started sshd@102-188.245.48.63:22-139.178.68.195:36844.service - OpenSSH per-connection server daemon (139.178.68.195:36844). Oct 9 03:33:40.851473 sshd[8205]: Accepted publickey for core from 139.178.68.195 port 36844 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:40.853035 sshd[8205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:40.856838 systemd-logind[1475]: New session 102 of user core. Oct 9 03:33:40.861580 systemd[1]: Started session-102.scope - Session 102 of User core. Oct 9 03:33:41.594212 sshd[8205]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:41.598223 systemd[1]: sshd@102-188.245.48.63:22-139.178.68.195:36844.service: Deactivated successfully. Oct 9 03:33:41.600347 systemd[1]: session-102.scope: Deactivated successfully. Oct 9 03:33:41.601034 systemd-logind[1475]: Session 102 logged out. Waiting for processes to exit. Oct 9 03:33:41.602356 systemd-logind[1475]: Removed session 102. Oct 9 03:33:46.765071 systemd[1]: Started sshd@103-188.245.48.63:22-139.178.68.195:51188.service - OpenSSH per-connection server daemon (139.178.68.195:51188). Oct 9 03:33:47.759242 sshd[8244]: Accepted publickey for core from 139.178.68.195 port 51188 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:47.761290 sshd[8244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:47.765603 systemd-logind[1475]: New session 103 of user core. Oct 9 03:33:47.768568 systemd[1]: Started session-103.scope - Session 103 of User core. 
Oct 9 03:33:48.503068 sshd[8244]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:48.507356 systemd[1]: sshd@103-188.245.48.63:22-139.178.68.195:51188.service: Deactivated successfully. Oct 9 03:33:48.509720 systemd[1]: session-103.scope: Deactivated successfully. Oct 9 03:33:48.510408 systemd-logind[1475]: Session 103 logged out. Waiting for processes to exit. Oct 9 03:33:48.511561 systemd-logind[1475]: Removed session 103. Oct 9 03:33:53.679690 systemd[1]: Started sshd@104-188.245.48.63:22-139.178.68.195:44716.service - OpenSSH per-connection server daemon (139.178.68.195:44716). Oct 9 03:33:53.686027 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Oct 9 03:33:53.717905 systemd-tmpfiles[8259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 03:33:53.719646 systemd-tmpfiles[8259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 03:33:53.721045 systemd-tmpfiles[8259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 03:33:53.721581 systemd-tmpfiles[8259]: ACLs are not supported, ignoring. Oct 9 03:33:53.721792 systemd-tmpfiles[8259]: ACLs are not supported, ignoring. Oct 9 03:33:53.727132 systemd-tmpfiles[8259]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 03:33:53.727208 systemd-tmpfiles[8259]: Skipping /boot Oct 9 03:33:53.737914 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Oct 9 03:33:53.738182 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. 
Oct 9 03:33:54.670281 sshd[8258]: Accepted publickey for core from 139.178.68.195 port 44716 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:33:54.672248 sshd[8258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:33:54.677278 systemd-logind[1475]: New session 104 of user core. Oct 9 03:33:54.682578 systemd[1]: Started session-104.scope - Session 104 of User core. Oct 9 03:33:55.411321 sshd[8258]: pam_unix(sshd:session): session closed for user core Oct 9 03:33:55.415358 systemd[1]: sshd@104-188.245.48.63:22-139.178.68.195:44716.service: Deactivated successfully. Oct 9 03:33:55.417810 systemd[1]: session-104.scope: Deactivated successfully. Oct 9 03:33:55.418670 systemd-logind[1475]: Session 104 logged out. Waiting for processes to exit. Oct 9 03:33:55.420107 systemd-logind[1475]: Removed session 104. Oct 9 03:34:00.603855 systemd[1]: Started sshd@105-188.245.48.63:22-139.178.68.195:44730.service - OpenSSH per-connection server daemon (139.178.68.195:44730). Oct 9 03:34:01.609273 sshd[8279]: Accepted publickey for core from 139.178.68.195 port 44730 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:01.611735 sshd[8279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:01.616318 systemd-logind[1475]: New session 105 of user core. Oct 9 03:34:01.619544 systemd[1]: Started session-105.scope - Session 105 of User core. Oct 9 03:34:02.369757 sshd[8279]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:02.372853 systemd[1]: sshd@105-188.245.48.63:22-139.178.68.195:44730.service: Deactivated successfully. Oct 9 03:34:02.375246 systemd[1]: session-105.scope: Deactivated successfully. Oct 9 03:34:02.377841 systemd-logind[1475]: Session 105 logged out. Waiting for processes to exit. Oct 9 03:34:02.378930 systemd-logind[1475]: Removed session 105. 
Oct 9 03:34:07.546704 systemd[1]: Started sshd@106-188.245.48.63:22-139.178.68.195:39956.service - OpenSSH per-connection server daemon (139.178.68.195:39956). Oct 9 03:34:08.552737 sshd[8339]: Accepted publickey for core from 139.178.68.195 port 39956 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:08.554598 sshd[8339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:08.559570 systemd-logind[1475]: New session 106 of user core. Oct 9 03:34:08.563591 systemd[1]: Started session-106.scope - Session 106 of User core. Oct 9 03:34:09.308020 sshd[8339]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:09.312010 systemd[1]: sshd@106-188.245.48.63:22-139.178.68.195:39956.service: Deactivated successfully. Oct 9 03:34:09.314483 systemd[1]: session-106.scope: Deactivated successfully. Oct 9 03:34:09.315180 systemd-logind[1475]: Session 106 logged out. Waiting for processes to exit. Oct 9 03:34:09.316329 systemd-logind[1475]: Removed session 106. Oct 9 03:34:14.484689 systemd[1]: Started sshd@107-188.245.48.63:22-139.178.68.195:53290.service - OpenSSH per-connection server daemon (139.178.68.195:53290). Oct 9 03:34:15.472954 sshd[8352]: Accepted publickey for core from 139.178.68.195 port 53290 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:15.474372 sshd[8352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:15.478140 systemd-logind[1475]: New session 107 of user core. Oct 9 03:34:15.483555 systemd[1]: Started session-107.scope - Session 107 of User core. Oct 9 03:34:16.212222 sshd[8352]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:16.215763 systemd-logind[1475]: Session 107 logged out. Waiting for processes to exit. Oct 9 03:34:16.216545 systemd[1]: sshd@107-188.245.48.63:22-139.178.68.195:53290.service: Deactivated successfully. 
Oct 9 03:34:16.218511 systemd[1]: session-107.scope: Deactivated successfully. Oct 9 03:34:16.219474 systemd-logind[1475]: Removed session 107. Oct 9 03:34:21.389346 systemd[1]: Started sshd@108-188.245.48.63:22-139.178.68.195:53354.service - OpenSSH per-connection server daemon (139.178.68.195:53354). Oct 9 03:34:22.379396 sshd[8371]: Accepted publickey for core from 139.178.68.195 port 53354 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:22.381144 sshd[8371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:22.385747 systemd-logind[1475]: New session 108 of user core. Oct 9 03:34:22.392581 systemd[1]: Started session-108.scope - Session 108 of User core. Oct 9 03:34:23.124504 sshd[8371]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:23.127408 systemd[1]: sshd@108-188.245.48.63:22-139.178.68.195:53354.service: Deactivated successfully. Oct 9 03:34:23.129604 systemd[1]: session-108.scope: Deactivated successfully. Oct 9 03:34:23.130927 systemd-logind[1475]: Session 108 logged out. Waiting for processes to exit. Oct 9 03:34:23.132454 systemd-logind[1475]: Removed session 108. Oct 9 03:34:28.301766 systemd[1]: Started sshd@109-188.245.48.63:22-139.178.68.195:53370.service - OpenSSH per-connection server daemon (139.178.68.195:53370). Oct 9 03:34:29.290446 sshd[8388]: Accepted publickey for core from 139.178.68.195 port 53370 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:29.292250 sshd[8388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:29.297487 systemd-logind[1475]: New session 109 of user core. Oct 9 03:34:29.301630 systemd[1]: Started session-109.scope - Session 109 of User core. Oct 9 03:34:30.043254 sshd[8388]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:30.046228 systemd[1]: sshd@109-188.245.48.63:22-139.178.68.195:53370.service: Deactivated successfully. 
Oct 9 03:34:30.048698 systemd[1]: session-109.scope: Deactivated successfully. Oct 9 03:34:30.050609 systemd-logind[1475]: Session 109 logged out. Waiting for processes to exit. Oct 9 03:34:30.051980 systemd-logind[1475]: Removed session 109. Oct 9 03:34:33.496912 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.JKhVbu.mount: Deactivated successfully. Oct 9 03:34:35.227326 systemd[1]: Started sshd@110-188.245.48.63:22-139.178.68.195:50112.service - OpenSSH per-connection server daemon (139.178.68.195:50112). Oct 9 03:34:36.227783 sshd[8443]: Accepted publickey for core from 139.178.68.195 port 50112 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:36.230410 sshd[8443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:36.235372 systemd-logind[1475]: New session 110 of user core. Oct 9 03:34:36.240554 systemd[1]: Started session-110.scope - Session 110 of User core. Oct 9 03:34:36.985025 sshd[8443]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:36.990334 systemd[1]: sshd@110-188.245.48.63:22-139.178.68.195:50112.service: Deactivated successfully. Oct 9 03:34:36.992505 systemd[1]: session-110.scope: Deactivated successfully. Oct 9 03:34:36.993320 systemd-logind[1475]: Session 110 logged out. Waiting for processes to exit. Oct 9 03:34:36.994523 systemd-logind[1475]: Removed session 110. Oct 9 03:34:39.218855 systemd[1]: Started sshd@111-188.245.48.63:22-119.147.211.61:49158.service - OpenSSH per-connection server daemon (119.147.211.61:49158). Oct 9 03:34:42.162709 systemd[1]: Started sshd@112-188.245.48.63:22-139.178.68.195:47112.service - OpenSSH per-connection server daemon (139.178.68.195:47112). 
Oct 9 03:34:43.150066 sshd[8463]: Accepted publickey for core from 139.178.68.195 port 47112 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:43.151877 sshd[8463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:43.156790 systemd-logind[1475]: New session 111 of user core. Oct 9 03:34:43.161610 systemd[1]: Started session-111.scope - Session 111 of User core. Oct 9 03:34:43.914993 sshd[8463]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:43.919863 systemd-logind[1475]: Session 111 logged out. Waiting for processes to exit. Oct 9 03:34:43.921003 systemd[1]: sshd@112-188.245.48.63:22-139.178.68.195:47112.service: Deactivated successfully. Oct 9 03:34:43.923734 systemd[1]: session-111.scope: Deactivated successfully. Oct 9 03:34:43.925196 systemd-logind[1475]: Removed session 111. Oct 9 03:34:49.085514 systemd[1]: Started sshd@113-188.245.48.63:22-139.178.68.195:47122.service - OpenSSH per-connection server daemon (139.178.68.195:47122). Oct 9 03:34:50.095618 sshd[8499]: Accepted publickey for core from 139.178.68.195 port 47122 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:50.097297 sshd[8499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:50.101944 systemd-logind[1475]: New session 112 of user core. Oct 9 03:34:50.106589 systemd[1]: Started session-112.scope - Session 112 of User core. Oct 9 03:34:50.855300 sshd[8499]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:50.859670 systemd[1]: sshd@113-188.245.48.63:22-139.178.68.195:47122.service: Deactivated successfully. Oct 9 03:34:50.863280 systemd[1]: session-112.scope: Deactivated successfully. Oct 9 03:34:50.867757 systemd-logind[1475]: Session 112 logged out. Waiting for processes to exit. Oct 9 03:34:50.869150 systemd-logind[1475]: Removed session 112. 
Oct 9 03:34:54.990637 sshd[8461]: banner exchange: Connection from 119.147.211.61 port 49158: invalid format Oct 9 03:34:54.991397 systemd[1]: sshd@111-188.245.48.63:22-119.147.211.61:49158.service: Deactivated successfully. Oct 9 03:34:56.032848 systemd[1]: Started sshd@114-188.245.48.63:22-139.178.68.195:46190.service - OpenSSH per-connection server daemon (139.178.68.195:46190). Oct 9 03:34:57.043247 sshd[8527]: Accepted publickey for core from 139.178.68.195 port 46190 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:34:57.045609 sshd[8527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:34:57.051002 systemd-logind[1475]: New session 113 of user core. Oct 9 03:34:57.059617 systemd[1]: Started session-113.scope - Session 113 of User core. Oct 9 03:34:57.838666 sshd[8527]: pam_unix(sshd:session): session closed for user core Oct 9 03:34:57.843380 systemd[1]: sshd@114-188.245.48.63:22-139.178.68.195:46190.service: Deactivated successfully. Oct 9 03:34:57.847175 systemd[1]: session-113.scope: Deactivated successfully. Oct 9 03:34:57.850353 systemd-logind[1475]: Session 113 logged out. Waiting for processes to exit. Oct 9 03:34:57.852995 systemd-logind[1475]: Removed session 113. Oct 9 03:35:03.009295 systemd[1]: Started sshd@115-188.245.48.63:22-139.178.68.195:37950.service - OpenSSH per-connection server daemon (139.178.68.195:37950). Oct 9 03:35:04.004496 sshd[8570]: Accepted publickey for core from 139.178.68.195 port 37950 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:35:04.007498 sshd[8570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:35:04.012106 systemd-logind[1475]: New session 114 of user core. Oct 9 03:35:04.015605 systemd[1]: Started session-114.scope - Session 114 of User core. 
Oct 9 03:35:04.829632 sshd[8570]: pam_unix(sshd:session): session closed for user core Oct 9 03:35:04.833892 systemd[1]: sshd@115-188.245.48.63:22-139.178.68.195:37950.service: Deactivated successfully. Oct 9 03:35:04.836405 systemd[1]: session-114.scope: Deactivated successfully. Oct 9 03:35:04.837179 systemd-logind[1475]: Session 114 logged out. Waiting for processes to exit. Oct 9 03:35:04.838698 systemd-logind[1475]: Removed session 114. Oct 9 03:35:10.005168 systemd[1]: Started sshd@116-188.245.48.63:22-139.178.68.195:37956.service - OpenSSH per-connection server daemon (139.178.68.195:37956). Oct 9 03:35:11.010821 sshd[8607]: Accepted publickey for core from 139.178.68.195 port 37956 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM Oct 9 03:35:11.014114 sshd[8607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 03:35:11.022909 systemd-logind[1475]: New session 115 of user core. Oct 9 03:35:11.028664 systemd[1]: Started session-115.scope - Session 115 of User core. Oct 9 03:35:11.781496 sshd[8607]: pam_unix(sshd:session): session closed for user core Oct 9 03:35:11.785255 systemd-logind[1475]: Session 115 logged out. Waiting for processes to exit. Oct 9 03:35:11.786370 systemd[1]: sshd@116-188.245.48.63:22-139.178.68.195:37956.service: Deactivated successfully. Oct 9 03:35:11.788772 systemd[1]: session-115.scope: Deactivated successfully. Oct 9 03:35:11.789940 systemd-logind[1475]: Removed session 115. Oct 9 03:35:12.013739 systemd[1]: Started sshd@117-188.245.48.63:22-119.147.211.61:60770.service - OpenSSH per-connection server daemon (119.147.211.61:60770). Oct 9 03:35:16.462652 sshd[8619]: Invalid user wqmarlduiqkmgs from 119.147.211.61 port 60770 Oct 9 03:35:16.464127 sshd[8619]: userauth_pubkey: parse publickey packet: incomplete message [preauth] Oct 9 03:35:16.467632 systemd[1]: sshd@117-188.245.48.63:22-119.147.211.61:60770.service: Deactivated successfully. 
Oct 9 03:35:16.956709 systemd[1]: Started sshd@118-188.245.48.63:22-139.178.68.195:39074.service - OpenSSH per-connection server daemon (139.178.68.195:39074).
Oct 9 03:35:17.943010 sshd[8628]: Accepted publickey for core from 139.178.68.195 port 39074 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:35:17.944915 sshd[8628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:35:17.952949 systemd-logind[1475]: New session 116 of user core.
Oct 9 03:35:17.956752 systemd[1]: Started session-116.scope - Session 116 of User core.
Oct 9 03:35:18.688591 sshd[8628]: pam_unix(sshd:session): session closed for user core
Oct 9 03:35:18.691555 systemd[1]: sshd@118-188.245.48.63:22-139.178.68.195:39074.service: Deactivated successfully.
Oct 9 03:35:18.693747 systemd[1]: session-116.scope: Deactivated successfully.
Oct 9 03:35:18.695775 systemd-logind[1475]: Session 116 logged out. Waiting for processes to exit.
Oct 9 03:35:18.697511 systemd-logind[1475]: Removed session 116.
Oct 9 03:35:23.865780 systemd[1]: Started sshd@119-188.245.48.63:22-139.178.68.195:37366.service - OpenSSH per-connection server daemon (139.178.68.195:37366).
Oct 9 03:35:24.860870 sshd[8658]: Accepted publickey for core from 139.178.68.195 port 37366 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:35:24.862539 sshd[8658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:35:24.867129 systemd-logind[1475]: New session 117 of user core.
Oct 9 03:35:24.872606 systemd[1]: Started session-117.scope - Session 117 of User core.
Oct 9 03:35:25.611181 sshd[8658]: pam_unix(sshd:session): session closed for user core
Oct 9 03:35:25.614797 systemd[1]: sshd@119-188.245.48.63:22-139.178.68.195:37366.service: Deactivated successfully.
Oct 9 03:35:25.617545 systemd[1]: session-117.scope: Deactivated successfully.
Oct 9 03:35:25.619619 systemd-logind[1475]: Session 117 logged out. Waiting for processes to exit.
Oct 9 03:35:25.621022 systemd-logind[1475]: Removed session 117.
Oct 9 03:35:30.785264 systemd[1]: Started sshd@120-188.245.48.63:22-139.178.68.195:48276.service - OpenSSH per-connection server daemon (139.178.68.195:48276).
Oct 9 03:35:31.792958 sshd[8699]: Accepted publickey for core from 139.178.68.195 port 48276 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:35:31.795560 sshd[8699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:35:31.800036 systemd-logind[1475]: New session 118 of user core.
Oct 9 03:35:31.805582 systemd[1]: Started session-118.scope - Session 118 of User core.
Oct 9 03:35:32.567656 sshd[8699]: pam_unix(sshd:session): session closed for user core
Oct 9 03:35:32.570817 systemd[1]: sshd@120-188.245.48.63:22-139.178.68.195:48276.service: Deactivated successfully.
Oct 9 03:35:32.572969 systemd[1]: session-118.scope: Deactivated successfully.
Oct 9 03:35:32.575209 systemd-logind[1475]: Session 118 logged out. Waiting for processes to exit.
Oct 9 03:35:32.576568 systemd-logind[1475]: Removed session 118.
Oct 9 03:35:33.497706 systemd[1]: run-containerd-runc-k8s.io-fc6a5caa96513c8779dd93448efaf3628e28a54cca307e48c5a10a1e1dbd1a0b-runc.YmIR4j.mount: Deactivated successfully.
Oct 9 03:35:37.753698 systemd[1]: Started sshd@121-188.245.48.63:22-139.178.68.195:48278.service - OpenSSH per-connection server daemon (139.178.68.195:48278).
Oct 9 03:35:38.783372 sshd[8734]: Accepted publickey for core from 139.178.68.195 port 48278 ssh2: RSA SHA256:V6JTcJTAskVQsa2wAfoRarEDk+Z9SU1xYe0wRyPv9ZM
Oct 9 03:35:38.785163 sshd[8734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 03:35:38.790102 systemd-logind[1475]: New session 119 of user core.
Oct 9 03:35:38.797595 systemd[1]: Started session-119.scope - Session 119 of User core.
Oct 9 03:35:39.574523 sshd[8734]: pam_unix(sshd:session): session closed for user core
Oct 9 03:35:39.578191 systemd-logind[1475]: Session 119 logged out. Waiting for processes to exit.
Oct 9 03:35:39.578922 systemd[1]: sshd@121-188.245.48.63:22-139.178.68.195:48278.service: Deactivated successfully.
Oct 9 03:35:39.581213 systemd[1]: session-119.scope: Deactivated successfully.
Oct 9 03:35:39.582056 systemd-logind[1475]: Removed session 119.
Oct 9 03:35:55.478850 systemd[1]: cri-containerd-73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e.scope: Deactivated successfully.
Oct 9 03:35:55.479497 systemd[1]: cri-containerd-73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e.scope: Consumed 11.181s CPU time, 27.2M memory peak, 0B memory swap peak.
Oct 9 03:35:55.640511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e-rootfs.mount: Deactivated successfully.
Oct 9 03:35:55.647346 containerd[1494]: time="2024-10-09T03:35:55.638756914Z" level=info msg="shim disconnected" id=73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e namespace=k8s.io
Oct 9 03:35:55.647346 containerd[1494]: time="2024-10-09T03:35:55.647340165Z" level=warning msg="cleaning up after shim disconnected" id=73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e namespace=k8s.io
Oct 9 03:35:55.648237 containerd[1494]: time="2024-10-09T03:35:55.647351576Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 03:35:55.719667 systemd[1]: cri-containerd-3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2.scope: Deactivated successfully.
Oct 9 03:35:55.720132 systemd[1]: cri-containerd-3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2.scope: Consumed 2.634s CPU time, 16.7M memory peak, 0B memory swap peak.
Oct 9 03:35:55.743485 kubelet[2750]: E1009 03:35:55.742690 2750 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57326->10.0.0.2:2379: read: connection timed out"
Oct 9 03:35:55.768938 containerd[1494]: time="2024-10-09T03:35:55.767551698Z" level=info msg="shim disconnected" id=3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2 namespace=k8s.io
Oct 9 03:35:55.768938 containerd[1494]: time="2024-10-09T03:35:55.767596213Z" level=warning msg="cleaning up after shim disconnected" id=3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2 namespace=k8s.io
Oct 9 03:35:55.768938 containerd[1494]: time="2024-10-09T03:35:55.767603627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 03:35:55.768773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2-rootfs.mount: Deactivated successfully.
Oct 9 03:35:55.852596 kubelet[2750]: I1009 03:35:55.852558 2750 scope.go:117] "RemoveContainer" containerID="3d303407858076086dc63f4c5bf3bdb3d263119841b7cf14d5ec077abbce16b2"
Oct 9 03:35:55.858800 kubelet[2750]: I1009 03:35:55.858760 2750 scope.go:117] "RemoveContainer" containerID="73a70486ce9c79d760d87dc6270c742aea9396bb00e2f6e52459e65630bd8f4e"
Oct 9 03:35:55.881950 containerd[1494]: time="2024-10-09T03:35:55.881916168Z" level=info msg="CreateContainer within sandbox \"fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 9 03:35:55.882763 containerd[1494]: time="2024-10-09T03:35:55.882728571Z" level=info msg="CreateContainer within sandbox \"f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 9 03:35:55.917568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825471693.mount: Deactivated successfully.
Oct 9 03:35:55.930230 containerd[1494]: time="2024-10-09T03:35:55.930177378Z" level=info msg="CreateContainer within sandbox \"fb83512069e4356f04b04790c89569e61250b013c4d9ff542ae39bfbff0ab1d8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a7f4b05c4477fa8924c8d26e7585dc9cc018ef4a0a6ed408bffe7e18ce43e392\""
Oct 9 03:35:55.932091 containerd[1494]: time="2024-10-09T03:35:55.931090933Z" level=info msg="CreateContainer within sandbox \"f5265afc65e65e507dbfc56b2dbc61aaaf1ec5b5feb7ccf19238b298ed7768e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"61c59a5140a9200bc71894fb8b7e21ed263fd19577266a32810faa8a5505c07c\""
Oct 9 03:35:55.933491 containerd[1494]: time="2024-10-09T03:35:55.933415406Z" level=info msg="StartContainer for \"61c59a5140a9200bc71894fb8b7e21ed263fd19577266a32810faa8a5505c07c\""
Oct 9 03:35:55.935211 containerd[1494]: time="2024-10-09T03:35:55.935193853Z" level=info msg="StartContainer for \"a7f4b05c4477fa8924c8d26e7585dc9cc018ef4a0a6ed408bffe7e18ce43e392\""
Oct 9 03:35:55.972606 systemd[1]: Started cri-containerd-61c59a5140a9200bc71894fb8b7e21ed263fd19577266a32810faa8a5505c07c.scope - libcontainer container 61c59a5140a9200bc71894fb8b7e21ed263fd19577266a32810faa8a5505c07c.
Oct 9 03:35:55.982604 systemd[1]: Started cri-containerd-a7f4b05c4477fa8924c8d26e7585dc9cc018ef4a0a6ed408bffe7e18ce43e392.scope - libcontainer container a7f4b05c4477fa8924c8d26e7585dc9cc018ef4a0a6ed408bffe7e18ce43e392.
Oct 9 03:35:56.039683 containerd[1494]: time="2024-10-09T03:35:56.039399632Z" level=info msg="StartContainer for \"a7f4b05c4477fa8924c8d26e7585dc9cc018ef4a0a6ed408bffe7e18ce43e392\" returns successfully"
Oct 9 03:35:56.053055 containerd[1494]: time="2024-10-09T03:35:56.052897874Z" level=info msg="StartContainer for \"61c59a5140a9200bc71894fb8b7e21ed263fd19577266a32810faa8a5505c07c\" returns successfully"
Oct 9 03:35:56.162424 systemd[1]: cri-containerd-5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8.scope: Deactivated successfully.
Oct 9 03:35:56.163037 systemd[1]: cri-containerd-5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8.scope: Consumed 8.908s CPU time.
Oct 9 03:35:56.193901 containerd[1494]: time="2024-10-09T03:35:56.193686333Z" level=info msg="shim disconnected" id=5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8 namespace=k8s.io
Oct 9 03:35:56.194009 containerd[1494]: time="2024-10-09T03:35:56.193901695Z" level=warning msg="cleaning up after shim disconnected" id=5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8 namespace=k8s.io
Oct 9 03:35:56.194009 containerd[1494]: time="2024-10-09T03:35:56.193915521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 03:35:56.223954 containerd[1494]: time="2024-10-09T03:35:56.223908261Z" level=warning msg="cleanup warnings time=\"2024-10-09T03:35:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 9 03:35:56.646564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452507735.mount: Deactivated successfully.
Oct 9 03:35:56.647044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8-rootfs.mount: Deactivated successfully.
Oct 9 03:35:56.868209 kubelet[2750]: I1009 03:35:56.868178 2750 scope.go:117] "RemoveContainer" containerID="5488cb7dfea1593f8d57b419694c343c0bb0e55a4d1da05230f02887572f76b8"
Oct 9 03:35:56.884312 containerd[1494]: time="2024-10-09T03:35:56.884260541Z" level=info msg="CreateContainer within sandbox \"4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 9 03:35:56.898896 containerd[1494]: time="2024-10-09T03:35:56.897564954Z" level=info msg="CreateContainer within sandbox \"4d7d0ab607685b7a6fd98ab210ccda7ee55d77e5529b12fe8d252ef7e3075512\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b460f4547f7c958675efa95e567aee891ef26afa8e9bb41ea7d81c393ed0ba24\""
Oct 9 03:35:56.898896 containerd[1494]: time="2024-10-09T03:35:56.897836433Z" level=info msg="StartContainer for \"b460f4547f7c958675efa95e567aee891ef26afa8e9bb41ea7d81c393ed0ba24\""
Oct 9 03:35:56.934548 systemd[1]: Started cri-containerd-b460f4547f7c958675efa95e567aee891ef26afa8e9bb41ea7d81c393ed0ba24.scope - libcontainer container b460f4547f7c958675efa95e567aee891ef26afa8e9bb41ea7d81c393ed0ba24.
Oct 9 03:35:56.964185 containerd[1494]: time="2024-10-09T03:35:56.964134625Z" level=info msg="StartContainer for \"b460f4547f7c958675efa95e567aee891ef26afa8e9bb41ea7d81c393ed0ba24\" returns successfully"
Oct 9 03:36:00.480017 kubelet[2750]: E1009 03:36:00.479970 2750 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57124->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4116-0-0-d-cd8c2d08d9.17fcab863745d7e7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4116-0-0-d-cd8c2d08d9,UID:bc7282938b0396fbde40e88ad9bc4683,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-d-cd8c2d08d9,},FirstTimestamp:2024-10-09 03:35:49.944719335 +0000 UTC m=+918.303630102,LastTimestamp:2024-10-09 03:35:49.944719335 +0000 UTC m=+918.303630102,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-d-cd8c2d08d9,}"