Oct 8 20:37:40.888311 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 20:37:40.888332 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:37:40.888340 kernel: BIOS-provided physical RAM map:
Oct 8 20:37:40.888346 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 8 20:37:40.888351 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 8 20:37:40.888356 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 8 20:37:40.888362 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Oct 8 20:37:40.888368 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Oct 8 20:37:40.888375 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 8 20:37:40.888380 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 8 20:37:40.888385 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 8 20:37:40.888390 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 8 20:37:40.888395 kernel: NX (Execute Disable) protection: active
Oct 8 20:37:40.888401 kernel: APIC: Static calls initialized
Oct 8 20:37:40.888409 kernel: SMBIOS 2.8 present.
Oct 8 20:37:40.888415 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Oct 8 20:37:40.888421 kernel: Hypervisor detected: KVM
Oct 8 20:37:40.888426 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 20:37:40.888431 kernel: kvm-clock: using sched offset of 2657832107 cycles
Oct 8 20:37:40.888437 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 20:37:40.888443 kernel: tsc: Detected 2445.406 MHz processor
Oct 8 20:37:40.888449 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 20:37:40.888455 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 20:37:40.888463 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Oct 8 20:37:40.888468 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 8 20:37:40.888474 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 20:37:40.888479 kernel: Using GB pages for direct mapping
Oct 8 20:37:40.888485 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:37:40.888490 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Oct 8 20:37:40.888496 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888502 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888507 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888515 kernel: ACPI: FACS 0x000000007CFE0000 000040
Oct 8 20:37:40.888520 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888526 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888532 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888537 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:37:40.888543 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Oct 8 20:37:40.888549 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Oct 8 20:37:40.888555 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Oct 8 20:37:40.888565 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Oct 8 20:37:40.888571 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Oct 8 20:37:40.888577 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Oct 8 20:37:40.888583 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Oct 8 20:37:40.888589 kernel: No NUMA configuration found
Oct 8 20:37:40.888595 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Oct 8 20:37:40.888602 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Oct 8 20:37:40.888608 kernel: Zone ranges:
Oct 8 20:37:40.888614 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 20:37:40.888633 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Oct 8 20:37:40.889086 kernel: Normal empty
Oct 8 20:37:40.889099 kernel: Movable zone start for each node
Oct 8 20:37:40.889110 kernel: Early memory node ranges
Oct 8 20:37:40.889119 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 8 20:37:40.889124 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Oct 8 20:37:40.889130 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Oct 8 20:37:40.889140 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 20:37:40.889146 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 8 20:37:40.889152 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 8 20:37:40.889158 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 20:37:40.889164 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 20:37:40.889170 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 20:37:40.889176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 20:37:40.889182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 20:37:40.889187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 20:37:40.889195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 20:37:40.889201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 20:37:40.889207 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 20:37:40.889213 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 20:37:40.889219 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 8 20:37:40.889225 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 20:37:40.889231 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 8 20:37:40.889237 kernel: Booting paravirtualized kernel on KVM
Oct 8 20:37:40.889243 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 20:37:40.889251 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 8 20:37:40.889257 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 8 20:37:40.889263 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 8 20:37:40.889268 kernel: pcpu-alloc: [0] 0 1
Oct 8 20:37:40.889274 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 8 20:37:40.889281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:37:40.889288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:37:40.889293 kernel: random: crng init done
Oct 8 20:37:40.889301 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 20:37:40.889307 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 8 20:37:40.889313 kernel: Fallback order for Node 0: 0
Oct 8 20:37:40.889319 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Oct 8 20:37:40.889325 kernel: Policy zone: DMA32
Oct 8 20:37:40.889330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:37:40.889337 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Oct 8 20:37:40.889343 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 20:37:40.889348 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 20:37:40.889356 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 20:37:40.889362 kernel: Dynamic Preempt: voluntary
Oct 8 20:37:40.889368 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:37:40.889374 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:37:40.889381 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 20:37:40.889387 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:37:40.889393 kernel: Rude variant of Tasks RCU enabled.
Oct 8 20:37:40.889399 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:37:40.889405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:37:40.889411 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 20:37:40.889419 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 8 20:37:40.889425 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:37:40.889431 kernel: Console: colour VGA+ 80x25
Oct 8 20:37:40.889437 kernel: printk: console [tty0] enabled
Oct 8 20:37:40.889442 kernel: printk: console [ttyS0] enabled
Oct 8 20:37:40.889449 kernel: ACPI: Core revision 20230628
Oct 8 20:37:40.889455 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 8 20:37:40.889460 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 20:37:40.889466 kernel: x2apic enabled
Oct 8 20:37:40.889474 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 20:37:40.889480 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 20:37:40.889486 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 20:37:40.889492 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct 8 20:37:40.889498 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 8 20:37:40.889503 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 8 20:37:40.889509 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 8 20:37:40.889516 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 20:37:40.889530 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 20:37:40.889536 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 20:37:40.889542 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 20:37:40.889551 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 8 20:37:40.889557 kernel: RETBleed: Mitigation: untrained return thunk
Oct 8 20:37:40.889563 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 8 20:37:40.889569 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 8 20:37:40.889575 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 8 20:37:40.889582 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 8 20:37:40.889588 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 8 20:37:40.889595 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 20:37:40.889603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 20:37:40.889609 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 20:37:40.889615 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 20:37:40.889653 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 8 20:37:40.889660 kernel: Freeing SMP alternatives memory: 32K
Oct 8 20:37:40.889669 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:37:40.889675 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 20:37:40.889681 kernel: landlock: Up and running.
Oct 8 20:37:40.889687 kernel: SELinux: Initializing.
Oct 8 20:37:40.889693 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 20:37:40.889699 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 8 20:37:40.889706 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 8 20:37:40.889712 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:37:40.889718 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:37:40.889726 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:37:40.889732 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 8 20:37:40.889739 kernel: ... version: 0
Oct 8 20:37:40.889745 kernel: ... bit width: 48
Oct 8 20:37:40.889751 kernel: ... generic registers: 6
Oct 8 20:37:40.889757 kernel: ... value mask: 0000ffffffffffff
Oct 8 20:37:40.889763 kernel: ... max period: 00007fffffffffff
Oct 8 20:37:40.889769 kernel: ... fixed-purpose events: 0
Oct 8 20:37:40.889775 kernel: ... event mask: 000000000000003f
Oct 8 20:37:40.889781 kernel: signal: max sigframe size: 1776
Oct 8 20:37:40.889790 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:37:40.889796 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:37:40.889802 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:37:40.889808 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 20:37:40.889814 kernel: .... node #0, CPUs: #1
Oct 8 20:37:40.889820 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 20:37:40.889826 kernel: smpboot: Max logical packages: 1
Oct 8 20:37:40.889833 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Oct 8 20:37:40.889839 kernel: devtmpfs: initialized
Oct 8 20:37:40.889847 kernel: x86/mm: Memory block size: 128MB
Oct 8 20:37:40.889853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:37:40.889859 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 20:37:40.889865 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:37:40.889872 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:37:40.889878 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:37:40.889884 kernel: audit: type=2000 audit(1728419860.152:1): state=initialized audit_enabled=0 res=1
Oct 8 20:37:40.889890 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:37:40.889896 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 20:37:40.889904 kernel: cpuidle: using governor menu
Oct 8 20:37:40.889911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:37:40.889917 kernel: dca service started, version 1.12.1
Oct 8 20:37:40.889923 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 8 20:37:40.889929 kernel: PCI: Using configuration type 1 for base access
Oct 8 20:37:40.889936 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 20:37:40.889942 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:37:40.889948 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:37:40.889954 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:37:40.889962 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:37:40.889968 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:37:40.889974 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:37:40.889981 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:37:40.889987 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:37:40.889993 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:37:40.889999 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 20:37:40.890005 kernel: ACPI: Interpreter enabled
Oct 8 20:37:40.890011 kernel: ACPI: PM: (supports S0 S5)
Oct 8 20:37:40.890019 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 20:37:40.890025 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 20:37:40.890032 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 20:37:40.890038 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 8 20:37:40.890044 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 20:37:40.890201 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 20:37:40.890317 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 8 20:37:40.890426 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 8 20:37:40.890435 kernel: PCI host bridge to bus 0000:00
Oct 8 20:37:40.890541 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 20:37:40.892727 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 20:37:40.892839 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 20:37:40.892937 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Oct 8 20:37:40.893031 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 8 20:37:40.893132 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 8 20:37:40.893226 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 20:37:40.893345 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 8 20:37:40.893461 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Oct 8 20:37:40.893594 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Oct 8 20:37:40.893729 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Oct 8 20:37:40.893836 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Oct 8 20:37:40.893944 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Oct 8 20:37:40.894047 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 20:37:40.894165 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.894269 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Oct 8 20:37:40.894383 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.894488 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Oct 8 20:37:40.894616 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.896371 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Oct 8 20:37:40.896490 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.896596 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Oct 8 20:37:40.898751 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.898864 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Oct 8 20:37:40.899020 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.899184 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Oct 8 20:37:40.899299 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.899402 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Oct 8 20:37:40.899512 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.899615 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Oct 8 20:37:40.899762 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 8 20:37:40.899868 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Oct 8 20:37:40.899983 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 8 20:37:40.900086 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 8 20:37:40.900219 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 8 20:37:40.900325 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Oct 8 20:37:40.900428 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Oct 8 20:37:40.900543 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 8 20:37:40.901735 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 8 20:37:40.901862 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 20:37:40.902006 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Oct 8 20:37:40.902122 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 8 20:37:40.902232 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Oct 8 20:37:40.902344 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 20:37:40.902448 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 8 20:37:40.902552 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 8 20:37:40.903708 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 8 20:37:40.903826 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Oct 8 20:37:40.903936 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 20:37:40.904039 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 8 20:37:40.904148 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 8 20:37:40.904284 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 8 20:37:40.904395 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Oct 8 20:37:40.904503 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Oct 8 20:37:40.904604 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 20:37:40.905757 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 8 20:37:40.905867 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 8 20:37:40.905987 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 8 20:37:40.906096 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 8 20:37:40.906200 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 20:37:40.906303 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 8 20:37:40.906405 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 8 20:37:40.906527 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 8 20:37:40.907733 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Oct 8 20:37:40.907849 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 20:37:40.907953 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 8 20:37:40.908055 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 8 20:37:40.908191 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 8 20:37:40.908311 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Oct 8 20:37:40.908418 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Oct 8 20:37:40.908523 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 20:37:40.909659 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 8 20:37:40.909769 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 8 20:37:40.909778 kernel: acpiphp: Slot [0] registered
Oct 8 20:37:40.909893 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 20:37:40.910001 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Oct 8 20:37:40.910110 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Oct 8 20:37:40.910216 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Oct 8 20:37:40.910321 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 20:37:40.910429 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 8 20:37:40.910532 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 8 20:37:40.910541 kernel: acpiphp: Slot [0-2] registered
Oct 8 20:37:40.910661 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 20:37:40.910767 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 8 20:37:40.910869 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 8 20:37:40.910878 kernel: acpiphp: Slot [0-3] registered
Oct 8 20:37:40.910986 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 20:37:40.911094 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 8 20:37:40.911198 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 8 20:37:40.911207 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 20:37:40.911213 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 20:37:40.911220 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 20:37:40.911226 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 20:37:40.911232 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 8 20:37:40.911238 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 8 20:37:40.911245 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 8 20:37:40.911254 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 8 20:37:40.911261 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 8 20:37:40.911267 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 8 20:37:40.911273 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 8 20:37:40.911280 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 8 20:37:40.911286 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 8 20:37:40.911292 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 8 20:37:40.911299 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 8 20:37:40.911305 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 8 20:37:40.911313 kernel: iommu: Default domain type: Translated
Oct 8 20:37:40.911320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 20:37:40.911326 kernel: PCI: Using ACPI for IRQ routing
Oct 8 20:37:40.911332 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 20:37:40.911339 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 8 20:37:40.911345 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Oct 8 20:37:40.911450 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 8 20:37:40.911553 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 8 20:37:40.913713 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 20:37:40.913746 kernel: vgaarb: loaded
Oct 8 20:37:40.913754 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 8 20:37:40.913761 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 8 20:37:40.913768 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 20:37:40.913774 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:37:40.913781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:37:40.913787 kernel: pnp: PnP ACPI init
Oct 8 20:37:40.913905 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 8 20:37:40.913920 kernel: pnp: PnP ACPI: found 5 devices
Oct 8 20:37:40.913927 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 20:37:40.913933 kernel: NET: Registered PF_INET protocol family
Oct 8 20:37:40.913940 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 20:37:40.913946 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 8 20:37:40.913953 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:37:40.913959 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 8 20:37:40.913965 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 8 20:37:40.913972 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 8 20:37:40.913980 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 20:37:40.913987 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 8 20:37:40.913993 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:37:40.913999 kernel: NET: Registered PF_XDP protocol family
Oct 8 20:37:40.914105 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 8 20:37:40.914210 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 8 20:37:40.914313 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 8 20:37:40.914422 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Oct 8 20:37:40.914526 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Oct 8 20:37:40.915664 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Oct 8 20:37:40.915835 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 20:37:40.916002 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 8 20:37:40.916193 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 8 20:37:40.916367 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 20:37:40.916559 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 8 20:37:40.918714 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 8 20:37:40.918828 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 20:37:40.918932 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 8 20:37:40.919034 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 8 20:37:40.919137 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 20:37:40.919260 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 8 20:37:40.919365 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 8 20:37:40.919473 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 20:37:40.919595 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 8 20:37:40.919717 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 8 20:37:40.919983 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 20:37:40.920325 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 8 20:37:40.920433 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 8 20:37:40.922762 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 20:37:40.922880 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Oct 8 20:37:40.923002 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 8 20:37:40.923125 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 8 20:37:40.923258 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 20:37:40.923385 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Oct 8 20:37:40.923490 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 8 20:37:40.923592 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 8 20:37:40.923735 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 20:37:40.923839 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Oct 8 20:37:40.923942 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 8 20:37:40.924051 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 8 20:37:40.924151 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 20:37:40.924268 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 20:37:40.924364 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 20:37:40.924464 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Oct 8 20:37:40.924560 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 8 20:37:40.924680 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 8 20:37:40.924793 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 8 20:37:40.924937 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 8 20:37:40.925063 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 8 20:37:40.925171 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 8 20:37:40.925278 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 8 20:37:40.925378 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 8 20:37:40.925485 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 8 20:37:40.925586 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 8 20:37:40.925750 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 8 20:37:40.925851 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 8 20:37:40.925962 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 8 20:37:40.926062 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 8 20:37:40.926167 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Oct 8 20:37:40.926266 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 8 20:37:40.926363 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 8 20:37:40.926469 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Oct 8 20:37:40.926573 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 8 20:37:40.926708 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 8 20:37:40.926819 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Oct 8 20:37:40.926919 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 8 20:37:40.927016 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 8 20:37:40.927027 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 8 20:37:40.927034 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:37:40.927045 kernel: Initialise system trusted keyrings
Oct 8 20:37:40.927051 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 8 20:37:40.927058 kernel: Key type asymmetric registered
Oct 8 20:37:40.927065 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:37:40.927072 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 20:37:40.927078 kernel: io scheduler mq-deadline registered
Oct 8 20:37:40.927085 kernel: io scheduler kyber registered
Oct 8 20:37:40.927092 kernel: io scheduler bfq registered
Oct 8 20:37:40.927214 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 8 20:37:40.927327 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 8 20:37:40.927430 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 8 20:37:40.927534 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 8 20:37:40.927702 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 8 20:37:40.927810 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 8 20:37:40.927912 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 8 20:37:40.928014 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 8 20:37:40.928118 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 8 20:37:40.928241 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 8 20:37:40.928358 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 8 20:37:40.928460 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 8 20:37:40.928564 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 8 20:37:40.928694 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 8 20:37:40.928801 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 8 20:37:40.928904 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 8 20:37:40.928913 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 8 20:37:40.929014 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 8 20:37:40.929123 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 8 20:37:40.929132 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 20:37:40.929139 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 8 20:37:40.929146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:37:40.929152 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 20:37:40.929159 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 20:37:40.929166 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 20:37:40.929172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 20:37:40.929279 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 8 20:37:40.929292 kernel: input: AT Translated Set 2 keyboard as
/devices/platform/i8042/serio0/input/input0 Oct 8 20:37:40.929389 kernel: rtc_cmos 00:03: registered as rtc0 Oct 8 20:37:40.929491 kernel: rtc_cmos 00:03: setting system clock to 2024-10-08T20:37:40 UTC (1728419860) Oct 8 20:37:40.929588 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 20:37:40.929597 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 20:37:40.929604 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:37:40.929610 kernel: Segment Routing with IPv6 Oct 8 20:37:40.929617 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:37:40.929678 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:37:40.929685 kernel: Key type dns_resolver registered Oct 8 20:37:40.929691 kernel: IPI shorthand broadcast: enabled Oct 8 20:37:40.929698 kernel: sched_clock: Marking stable (1127010862, 135414356)->(1273073663, -10648445) Oct 8 20:37:40.929704 kernel: registered taskstats version 1 Oct 8 20:37:40.929711 kernel: Loading compiled-in X.509 certificates Oct 8 20:37:40.929718 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:37:40.929725 kernel: Key type .fscrypt registered Oct 8 20:37:40.929731 kernel: Key type fscrypt-provisioning registered Oct 8 20:37:40.929740 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 8 20:37:40.929747 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:37:40.929753 kernel: ima: No architecture policies found
Oct 8 20:37:40.929760 kernel: clk: Disabling unused clocks
Oct 8 20:37:40.929767 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 20:37:40.929773 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 20:37:40.929780 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 20:37:40.929786 kernel: Run /init as init process
Oct 8 20:37:40.929795 kernel: with arguments:
Oct 8 20:37:40.929802 kernel: /init
Oct 8 20:37:40.929808 kernel: with environment:
Oct 8 20:37:40.929815 kernel: HOME=/
Oct 8 20:37:40.929821 kernel: TERM=linux
Oct 8 20:37:40.929827 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:37:40.929837 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:37:40.929846 systemd[1]: Detected virtualization kvm.
Oct 8 20:37:40.929855 systemd[1]: Detected architecture x86-64.
Oct 8 20:37:40.929862 systemd[1]: Running in initrd.
Oct 8 20:37:40.929869 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:37:40.929875 systemd[1]: Hostname set to .
Oct 8 20:37:40.929883 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:37:40.929889 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:37:40.929897 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:37:40.929904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:37:40.929913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:37:40.929920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:37:40.929927 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:37:40.929935 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:37:40.929943 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:37:40.929950 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:37:40.929957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:37:40.929967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:37:40.929974 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:37:40.929981 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:37:40.929988 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:37:40.929994 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:37:40.930001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:37:40.930008 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:37:40.930015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:37:40.930022 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:37:40.930032 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:37:40.930039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:37:40.930045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:37:40.930053 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:37:40.930060 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:37:40.930067 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:37:40.930074 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:37:40.930081 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:37:40.930090 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:37:40.930097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:37:40.930104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:37:40.930111 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:37:40.930141 systemd-journald[187]: Collecting audit messages is disabled.
Oct 8 20:37:40.930161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:37:40.930168 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:37:40.930176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:37:40.930183 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:37:40.930192 kernel: Bridge firewalling registered
Oct 8 20:37:40.930199 systemd-journald[187]: Journal started
Oct 8 20:37:40.930215 systemd-journald[187]: Runtime Journal (/run/log/journal/b5a29682dae146dbad6b1fb1c5728a1c) is 4.8M, max 38.4M, 33.6M free.
Oct 8 20:37:40.898025 systemd-modules-load[188]: Inserted module 'overlay'
Oct 8 20:37:40.925005 systemd-modules-load[188]: Inserted module 'br_netfilter'
Oct 8 20:37:40.955652 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:37:40.956247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:37:40.957865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:40.959200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:37:40.966806 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:37:40.968646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:37:40.970775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:37:40.980824 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:37:40.991289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:37:40.995437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:37:41.001765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:37:41.002586 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:37:41.005370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:37:41.012786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:37:41.024147 dracut-cmdline[224]: dracut-dracut-053
Oct 8 20:37:41.026374 systemd-resolved[220]: Positive Trust Anchors:
Oct 8 20:37:41.026392 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:37:41.028363 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 20:37:41.026417 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:37:41.028511 systemd-resolved[220]: Defaulting to hostname 'linux'.
Oct 8 20:37:41.029899 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:37:41.033741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:37:41.091650 kernel: SCSI subsystem initialized
Oct 8 20:37:41.100642 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:37:41.109655 kernel: iscsi: registered transport (tcp)
Oct 8 20:37:41.127652 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:37:41.127691 kernel: QLogic iSCSI HBA Driver
Oct 8 20:37:41.163545 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:37:41.169740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:37:41.190994 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:37:41.191029 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:37:41.192593 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:37:41.230651 kernel: raid6: avx2x4 gen() 35244 MB/s
Oct 8 20:37:41.247645 kernel: raid6: avx2x2 gen() 30748 MB/s
Oct 8 20:37:41.264705 kernel: raid6: avx2x1 gen() 26219 MB/s
Oct 8 20:37:41.264752 kernel: raid6: using algorithm avx2x4 gen() 35244 MB/s
Oct 8 20:37:41.282800 kernel: raid6: .... xor() 4344 MB/s, rmw enabled
Oct 8 20:37:41.282841 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 20:37:41.301667 kernel: xor: automatically using best checksumming function avx
Oct 8 20:37:41.419653 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:37:41.430741 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:37:41.438739 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:37:41.452913 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Oct 8 20:37:41.456735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:37:41.464797 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 20:37:41.475897 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Oct 8 20:37:41.502003 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:37:41.508750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:37:41.571187 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:37:41.576748 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 20:37:41.590455 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:37:41.592268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:37:41.593911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:37:41.595155 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:37:41.601784 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 20:37:41.611480 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:37:41.720647 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 20:37:41.725867 kernel: ACPI: bus type USB registered
Oct 8 20:37:41.725897 kernel: usbcore: registered new interface driver usbfs
Oct 8 20:37:41.730685 kernel: scsi host0: Virtio SCSI HBA
Oct 8 20:37:41.735937 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 20:37:41.739902 kernel: AES CTR mode by8 optimization enabled
Oct 8 20:37:41.741638 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Oct 8 20:37:41.741681 kernel: libata version 3.00 loaded.
Oct 8 20:37:41.745696 kernel: usbcore: registered new interface driver hub
Oct 8 20:37:41.749689 kernel: usbcore: registered new device driver usb
Oct 8 20:37:41.760810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:37:41.760938 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:37:41.762692 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:37:41.763712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:37:41.764727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:41.766648 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:37:41.773883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:37:41.796640 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 20:37:41.798643 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 20:37:41.801685 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 20:37:41.801842 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 20:37:41.805507 kernel: scsi host1: ahci
Oct 8 20:37:41.805690 kernel: scsi host2: ahci
Oct 8 20:37:41.805828 kernel: scsi host3: ahci
Oct 8 20:37:41.808748 kernel: scsi host4: ahci
Oct 8 20:37:41.810693 kernel: sd 0:0:0:0: Power-on or device reset occurred
Oct 8 20:37:41.810866 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Oct 8 20:37:41.811004 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 8 20:37:41.811132 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Oct 8 20:37:41.811263 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 8 20:37:41.811391 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 20:37:41.811527 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Oct 8 20:37:41.811677 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Oct 8 20:37:41.813639 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 20:37:41.813790 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Oct 8 20:37:41.813921 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Oct 8 20:37:41.814047 kernel: hub 1-0:1.0: USB hub found
Oct 8 20:37:41.814198 kernel: hub 1-0:1.0: 4 ports detected
Oct 8 20:37:41.814331 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 20:37:41.814341 kernel: GPT:17805311 != 80003071
Oct 8 20:37:41.814349 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 20:37:41.814357 kernel: GPT:17805311 != 80003071
Oct 8 20:37:41.814365 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 20:37:41.814373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:37:41.814382 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 8 20:37:41.814525 kernel: scsi host5: ahci
Oct 8 20:37:41.816662 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Oct 8 20:37:41.816816 kernel: hub 2-0:1.0: USB hub found
Oct 8 20:37:41.816953 kernel: hub 2-0:1.0: 4 ports detected
Oct 8 20:37:41.825809 kernel: scsi host6: ahci
Oct 8 20:37:41.826072 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Oct 8 20:37:41.826090 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Oct 8 20:37:41.826105 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Oct 8 20:37:41.826127 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Oct 8 20:37:41.826141 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Oct 8 20:37:41.826155 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Oct 8 20:37:41.863877 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Oct 8 20:37:41.895698 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (466)
Oct 8 20:37:41.895718 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (451)
Oct 8 20:37:41.893399 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:41.902376 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Oct 8 20:37:41.910097 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Oct 8 20:37:41.911446 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Oct 8 20:37:41.916053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 20:37:41.921804 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 20:37:41.924748 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:37:41.928301 disk-uuid[566]: Primary Header is updated.
Oct 8 20:37:41.928301 disk-uuid[566]: Secondary Entries is updated.
Oct 8 20:37:41.928301 disk-uuid[566]: Secondary Header is updated.
Oct 8 20:37:41.933649 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:37:41.938789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:37:41.952370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:37:42.055839 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Oct 8 20:37:42.142283 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 20:37:42.142344 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 20:37:42.142357 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct 8 20:37:42.142367 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 20:37:42.142386 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 20:37:42.142395 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 20:37:42.143653 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 20:37:42.144718 kernel: ata1.00: applying bridge limits
Oct 8 20:37:42.146134 kernel: ata1.00: configured for UDMA/100
Oct 8 20:37:42.148660 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 20:37:42.186740 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 20:37:42.187001 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 20:37:42.193666 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 20:37:42.197796 kernel: usbcore: registered new interface driver usbhid
Oct 8 20:37:42.197823 kernel: usbhid: USB HID core driver
Oct 8 20:37:42.197848 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Oct 8 20:37:42.203150 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Oct 8 20:37:42.203178 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Oct 8 20:37:42.942773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 20:37:42.944283 disk-uuid[567]: The operation has completed successfully.
Oct 8 20:37:43.002207 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 20:37:43.002352 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 20:37:43.016766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 20:37:43.020387 sh[593]: Success
Oct 8 20:37:43.032776 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 20:37:43.075085 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 20:37:43.086741 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 20:37:43.087437 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 20:37:43.103874 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 20:37:43.103909 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:37:43.106502 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 20:37:43.106517 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 20:37:43.108685 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 20:37:43.114639 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 20:37:43.116499 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 20:37:43.117508 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 20:37:43.128774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 20:37:43.131787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 20:37:43.144087 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:37:43.144123 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:37:43.145713 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:37:43.153924 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 20:37:43.153963 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:37:43.168238 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:37:43.167887 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 20:37:43.174829 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 20:37:43.183782 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 20:37:43.233750 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:37:43.240796 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:37:43.266046 ignition[699]: Ignition 2.19.0
Oct 8 20:37:43.266056 ignition[699]: Stage: fetch-offline
Oct 8 20:37:43.266087 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:43.266096 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:43.266189 ignition[699]: parsed url from cmdline: ""
Oct 8 20:37:43.266192 ignition[699]: no config URL provided
Oct 8 20:37:43.266197 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:37:43.268862 systemd-networkd[774]: lo: Link UP
Oct 8 20:37:43.266206 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:37:43.268867 systemd-networkd[774]: lo: Gained carrier
Oct 8 20:37:43.266210 ignition[699]: failed to fetch config: resource requires networking
Oct 8 20:37:43.270325 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:37:43.266716 ignition[699]: Ignition finished successfully
Oct 8 20:37:43.273885 systemd-networkd[774]: Enumeration completed
Oct 8 20:37:43.274207 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:37:43.275243 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:43.275248 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:37:43.275639 systemd[1]: Reached target network.target - Network.
Oct 8 20:37:43.277321 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:43.277325 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:37:43.278433 systemd-networkd[774]: eth0: Link UP
Oct 8 20:37:43.278436 systemd-networkd[774]: eth0: Gained carrier
Oct 8 20:37:43.278444 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:43.281423 systemd-networkd[774]: eth1: Link UP
Oct 8 20:37:43.281427 systemd-networkd[774]: eth1: Gained carrier
Oct 8 20:37:43.281433 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:43.281812 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 20:37:43.294757 ignition[782]: Ignition 2.19.0
Oct 8 20:37:43.294767 ignition[782]: Stage: fetch
Oct 8 20:37:43.294894 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:43.294904 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:43.294994 ignition[782]: parsed url from cmdline: ""
Oct 8 20:37:43.295001 ignition[782]: no config URL provided
Oct 8 20:37:43.295005 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 20:37:43.295014 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Oct 8 20:37:43.295029 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Oct 8 20:37:43.295158 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 8 20:37:43.318656 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:37:43.397701 systemd-networkd[774]: eth0: DHCPv4 address 91.107.220.127/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 20:37:43.496276 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Oct 8 20:37:43.500306 ignition[782]: GET result: OK
Oct 8 20:37:43.500388 ignition[782]: parsing config with SHA512: 1e540c408ac56a0f87f16a69b74f09be4fd4be0f9de794a0e826e1a6b968cdcdd86fb01b266cf682cefe3df54afdacda98a479e5c0b3f10225ad2bbb385f1839
Oct 8 20:37:43.504049 unknown[782]: fetched base config from "system"
Oct 8 20:37:43.504603 unknown[782]: fetched base config from "system"
Oct 8 20:37:43.504611 unknown[782]: fetched user config from "hetzner"
Oct 8 20:37:43.504960 ignition[782]: fetch: fetch complete
Oct 8 20:37:43.504966 ignition[782]: fetch: fetch passed
Oct 8 20:37:43.505014 ignition[782]: Ignition finished successfully
Oct 8 20:37:43.508227 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 20:37:43.513774 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 20:37:43.530521 ignition[790]: Ignition 2.19.0
Oct 8 20:37:43.530546 ignition[790]: Stage: kargs
Oct 8 20:37:43.531338 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:43.531354 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:43.532193 ignition[790]: kargs: kargs passed
Oct 8 20:37:43.533739 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 20:37:43.532240 ignition[790]: Ignition finished successfully
Oct 8 20:37:43.541816 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 20:37:43.553792 ignition[796]: Ignition 2.19.0
Oct 8 20:37:43.553804 ignition[796]: Stage: disks
Oct 8 20:37:43.553970 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:43.553982 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:43.555021 ignition[796]: disks: disks passed
Oct 8 20:37:43.555069 ignition[796]: Ignition finished successfully
Oct 8 20:37:43.556884 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 20:37:43.558253 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 20:37:43.559618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:37:43.560267 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:37:43.561329 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:37:43.562205 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:37:43.567726 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 20:37:43.580678 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 8 20:37:43.582737 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 20:37:43.591718 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 20:37:43.669665 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 20:37:43.669798 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 20:37:43.670735 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:37:43.676681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:37:43.678711 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 20:37:43.682842 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 8 20:37:43.683444 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 20:37:43.683472 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:37:43.689871 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 20:37:43.697643 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
Oct 8 20:37:43.697663 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:37:43.697673 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:37:43.697682 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:37:43.699908 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 20:37:43.699931 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:37:43.705802 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 20:37:43.709410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:37:43.746756 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 20:37:43.750907 coreos-metadata[814]: Oct 08 20:37:43.750 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 8 20:37:43.753638 coreos-metadata[814]: Oct 08 20:37:43.752 INFO Fetch successful
Oct 8 20:37:43.753638 coreos-metadata[814]: Oct 08 20:37:43.752 INFO wrote hostname ci-4081-1-0-a-d0274495d1 to /sysroot/etc/hostname
Oct 8 20:37:43.755728 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Oct 8 20:37:43.754406 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 20:37:43.759594 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 20:37:43.763075 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 20:37:43.840483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 20:37:43.848725 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 20:37:43.852492 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 20:37:43.860643 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:37:43.876400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 20:37:43.880571 ignition[929]: INFO : Ignition 2.19.0
Oct 8 20:37:43.881316 ignition[929]: INFO : Stage: mount
Oct 8 20:37:43.881749 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:43.881749 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:43.882886 ignition[929]: INFO : mount: mount passed
Oct 8 20:37:43.882886 ignition[929]: INFO : Ignition finished successfully
Oct 8 20:37:43.883436 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 20:37:43.890698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 20:37:44.102937 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 20:37:44.105799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:37:44.119646 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
Oct 8 20:37:44.122995 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 20:37:44.123030 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 20:37:44.125516 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:37:44.130486 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 20:37:44.130517 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:37:44.133114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:37:44.158017 ignition[957]: INFO : Ignition 2.19.0
Oct 8 20:37:44.158017 ignition[957]: INFO : Stage: files
Oct 8 20:37:44.159301 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:44.159301 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:44.159301 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 20:37:44.161711 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 20:37:44.161711 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 20:37:44.163912 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 20:37:44.165055 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 20:37:44.165055 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 20:37:44.164447 unknown[957]: wrote ssh authorized keys file for user: core
Oct 8 20:37:44.167406 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 20:37:44.167406 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 20:37:44.280213 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 20:37:44.322754 systemd-networkd[774]: eth0: Gained IPv6LL
Oct 8 20:37:44.386708 systemd-networkd[774]: eth1: Gained IPv6LL
Oct 8 20:37:44.553557 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 20:37:44.553557 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 20:37:44.555740 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Oct 8 20:37:45.097950 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 20:37:45.365379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 8 20:37:45.365379 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:37:45.367790 ignition[957]: INFO : files: files passed
Oct 8 20:37:45.367790 ignition[957]: INFO : Ignition finished successfully
Oct 8 20:37:45.368914 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 20:37:45.378379 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 20:37:45.382747 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 20:37:45.384190 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 20:37:45.384798 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 20:37:45.396400 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:37:45.396400 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:37:45.398981 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:37:45.400201 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:37:45.401198 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 20:37:45.406849 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 20:37:45.435760 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 20:37:45.435889 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 20:37:45.437688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 20:37:45.438833 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 20:37:45.439984 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 20:37:45.446789 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 20:37:45.458922 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:37:45.464765 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 20:37:45.473047 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:37:45.474235 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:37:45.474847 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 20:37:45.475883 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 20:37:45.475998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:37:45.477127 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 20:37:45.477790 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 20:37:45.478719 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 20:37:45.479655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:37:45.480576 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 20:37:45.481636 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 20:37:45.482681 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:37:45.483793 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 20:37:45.484722 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 20:37:45.485757 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 20:37:45.486751 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 20:37:45.486847 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:37:45.487960 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:37:45.488598 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:37:45.489668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 20:37:45.489770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:37:45.490819 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 20:37:45.490910 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:37:45.492312 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 20:37:45.492413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:37:45.493093 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 20:37:45.493224 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 20:37:45.493957 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 8 20:37:45.494087 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 20:37:45.505031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 20:37:45.505543 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 20:37:45.505711 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:37:45.508786 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 20:37:45.509236 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 20:37:45.509374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:37:45.509978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 20:37:45.510110 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:37:45.516739 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 20:37:45.516832 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 20:37:45.527498 ignition[1010]: INFO : Ignition 2.19.0
Oct 8 20:37:45.532949 ignition[1010]: INFO : Stage: umount
Oct 8 20:37:45.532949 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:37:45.532949 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:37:45.532949 ignition[1010]: INFO : umount: umount passed
Oct 8 20:37:45.532949 ignition[1010]: INFO : Ignition finished successfully
Oct 8 20:37:45.534146 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 20:37:45.535868 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 20:37:45.535981 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 20:37:45.538304 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 20:37:45.538435 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 20:37:45.540605 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 20:37:45.540759 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 20:37:45.541793 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 20:37:45.541844 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 20:37:45.542750 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 20:37:45.542798 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 20:37:45.543716 systemd[1]: Stopped target network.target - Network.
Oct 8 20:37:45.544592 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 20:37:45.544662 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:37:45.545538 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 20:37:45.546500 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 20:37:45.550671 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:37:45.551217 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 20:37:45.552337 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 20:37:45.553274 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 20:37:45.553335 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:37:45.554211 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 20:37:45.554251 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:37:45.555098 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 20:37:45.555144 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 20:37:45.556054 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 20:37:45.556100 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 20:37:45.556984 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 20:37:45.557033 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 20:37:45.558056 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 20:37:45.559059 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 20:37:45.561681 systemd-networkd[774]: eth1: DHCPv6 lease lost
Oct 8 20:37:45.564883 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 20:37:45.565012 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 20:37:45.565722 systemd-networkd[774]: eth0: DHCPv6 lease lost
Oct 8 20:37:45.569123 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 20:37:45.569250 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 20:37:45.571315 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 20:37:45.571393 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:37:45.577707 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 20:37:45.578147 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 20:37:45.578202 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:37:45.578758 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 20:37:45.578820 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:37:45.579338 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 20:37:45.579383 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:37:45.580012 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 20:37:45.580056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:37:45.581237 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:37:45.594804 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 20:37:45.594917 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 20:37:45.595858 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 20:37:45.596009 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:37:45.597520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 20:37:45.597582 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:37:45.599210 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 20:37:45.599247 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:37:45.600676 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 20:37:45.600724 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:37:45.602602 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 20:37:45.602694 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:37:45.604002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:37:45.604050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:37:45.610998 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 20:37:45.611471 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 20:37:45.611520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:37:45.612086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:37:45.612132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:45.621601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 20:37:45.621718 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 20:37:45.623405 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 20:37:45.641867 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 20:37:45.648769 systemd[1]: Switching root.
Oct 8 20:37:45.674806 systemd-journald[187]: Journal stopped
Oct 8 20:37:46.687143 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Oct 8 20:37:46.687227 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 20:37:46.687253 kernel: SELinux: policy capability open_perms=1
Oct 8 20:37:46.687272 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 20:37:46.687282 kernel: SELinux: policy capability always_check_network=0
Oct 8 20:37:46.687291 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 20:37:46.687306 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 20:37:46.687315 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 20:37:46.687325 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 20:37:46.687334 kernel: audit: type=1403 audit(1728419865.811:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 20:37:46.687354 systemd[1]: Successfully loaded SELinux policy in 39.830ms.
Oct 8 20:37:46.687368 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.592ms.
Oct 8 20:37:46.687380 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:37:46.687391 systemd[1]: Detected virtualization kvm.
Oct 8 20:37:46.687402 systemd[1]: Detected architecture x86-64.
Oct 8 20:37:46.687412 systemd[1]: Detected first boot.
Oct 8 20:37:46.687422 systemd[1]: Hostname set to .
Oct 8 20:37:46.687436 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:37:46.687452 zram_generator::config[1053]: No configuration found.
Oct 8 20:37:46.687465 systemd[1]: Populated /etc with preset unit settings.
Oct 8 20:37:46.687481 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 20:37:46.687492 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 20:37:46.687503 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 20:37:46.687519 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 20:37:46.687530 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 20:37:46.687544 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 20:37:46.687560 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 20:37:46.687572 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 20:37:46.687582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 20:37:46.687592 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 20:37:46.687604 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 20:37:46.687614 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:37:46.687646 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:37:46.687657 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 20:37:46.687668 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 20:37:46.687681 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 20:37:46.687699 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:37:46.687710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 20:37:46.687720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:37:46.687731 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 20:37:46.687741 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 20:37:46.687779 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:37:46.687791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 20:37:46.687802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:37:46.687812 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:37:46.687822 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:37:46.687833 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:37:46.687843 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 20:37:46.687853 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 20:37:46.687864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:37:46.687876 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:37:46.687888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:37:46.687899 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 20:37:46.687909 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 20:37:46.687919 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 20:37:46.687930 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 20:37:46.687941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:46.687956 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 20:37:46.687976 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 20:37:46.687993 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 20:37:46.688007 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 20:37:46.688019 systemd[1]: Reached target machines.target - Containers.
Oct 8 20:37:46.688030 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 20:37:46.688040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:37:46.688059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:37:46.688070 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 20:37:46.688080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:37:46.688090 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:37:46.688100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:37:46.688111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 20:37:46.688121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:37:46.688132 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 20:37:46.688143 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 20:37:46.688155 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 20:37:46.688166 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 20:37:46.688176 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 20:37:46.688186 kernel: fuse: init (API version 7.39)
Oct 8 20:37:46.688197 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:37:46.688206 kernel: loop: module loaded
Oct 8 20:37:46.688216 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:37:46.688226 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 20:37:46.688237 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 20:37:46.688250 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:37:46.688260 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 20:37:46.688270 systemd[1]: Stopped verity-setup.service.
Oct 8 20:37:46.688285 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:46.688295 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 20:37:46.688305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 20:37:46.688315 kernel: ACPI: bus type drm_connector registered
Oct 8 20:37:46.688325 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 20:37:46.688337 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 20:37:46.688369 systemd-journald[1131]: Collecting audit messages is disabled.
Oct 8 20:37:46.688390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 20:37:46.688403 systemd-journald[1131]: Journal started
Oct 8 20:37:46.688426 systemd-journald[1131]: Runtime Journal (/run/log/journal/b5a29682dae146dbad6b1fb1c5728a1c) is 4.8M, max 38.4M, 33.6M free.
Oct 8 20:37:46.385438 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 20:37:46.411121 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 8 20:37:46.411804 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 20:37:46.691966 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 20:37:46.691999 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:37:46.694011 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 20:37:46.694842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:37:46.695642 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 20:37:46.695864 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 20:37:46.696587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:37:46.696755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:37:46.697450 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:37:46.697593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:37:46.698578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:37:46.698755 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:37:46.699574 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 20:37:46.699928 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 20:37:46.700990 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:37:46.701157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:37:46.702049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:37:46.702927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 20:37:46.703742 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 20:37:46.718789 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 20:37:46.726710 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 20:37:46.731821 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 20:37:46.733842 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 20:37:46.733941 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:37:46.735371 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 20:37:46.737589 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 20:37:46.740847 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 20:37:46.743069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:37:46.747911 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 20:37:46.750182 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:37:46.751148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:37:46.757802 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:37:46.760718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:37:46.761753 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:37:46.764219 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:37:46.767744 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:37:46.771161 systemd-journald[1131]: Time spent on flushing to /var/log/journal/b5a29682dae146dbad6b1fb1c5728a1c is 55.728ms for 1130 entries.
Oct 8 20:37:46.771161 systemd-journald[1131]: System Journal (/var/log/journal/b5a29682dae146dbad6b1fb1c5728a1c) is 8.0M, max 584.8M, 576.8M free.
Oct 8 20:37:46.860230 systemd-journald[1131]: Received client request to flush runtime journal.
Oct 8 20:37:46.860308 kernel: loop0: detected capacity change from 0 to 205544
Oct 8 20:37:46.776905 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:37:46.778042 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:37:46.780135 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:37:46.811004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:37:46.812295 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:37:46.825841 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:37:46.862236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:37:46.878664 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:37:46.881036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:37:46.886108 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:37:46.902816 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:37:46.908029 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:37:46.916793 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:37:46.918657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:37:46.929890 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:37:46.941664 kernel: loop1: detected capacity change from 0 to 8
Oct 8 20:37:46.952989 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 8 20:37:46.953348 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 20:37:46.953838 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Oct 8 20:37:46.963912 kernel: loop2: detected capacity change from 0 to 142488
Oct 8 20:37:46.964211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:37:47.013978 kernel: loop3: detected capacity change from 0 to 140768
Oct 8 20:37:47.055364 kernel: loop4: detected capacity change from 0 to 205544
Oct 8 20:37:47.075356 kernel: loop5: detected capacity change from 0 to 8
Oct 8 20:37:47.083654 kernel: loop6: detected capacity change from 0 to 142488
Oct 8 20:37:47.105655 kernel: loop7: detected capacity change from 0 to 140768
Oct 8 20:37:47.129130 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 8 20:37:47.129721 (sd-merge)[1198]: Merged extensions into '/usr'.
Oct 8 20:37:47.136458 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:37:47.136556 systemd[1]: Reloading...
Oct 8 20:37:47.230533 zram_generator::config[1224]: No configuration found.
Oct 8 20:37:47.317225 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:37:47.349490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:37:47.391835 systemd[1]: Reloading finished in 254 ms.
Oct 8 20:37:47.415751 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:37:47.416840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:37:47.427916 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:37:47.430062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:37:47.440760 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:37:47.440773 systemd[1]: Reloading...
Oct 8 20:37:47.469212 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:37:47.469533 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:37:47.470421 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:37:47.470732 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Oct 8 20:37:47.470982 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Oct 8 20:37:47.474228 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:37:47.474309 systemd-tmpfiles[1268]: Skipping /boot
Oct 8 20:37:47.487490 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:37:47.487501 systemd-tmpfiles[1268]: Skipping /boot
Oct 8 20:37:47.521650 zram_generator::config[1291]: No configuration found.
Oct 8 20:37:47.622745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:37:47.664953 systemd[1]: Reloading finished in 223 ms.
Oct 8 20:37:47.681299 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:37:47.690106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:37:47.698052 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:37:47.700793 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:37:47.705031 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:37:47.714820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:37:47.719843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:37:47.727859 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:37:47.738936 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:37:47.743955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:47.744166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:37:47.752861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:37:47.765697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:37:47.770959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:37:47.771610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:37:47.771797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:47.773937 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:37:47.793986 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:37:47.794991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:37:47.795160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:37:47.796244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:37:47.796401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:37:47.806836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:47.807329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:37:47.815998 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:37:47.820287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:37:47.821865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:37:47.822036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:47.823010 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:37:47.823974 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:37:47.829925 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Oct 8 20:37:47.830270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:47.830451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:37:47.838369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:37:47.839823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:37:47.839958 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:47.840390 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:37:47.842879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:37:47.843081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:37:47.857578 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:37:47.864641 augenrules[1378]: No rules
Oct 8 20:37:47.867543 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 20:37:47.869370 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:37:47.870268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:37:47.870434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:37:47.873084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:37:47.873259 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:37:47.876238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:37:47.876319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:37:47.881126 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:37:47.882701 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:37:47.892693 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:37:47.894187 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:37:47.902606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:37:47.908889 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:37:47.954169 systemd-resolved[1344]: Positive Trust Anchors:
Oct 8 20:37:47.954185 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:37:47.954212 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:37:47.966329 systemd-resolved[1344]: Using system hostname 'ci-4081-1-0-a-d0274495d1'.
Oct 8 20:37:47.977758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:37:47.979857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:37:47.993293 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 20:37:47.998362 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1393)
Oct 8 20:37:48.022381 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 20:37:48.023825 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:37:48.033209 systemd-networkd[1392]: lo: Link UP
Oct 8 20:37:48.033661 systemd-networkd[1392]: lo: Gained carrier
Oct 8 20:37:48.035739 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1393)
Oct 8 20:37:48.038163 systemd-networkd[1392]: Enumeration completed
Oct 8 20:37:48.039026 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:37:48.039594 systemd[1]: Reached target network.target - Network.
Oct 8 20:37:48.041606 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:48.041730 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:37:48.045124 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:48.045205 systemd-networkd[1392]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:37:48.048388 systemd-networkd[1392]: eth0: Link UP
Oct 8 20:37:48.048402 systemd-networkd[1392]: eth0: Gained carrier
Oct 8 20:37:48.048439 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:48.048916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:37:48.053997 systemd-networkd[1392]: eth1: Link UP
Oct 8 20:37:48.054017 systemd-networkd[1392]: eth1: Gained carrier
Oct 8 20:37:48.054084 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:48.066441 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:48.077925 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:37:48.092721 systemd-networkd[1392]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:37:48.094282 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Oct 8 20:37:48.105700 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 8 20:37:48.110676 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1398)
Oct 8 20:37:48.119704 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 20:37:48.124680 kernel: ACPI: button: Power Button [PWRF]
Oct 8 20:37:48.166856 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Oct 8 20:37:48.166923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:48.167030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:37:48.178104 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 20:37:48.190929 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 20:37:48.191124 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 20:37:48.179830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:37:48.183302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:37:48.184755 systemd-networkd[1392]: eth0: DHCPv4 address 91.107.220.127/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 20:37:48.185745 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Oct 8 20:37:48.196654 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 8 20:37:48.195892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:37:48.196539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:37:48.196573 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:37:48.196589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 20:37:48.198330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:37:48.198524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:37:48.204862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:37:48.205215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:37:48.206162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:37:48.206320 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:37:48.214991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 20:37:48.227818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:37:48.228358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:37:48.228401 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:37:48.241916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:37:48.251927 kernel: EDAC MC: Ver: 3.0.0
Oct 8 20:37:48.255946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:37:48.267760 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct 8 20:37:48.267807 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct 8 20:37:48.272664 kernel: Console: switching to colour dummy device 80x25
Oct 8 20:37:48.272707 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 20:37:48.272724 kernel: [drm] features: -context_init
Oct 8 20:37:48.274657 kernel: [drm] number of scanouts: 1
Oct 8 20:37:48.276655 kernel: [drm] number of cap sets: 0
Oct 8 20:37:48.279695 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 8 20:37:48.284805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:37:48.285073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:48.288657 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 8 20:37:48.294639 kernel: Console: switching to colour frame buffer device 160x50
Oct 8 20:37:48.295908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:37:48.302677 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 20:37:48.309548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:37:48.309898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:48.316846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:37:48.370269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:37:48.424896 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:37:48.431811 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:37:48.449566 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:37:48.485781 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:37:48.486197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:37:48.486310 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:37:48.486518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:37:48.486862 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:37:48.488972 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:37:48.489215 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:37:48.489293 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:37:48.489362 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:37:48.489385 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:37:48.489441 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:37:48.490044 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:37:48.492085 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:37:48.497914 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:37:48.500125 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:37:48.502367 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 20:37:48.502531 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:37:48.502639 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:37:48.503334 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:37:48.503389 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:37:48.505761 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 20:37:48.509303 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:37:48.514794 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 20:37:48.520823 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 20:37:48.529498 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 20:37:48.537177 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 20:37:48.539754 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 20:37:48.542817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 20:37:48.552386 jq[1464]: false
Oct 8 20:37:48.556723 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 20:37:48.559787 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Oct 8 20:37:48.563985 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 20:37:48.567451 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 20:37:48.569932 coreos-metadata[1462]: Oct 08 20:37:48.569 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 8 20:37:48.572915 coreos-metadata[1462]: Oct 08 20:37:48.570 INFO Fetch successful
Oct 8 20:37:48.572915 coreos-metadata[1462]: Oct 08 20:37:48.572 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 8 20:37:48.574068 coreos-metadata[1462]: Oct 08 20:37:48.573 INFO Fetch successful
Oct 8 20:37:48.580137 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 20:37:48.583119 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 20:37:48.583603 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 20:37:48.584616 dbus-daemon[1463]: [system] SELinux support is enabled
Oct 8 20:37:48.589634 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 20:37:48.593775 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 20:37:48.595213 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 20:37:48.601833 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:37:48.609198 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found loop4
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found loop5
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found loop6
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found loop7
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda1
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda2
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda3
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found usr
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda4
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda6
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda7
Oct 8 20:37:48.610747 extend-filesystems[1467]: Found sda9
Oct 8 20:37:48.610747 extend-filesystems[1467]: Checking size of /dev/sda9
Oct 8 20:37:48.710573 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 8 20:37:48.610973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 20:37:48.729981 jq[1478]: true
Oct 8 20:37:48.730079 extend-filesystems[1467]: Resized partition /dev/sda9
Oct 8 20:37:48.626720 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 20:37:48.735447 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024)
Oct 8 20:37:48.626759 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 20:37:48.641885 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 20:37:48.641910 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 20:37:48.696415 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 20:37:48.754399 tar[1481]: linux-amd64/helm
Oct 8 20:37:48.696613 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 20:37:48.773060 update_engine[1475]: I20241008 20:37:48.766808 1475 main.cc:92] Flatcar Update Engine starting
Oct 8 20:37:48.721005 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 20:37:48.773581 jq[1499]: true
Oct 8 20:37:48.729121 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 20:37:48.729712 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 20:37:48.764815 systemd-logind[1474]: New seat seat0.
Oct 8 20:37:48.779259 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 8 20:37:48.779285 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 20:37:48.779502 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 20:37:48.783041 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 20:37:48.788664 update_engine[1475]: I20241008 20:37:48.788122 1475 update_check_scheduler.cc:74] Next update check in 4m5s
Oct 8 20:37:48.800946 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 20:37:48.840101 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 8 20:37:48.851980 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 20:37:48.859425 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 20:37:48.861900 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 8 20:37:48.861900 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 8 20:37:48.861900 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 8 20:37:48.871278 extend-filesystems[1467]: Resized filesystem in /dev/sda9
Oct 8 20:37:48.871278 extend-filesystems[1467]: Found sr0
Oct 8 20:37:48.862532 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 20:37:48.864879 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 20:37:48.888682 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1401)
Oct 8 20:37:48.978031 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:37:48.980080 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 20:37:48.998240 systemd[1]: Starting sshkeys.service...
Oct 8 20:37:49.035185 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 20:37:49.046530 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 20:37:49.091458 coreos-metadata[1545]: Oct 08 20:37:49.091 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 8 20:37:49.092831 coreos-metadata[1545]: Oct 08 20:37:49.092 INFO Fetch successful
Oct 8 20:37:49.095175 unknown[1545]: wrote ssh authorized keys file for user: core
Oct 8 20:37:49.105996 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 20:37:49.131864 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:37:49.132786 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 8 20:37:49.140184 systemd[1]: Finished sshkeys.service.
Oct 8 20:37:49.145640 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 20:37:49.170708 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 20:37:49.181944 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 20:37:49.182343 containerd[1500]: time="2024-10-08T20:37:49.182272234Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 20:37:49.196119 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 20:37:49.196356 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 20:37:49.208849 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 20:37:49.215411 containerd[1500]: time="2024-10-08T20:37:49.215373272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.220813713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.220850021Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.220865841Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221050979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221072499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221135808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221147590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221324562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221338428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221350680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:37:49.222599 containerd[1500]: time="2024-10-08T20:37:49.221364446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.223145 containerd[1500]: time="2024-10-08T20:37:49.221459014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.223145 containerd[1500]: time="2024-10-08T20:37:49.221757143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 20:37:49.223145 containerd[1500]: time="2024-10-08T20:37:49.221896223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 20:37:49.223145 containerd[1500]: time="2024-10-08T20:37:49.221910961Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 20:37:49.223145 containerd[1500]: time="2024-10-08T20:37:49.222015798Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 20:37:49.223145 containerd[1500]: time="2024-10-08T20:37:49.222076712Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 20:37:49.227495 containerd[1500]: time="2024-10-08T20:37:49.227464364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 20:37:49.227544 containerd[1500]: time="2024-10-08T20:37:49.227523325Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 20:37:49.227544 containerd[1500]: time="2024-10-08T20:37:49.227541620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 20:37:49.227607 containerd[1500]: time="2024-10-08T20:37:49.227555906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 20:37:49.227686 containerd[1500]: time="2024-10-08T20:37:49.227655422Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 20:37:49.227829 containerd[1500]: time="2024-10-08T20:37:49.227804863Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.228880159Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229002749Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229019370Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229031533Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229043646Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229084702Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229101114Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229137842Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229151889Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229163801Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229174591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229184449Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229203616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.231510 containerd[1500]: time="2024-10-08T20:37:49.229214727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.229130 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229225667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229238050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229247909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229259270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229268878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229283446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229294226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229308923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229318932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229328210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229338278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229352224Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229375277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229385296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.233986 containerd[1500]: time="2024-10-08T20:37:49.229395465Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229799002Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229818188Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229830561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229903137Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229919818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229940828Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229950536Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 20:37:49.234523 containerd[1500]: time="2024-10-08T20:37:49.229959884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.230214962Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.230264565Z" level=info msg="Connect containerd service"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.230298338Z" level=info msg="using legacy CRI server"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.230305001Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.230384900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.231761653Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.232566722Z" level=info msg="Start subscribing containerd event"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.232605044Z" level=info msg="Start recovering state"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.232683010Z" level=info msg="Start event monitor"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.232709399Z" level=info msg="Start snapshots syncer"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.232720220Z" level=info msg="Start cni network conf syncer for default"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.232728896Z" level=info msg="Start streaming server"
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.233156077Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.233221209Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 20:37:49.234701 containerd[1500]: time="2024-10-08T20:37:49.233320265Z" level=info msg="containerd successfully booted in 0.053507s"
Oct 8 20:37:49.240547 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 20:37:49.246089 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 20:37:49.246958 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 20:37:49.249273 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 20:37:49.422456 tar[1481]: linux-amd64/LICENSE
Oct 8 20:37:49.422757 tar[1481]: linux-amd64/README.md
Oct 8 20:37:49.434022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 20:37:49.442754 systemd-networkd[1392]: eth1: Gained IPv6LL
Oct 8 20:37:49.443287 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Oct 8 20:37:49.445847 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 20:37:49.447706 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 20:37:49.462142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:37:49.467061 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 20:37:49.494488 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 20:37:49.506890 systemd-networkd[1392]: eth0: Gained IPv6LL
Oct 8 20:37:49.507344 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Oct 8 20:37:50.135049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:37:50.140409 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 20:37:50.141370 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:37:50.146383 systemd[1]: Startup finished in 1.254s (kernel) + 5.115s (initrd) + 4.373s (userspace) = 10.742s.
Oct 8 20:37:50.591328 kubelet[1593]: E1008 20:37:50.591216 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:37:50.594493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:37:50.594712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:38:00.845152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 20:38:00.851067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:38:01.006679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:38:01.017933 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:38:01.058170 kubelet[1612]: E1008 20:38:01.058099 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:38:01.064338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:38:01.064591 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:38:11.315059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 20:38:11.327819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:38:11.470541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:38:11.474573 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:38:11.512503 kubelet[1628]: E1008 20:38:11.512439 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:38:11.516200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:38:11.516395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:38:20.522621 systemd-timesyncd[1386]: Contacted time server 194.50.19.117:123 (2.flatcar.pool.ntp.org).
Oct 8 20:38:20.522680 systemd-timesyncd[1386]: Initial clock synchronization to Tue 2024-10-08 20:38:20.522459 UTC.
Oct 8 20:38:20.522801 systemd-resolved[1344]: Clock change detected. Flushing caches.
Oct 8 20:38:22.339574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 20:38:22.344901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:38:22.482447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:38:22.493102 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:38:22.530990 kubelet[1643]: E1008 20:38:22.530896 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:38:22.534518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:38:22.534743 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:38:32.589456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 8 20:38:32.594932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:38:32.715321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:38:32.719560 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:38:32.755377 kubelet[1658]: E1008 20:38:32.755290 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:38:32.758594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:38:32.758886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:38:35.095685 update_engine[1475]: I20241008 20:38:35.095578 1475 update_attempter.cc:509] Updating boot flags...
Oct 8 20:38:35.140759 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1674)
Oct 8 20:38:35.194090 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1678)
Oct 8 20:38:35.245760 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1678)
Oct 8 20:38:42.839510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 8 20:38:42.845081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:38:42.969314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:38:42.973650 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:38:43.007703 kubelet[1694]: E1008 20:38:43.007617 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:38:43.010472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:38:43.010672 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:38:53.089546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Oct 8 20:38:53.094880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:38:53.235170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:38:53.238802 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:38:53.267015 kubelet[1709]: E1008 20:38:53.266955 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:38:53.269858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:38:53.270091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:03.339645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Oct 8 20:39:03.344928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:39:03.483913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:39:03.497127 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:39:03.538200 kubelet[1724]: E1008 20:39:03.538113 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:39:03.541274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:39:03.541497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:13.589605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Oct 8 20:39:13.594910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:39:13.730850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:39:13.734872 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:39:13.771187 kubelet[1740]: E1008 20:39:13.771124 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:39:13.775081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:39:13.775274 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:23.839489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Oct 8 20:39:23.844880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:39:23.970796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:39:23.974833 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:39:24.004514 kubelet[1755]: E1008 20:39:24.004416 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:39:24.008032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:39:24.008230 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:34.089676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Oct 8 20:39:34.094907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:39:34.222491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:39:34.237119 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:39:34.274179 kubelet[1770]: E1008 20:39:34.274113 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:39:34.277690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:39:34.277903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:44.339614 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Oct 8 20:39:44.347903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:39:44.472576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:39:44.476433 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:39:44.504799 kubelet[1785]: E1008 20:39:44.504742 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:39:44.508155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:39:44.508347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:51.143938 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 20:39:51.153023 systemd[1]: Started sshd@0-91.107.220.127:22-147.75.109.163:60884.service - OpenSSH per-connection server daemon (147.75.109.163:60884).
Oct 8 20:39:52.124827 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 60884 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:39:52.127118 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:39:52.137501 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 20:39:52.155302 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 20:39:52.159648 systemd-logind[1474]: New session 1 of user core.
Oct 8 20:39:52.167869 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
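The entries above show systemd re-launching kubelet roughly every ten seconds, with the restart counter climbing (8, 9, 10, 11, ...) because /var/lib/kubelet/config.yaml does not exist yet. When auditing a loop like this, the counter values can be pulled straight out of the journal text. A minimal sketch, assuming plain journal lines as input; the helper name and regex here are illustrative, not part of any tooling in the log:

```python
import re

# Matches the systemd message "... Scheduled restart job, restart counter is at N."
COUNTER_RE = re.compile(r"restart counter is at (\d+)\.")

def restart_counters(lines):
    """Return restart-counter values found in an iterable of journal lines."""
    return [int(m.group(1)) for line in lines if (m := COUNTER_RE.search(line))]

sample = [
    "Oct 8 20:39:13.589605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.",
    "Oct 8 20:39:23.839489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.",
]
print(restart_counters(sample))  # [8, 9]
```

A strictly increasing counter with no resets, as seen here, indicates the unit never stayed up long enough for systemd to consider it recovered.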
Oct 8 20:39:52.178113 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 20:39:52.181522 (systemd)[1798]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:39:52.282438 systemd[1798]: Queued start job for default target default.target.
Oct 8 20:39:52.288984 systemd[1798]: Created slice app.slice - User Application Slice.
Oct 8 20:39:52.289010 systemd[1798]: Reached target paths.target - Paths.
Oct 8 20:39:52.289023 systemd[1798]: Reached target timers.target - Timers.
Oct 8 20:39:52.290522 systemd[1798]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 20:39:52.306396 systemd[1798]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 20:39:52.306528 systemd[1798]: Reached target sockets.target - Sockets.
Oct 8 20:39:52.306543 systemd[1798]: Reached target basic.target - Basic System.
Oct 8 20:39:52.306583 systemd[1798]: Reached target default.target - Main User Target.
Oct 8 20:39:52.306616 systemd[1798]: Startup finished in 119ms.
Oct 8 20:39:52.306765 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 20:39:52.313854 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 20:39:53.000993 systemd[1]: Started sshd@1-91.107.220.127:22-147.75.109.163:60900.service - OpenSSH per-connection server daemon (147.75.109.163:60900).
Oct 8 20:39:53.991059 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 60900 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:39:53.992843 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:39:53.997753 systemd-logind[1474]: New session 2 of user core.
Oct 8 20:39:54.007886 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 20:39:54.510607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Oct 8 20:39:54.516174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:39:54.646873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:39:54.648112 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:39:54.670751 sshd[1809]: pam_unix(sshd:session): session closed for user core
Oct 8 20:39:54.676197 systemd[1]: sshd@1-91.107.220.127:22-147.75.109.163:60900.service: Deactivated successfully.
Oct 8 20:39:54.678901 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 20:39:54.680072 kubelet[1821]: E1008 20:39:54.679771 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:39:54.680359 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit.
Oct 8 20:39:54.681587 systemd-logind[1474]: Removed session 2.
Oct 8 20:39:54.682169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:39:54.682335 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:39:54.844014 systemd[1]: Started sshd@2-91.107.220.127:22-147.75.109.163:60916.service - OpenSSH per-connection server daemon (147.75.109.163:60916).
Oct 8 20:39:55.806572 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 60916 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:39:55.808492 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:39:55.813545 systemd-logind[1474]: New session 3 of user core.
Oct 8 20:39:55.824890 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 20:39:56.478445 sshd[1831]: pam_unix(sshd:session): session closed for user core
Oct 8 20:39:56.483179 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit.
Oct 8 20:39:56.483621 systemd[1]: sshd@2-91.107.220.127:22-147.75.109.163:60916.service: Deactivated successfully.
Oct 8 20:39:56.486019 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 20:39:56.486986 systemd-logind[1474]: Removed session 3.
Oct 8 20:39:56.653260 systemd[1]: Started sshd@3-91.107.220.127:22-147.75.109.163:60922.service - OpenSSH per-connection server daemon (147.75.109.163:60922).
Oct 8 20:39:57.640525 sshd[1838]: Accepted publickey for core from 147.75.109.163 port 60922 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:39:57.642200 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:39:57.647593 systemd-logind[1474]: New session 4 of user core.
Oct 8 20:39:57.656949 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 20:39:58.329926 sshd[1838]: pam_unix(sshd:session): session closed for user core
Oct 8 20:39:58.335158 systemd[1]: sshd@3-91.107.220.127:22-147.75.109.163:60922.service: Deactivated successfully.
Oct 8 20:39:58.337735 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 20:39:58.338427 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit.
Oct 8 20:39:58.339625 systemd-logind[1474]: Removed session 4.
Oct 8 20:39:58.490870 systemd[1]: Started sshd@4-91.107.220.127:22-147.75.109.163:56722.service - OpenSSH per-connection server daemon (147.75.109.163:56722).
Oct 8 20:39:59.443327 sshd[1845]: Accepted publickey for core from 147.75.109.163 port 56722 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:39:59.445218 sshd[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:39:59.450453 systemd-logind[1474]: New session 5 of user core.
Oct 8 20:39:59.459875 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 20:39:59.961658 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 20:39:59.962077 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:39:59.979915 sudo[1848]: pam_unix(sudo:session): session closed for user root
Oct 8 20:40:00.135645 sshd[1845]: pam_unix(sshd:session): session closed for user core
Oct 8 20:40:00.139358 systemd[1]: sshd@4-91.107.220.127:22-147.75.109.163:56722.service: Deactivated successfully.
Oct 8 20:40:00.141992 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 20:40:00.143650 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit.
Oct 8 20:40:00.145244 systemd-logind[1474]: Removed session 5.
Oct 8 20:40:00.305963 systemd[1]: Started sshd@5-91.107.220.127:22-147.75.109.163:56730.service - OpenSSH per-connection server daemon (147.75.109.163:56730).
Oct 8 20:40:01.269108 sshd[1853]: Accepted publickey for core from 147.75.109.163 port 56730 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:40:01.271185 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:40:01.276649 systemd-logind[1474]: New session 6 of user core.
Oct 8 20:40:01.286915 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 20:40:01.782081 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 20:40:01.782508 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:40:01.787522 sudo[1857]: pam_unix(sudo:session): session closed for user root
Oct 8 20:40:01.795623 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 20:40:01.796169 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:40:01.814952 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 20:40:01.817250 auditctl[1860]: No rules
Oct 8 20:40:01.817836 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 20:40:01.818161 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 20:40:01.825184 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:40:01.850746 augenrules[1878]: No rules
Oct 8 20:40:01.851684 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:40:01.853322 sudo[1856]: pam_unix(sudo:session): session closed for user root
Oct 8 20:40:02.009272 sshd[1853]: pam_unix(sshd:session): session closed for user core
Oct 8 20:40:02.012687 systemd[1]: sshd@5-91.107.220.127:22-147.75.109.163:56730.service: Deactivated successfully.
Oct 8 20:40:02.015321 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 20:40:02.016434 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit.
Oct 8 20:40:02.017952 systemd-logind[1474]: Removed session 6.
Oct 8 20:40:02.187961 systemd[1]: Started sshd@6-91.107.220.127:22-147.75.109.163:56740.service - OpenSSH per-connection server daemon (147.75.109.163:56740).
Oct 8 20:40:03.181922 sshd[1886]: Accepted publickey for core from 147.75.109.163 port 56740 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:40:03.183521 sshd[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:40:03.188269 systemd-logind[1474]: New session 7 of user core.
Oct 8 20:40:03.197865 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 20:40:03.714603 sudo[1889]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 20:40:03.715008 sudo[1889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 20:40:03.958131 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 20:40:03.958660 (dockerd)[1905]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 20:40:04.201179 dockerd[1905]: time="2024-10-08T20:40:04.200924757Z" level=info msg="Starting up"
Oct 8 20:40:04.295815 dockerd[1905]: time="2024-10-08T20:40:04.295587489Z" level=info msg="Loading containers: start."
Oct 8 20:40:04.402882 kernel: Initializing XFRM netlink socket
Oct 8 20:40:04.486348 systemd-networkd[1392]: docker0: Link UP
Oct 8 20:40:04.503182 dockerd[1905]: time="2024-10-08T20:40:04.503127630Z" level=info msg="Loading containers: done."
Oct 8 20:40:04.520207 dockerd[1905]: time="2024-10-08T20:40:04.520153858Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 20:40:04.520368 dockerd[1905]: time="2024-10-08T20:40:04.520291363Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 8 20:40:04.520512 dockerd[1905]: time="2024-10-08T20:40:04.520481093Z" level=info msg="Daemon has completed initialization"
Oct 8 20:40:04.550519 dockerd[1905]: time="2024-10-08T20:40:04.550362391Z" level=info msg="API listen on /run/docker.sock"
Oct 8 20:40:04.550446 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 20:40:04.839555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Oct 8 20:40:04.845586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:40:04.974870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:40:04.979275 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:40:05.011138 kubelet[2050]: E1008 20:40:05.011096 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:40:05.013987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:40:05.014169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:40:05.131465 containerd[1500]: time="2024-10-08T20:40:05.131218894Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\""
Oct 8 20:40:05.712206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524368244.mount: Deactivated successfully.
Oct 8 20:40:06.544851 containerd[1500]: time="2024-10-08T20:40:06.544798826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:06.545849 containerd[1500]: time="2024-10-08T20:40:06.545735574Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066713"
Oct 8 20:40:06.546495 containerd[1500]: time="2024-10-08T20:40:06.546452996Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:06.548929 containerd[1500]: time="2024-10-08T20:40:06.548896279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:06.550208 containerd[1500]: time="2024-10-08T20:40:06.549782006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 1.418522733s"
Oct 8 20:40:06.550208 containerd[1500]: time="2024-10-08T20:40:06.549812707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\""
Oct 8 20:40:06.551350 containerd[1500]: time="2024-10-08T20:40:06.551318542Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\""
Oct 8 20:40:07.710010 containerd[1500]: time="2024-10-08T20:40:07.709941200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:07.710945 containerd[1500]: time="2024-10-08T20:40:07.710903496Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690942"
Oct 8 20:40:07.711601 containerd[1500]: time="2024-10-08T20:40:07.711542970Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:07.714081 containerd[1500]: time="2024-10-08T20:40:07.714023821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:07.716669 containerd[1500]: time="2024-10-08T20:40:07.716635273Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.165289367s"
Oct 8 20:40:07.718375 containerd[1500]: time="2024-10-08T20:40:07.716771543Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\""
Oct 8 20:40:07.718638 containerd[1500]: time="2024-10-08T20:40:07.718618251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\""
Oct 8 20:40:08.725953 containerd[1500]: time="2024-10-08T20:40:08.725885091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:08.727044 containerd[1500]: time="2024-10-08T20:40:08.726838646Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646778"
Oct 8 20:40:08.727821 containerd[1500]: time="2024-10-08T20:40:08.727769688Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:08.730225 containerd[1500]: time="2024-10-08T20:40:08.730190792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:08.731321 containerd[1500]: time="2024-10-08T20:40:08.731198587Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.012502631s"
Oct 8 20:40:08.731321 containerd[1500]: time="2024-10-08T20:40:08.731224749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\""
Oct 8 20:40:08.731839 containerd[1500]: time="2024-10-08T20:40:08.731777809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\""
Oct 8 20:40:09.758813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120186905.mount: Deactivated successfully.
Oct 8 20:40:10.059804 containerd[1500]: time="2024-10-08T20:40:10.057782899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:10.061213 containerd[1500]: time="2024-10-08T20:40:10.061166106Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208907"
Oct 8 20:40:10.062031 containerd[1500]: time="2024-10-08T20:40:10.061985932Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:10.063582 containerd[1500]: time="2024-10-08T20:40:10.063547027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:10.064222 containerd[1500]: time="2024-10-08T20:40:10.064110194Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 1.3321943s"
Oct 8 20:40:10.064222 containerd[1500]: time="2024-10-08T20:40:10.064136836Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\""
Oct 8 20:40:10.064738 containerd[1500]: time="2024-10-08T20:40:10.064691736Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 20:40:10.615892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735069612.mount: Deactivated successfully.
Oct 8 20:40:11.226136 containerd[1500]: time="2024-10-08T20:40:11.226080701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:11.227458 containerd[1500]: time="2024-10-08T20:40:11.227423870Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Oct 8 20:40:11.228168 containerd[1500]: time="2024-10-08T20:40:11.228124106Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:11.230552 containerd[1500]: time="2024-10-08T20:40:11.230518076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:11.231898 containerd[1500]: time="2024-10-08T20:40:11.231569427Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.166836258s"
Oct 8 20:40:11.231898 containerd[1500]: time="2024-10-08T20:40:11.231603746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 8 20:40:11.232300 containerd[1500]: time="2024-10-08T20:40:11.232111219Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 8 20:40:11.761216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123940785.mount: Deactivated successfully.
Oct 8 20:40:11.765342 containerd[1500]: time="2024-10-08T20:40:11.765297927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:11.766362 containerd[1500]: time="2024-10-08T20:40:11.766304179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158"
Oct 8 20:40:11.767017 containerd[1500]: time="2024-10-08T20:40:11.766945709Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:11.769174 containerd[1500]: time="2024-10-08T20:40:11.769130414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:11.770755 containerd[1500]: time="2024-10-08T20:40:11.770112308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 537.975387ms"
Oct 8 20:40:11.770755 containerd[1500]: time="2024-10-08T20:40:11.770144261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 8 20:40:11.771304 containerd[1500]: time="2024-10-08T20:40:11.771274508Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Oct 8 20:40:12.269549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708378504.mount: Deactivated successfully.
Oct 8 20:40:13.506200 containerd[1500]: time="2024-10-08T20:40:13.506124460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:13.507259 containerd[1500]: time="2024-10-08T20:40:13.507214684Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241790"
Oct 8 20:40:13.507925 containerd[1500]: time="2024-10-08T20:40:13.507865688Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:13.510474 containerd[1500]: time="2024-10-08T20:40:13.510437977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:13.512021 containerd[1500]: time="2024-10-08T20:40:13.511971525Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.740667808s"
Oct 8 20:40:13.512021 containerd[1500]: time="2024-10-08T20:40:13.512013318Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Oct 8 20:40:15.089480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Oct 8 20:40:15.097989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:40:15.238865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
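The containerd "Pulled image" entries above report both the image size in bytes and the wall-clock pull duration (e.g. kube-apiserver: 28063421 bytes in 1.418522733s; etcd: 56909194 bytes in 1.740667808s), so an effective pull rate falls straight out of the logged numbers. A back-of-the-envelope sketch using two of those pairs; the dict layout is illustrative, the figures are taken verbatim from the log:

```python
# (size_bytes, seconds) as reported by containerd's "Pulled image" messages above.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.31.0": (28063421, 1.418522733),
    "registry.k8s.io/etcd:3.5.15-0": (56909194, 1.740667808),
}

for image, (size_bytes, seconds) in pulls.items():
    # Bytes per second, converted to MiB/s for readability.
    rate_mib_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {rate_mib_s:.1f} MiB/s")
```

Rates in the tens of MiB/s, as here, suggest the registry fetches themselves were not the bottleneck in this boot; the kubelet failures were purely the missing config file.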
Oct 8 20:40:15.239757 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:40:15.272734 kubelet[2256]: E1008 20:40:15.271273 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:40:15.274783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:40:15.274959 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:40:15.485848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:40:15.494970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:40:15.528346 systemd[1]: Reloading requested from client PID 2270 ('systemctl') (unit session-7.scope)...
Oct 8 20:40:15.528509 systemd[1]: Reloading...
Oct 8 20:40:15.677747 zram_generator::config[2313]: No configuration found.
Oct 8 20:40:15.764800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:40:15.828449 systemd[1]: Reloading finished in 299 ms.
Oct 8 20:40:15.879274 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 20:40:15.879371 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 20:40:15.879586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:40:15.881928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:40:15.998612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:40:16.002540 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 20:40:16.037054 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:40:16.037482 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 20:40:16.037555 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:40:16.038470 kubelet[2365]: I1008 20:40:16.038435 2365 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:40:16.258681 kubelet[2365]: I1008 20:40:16.258638 2365 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 8 20:40:16.258681 kubelet[2365]: I1008 20:40:16.258669 2365 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:40:16.258994 kubelet[2365]: I1008 20:40:16.258969 2365 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 8 20:40:16.284222 kubelet[2365]: I1008 20:40:16.284005 2365 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:40:16.284222 kubelet[2365]: E1008 20:40:16.284196 2365 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.107.220.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError"
Oct 8 20:40:16.293871 kubelet[2365]: E1008 20:40:16.293692 2365 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 8 20:40:16.293871 kubelet[2365]: I1008 20:40:16.293734 2365 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 8 20:40:16.297766 kubelet[2365]: I1008 20:40:16.297745 2365 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 20:40:16.298764 kubelet[2365]: I1008 20:40:16.298740 2365 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 8 20:40:16.298922 kubelet[2365]: I1008 20:40:16.298886 2365 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:40:16.299044 kubelet[2365]: I1008 20:40:16.298913 2365 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-a-d0274495d1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 8 20:40:16.299123 kubelet[2365]: I1008 20:40:16.299047 2365 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:40:16.299123 kubelet[2365]: I1008 20:40:16.299055 2365 container_manager_linux.go:300] "Creating device plugin manager"
Oct 8 20:40:16.299166 kubelet[2365]: I1008 20:40:16.299148 2365 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:40:16.300926 kubelet[2365]: I1008 20:40:16.300689 2365 kubelet.go:408] "Attempting to sync node with API server"
Oct 8 20:40:16.300926 kubelet[2365]: I1008 20:40:16.300708 2365 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:40:16.300926 kubelet[2365]: I1008 20:40:16.300793 2365 kubelet.go:314] "Adding apiserver pod source"
Oct 8 20:40:16.300926 kubelet[2365]: I1008 20:40:16.300812 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:40:16.306368 kubelet[2365]: W1008 20:40:16.306276 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.220.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-a-d0274495d1&limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused
Oct 8 20:40:16.306368 kubelet[2365]: E1008 20:40:16.306359 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.107.220.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-a-d0274495d1&limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError"
Oct 8 20:40:16.307008 kubelet[2365]: I1008 20:40:16.306431 2365 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 20:40:16.308912 kubelet[2365]: I1008 20:40:16.308884 2365 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:40:16.310226 kubelet[2365]: W1008 20:40:16.309426 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 20:40:16.310226 kubelet[2365]: I1008 20:40:16.310110 2365 server.go:1269] "Started kubelet" Oct 8 20:40:16.312507 kubelet[2365]: W1008 20:40:16.312131 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.220.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:16.312507 kubelet[2365]: E1008 20:40:16.312167 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.107.220.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:16.312507 kubelet[2365]: I1008 20:40:16.312227 2365 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:40:16.314460 kubelet[2365]: I1008 20:40:16.314098 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:40:16.314460 kubelet[2365]: I1008 20:40:16.314394 2365 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:40:16.315309 kubelet[2365]: I1008 20:40:16.315289 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:40:16.318217 kubelet[2365]: E1008 20:40:16.315184 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.107.220.127:6443/api/v1/namespaces/default/events\": dial tcp 91.107.220.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-a-d0274495d1.17fc94d8e6d680da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-a-d0274495d1,UID:ci-4081-1-0-a-d0274495d1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-a-d0274495d1,},FirstTimestamp:2024-10-08 20:40:16.310091994 +0000 UTC m=+0.304150644,LastTimestamp:2024-10-08 20:40:16.310091994 +0000 UTC m=+0.304150644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-a-d0274495d1,}" Oct 8 20:40:16.318217 kubelet[2365]: I1008 20:40:16.318091 2365 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:40:16.322640 kubelet[2365]: I1008 20:40:16.320208 2365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:40:16.324270 kubelet[2365]: I1008 20:40:16.324251 2365 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:40:16.324383 kubelet[2365]: E1008 20:40:16.324363 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:16.326215 kubelet[2365]: I1008 20:40:16.323336 2365 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:40:16.326435 kubelet[2365]: I1008 20:40:16.326417 2365 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:40:16.326768 kubelet[2365]: E1008 20:40:16.326737 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.220.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-a-d0274495d1?timeout=10s\": dial tcp 91.107.220.127:6443: connect: connection refused" interval="200ms" Oct 8 20:40:16.326947 kubelet[2365]: I1008 20:40:16.326925 2365 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:40:16.327027 
kubelet[2365]: I1008 20:40:16.327005 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:40:16.328421 kubelet[2365]: W1008 20:40:16.328378 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.220.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:16.328421 kubelet[2365]: E1008 20:40:16.328414 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.107.220.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:16.328705 kubelet[2365]: E1008 20:40:16.328682 2365 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:40:16.328857 kubelet[2365]: I1008 20:40:16.328838 2365 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:40:16.340999 kubelet[2365]: I1008 20:40:16.340946 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:40:16.342183 kubelet[2365]: I1008 20:40:16.341960 2365 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:40:16.342183 kubelet[2365]: I1008 20:40:16.341986 2365 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:40:16.342183 kubelet[2365]: I1008 20:40:16.341999 2365 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:40:16.342183 kubelet[2365]: E1008 20:40:16.342031 2365 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:40:16.350819 kubelet[2365]: W1008 20:40:16.350796 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.220.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:16.350870 kubelet[2365]: E1008 20:40:16.350829 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.107.220.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:16.356755 kubelet[2365]: I1008 20:40:16.356691 2365 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:40:16.356755 kubelet[2365]: I1008 20:40:16.356704 2365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:40:16.356882 kubelet[2365]: I1008 20:40:16.356794 2365 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:40:16.358807 kubelet[2365]: I1008 20:40:16.358779 2365 policy_none.go:49] "None policy: Start" Oct 8 20:40:16.359221 kubelet[2365]: I1008 20:40:16.359201 2365 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:40:16.359221 kubelet[2365]: I1008 20:40:16.359220 2365 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:40:16.364500 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Oct 8 20:40:16.372995 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:40:16.375829 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 20:40:16.385787 kubelet[2365]: I1008 20:40:16.385628 2365 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:40:16.385857 kubelet[2365]: I1008 20:40:16.385838 2365 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:40:16.385886 kubelet[2365]: I1008 20:40:16.385848 2365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:40:16.386602 kubelet[2365]: I1008 20:40:16.386161 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:40:16.387834 kubelet[2365]: E1008 20:40:16.387820 2365 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:16.453650 systemd[1]: Created slice kubepods-burstable-pod1d8b04b8c51fb449d3609067f3c9306a.slice - libcontainer container kubepods-burstable-pod1d8b04b8c51fb449d3609067f3c9306a.slice. Oct 8 20:40:16.470934 systemd[1]: Created slice kubepods-burstable-pod8331cedc6e32ed305ada64b943d13f3f.slice - libcontainer container kubepods-burstable-pod8331cedc6e32ed305ada64b943d13f3f.slice. Oct 8 20:40:16.479241 systemd[1]: Created slice kubepods-burstable-podc5b60dc8f745b5ea3cbccd1668e3d3b4.slice - libcontainer container kubepods-burstable-podc5b60dc8f745b5ea3cbccd1668e3d3b4.slice. 
Oct 8 20:40:16.487309 kubelet[2365]: I1008 20:40:16.487275 2365 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.487576 kubelet[2365]: E1008 20:40:16.487549 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.107.220.127:6443/api/v1/nodes\": dial tcp 91.107.220.127:6443: connect: connection refused" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.527650 kubelet[2365]: E1008 20:40:16.527602 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.220.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-a-d0274495d1?timeout=10s\": dial tcp 91.107.220.127:6443: connect: connection refused" interval="400ms" Oct 8 20:40:16.529769 kubelet[2365]: I1008 20:40:16.529739 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8331cedc6e32ed305ada64b943d13f3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" (UID: \"8331cedc6e32ed305ada64b943d13f3f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529848 kubelet[2365]: I1008 20:40:16.529770 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d8b04b8c51fb449d3609067f3c9306a-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-a-d0274495d1\" (UID: \"1d8b04b8c51fb449d3609067f3c9306a\") " pod="kube-system/kube-scheduler-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529848 kubelet[2365]: I1008 20:40:16.529787 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8331cedc6e32ed305ada64b943d13f3f-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" (UID: \"8331cedc6e32ed305ada64b943d13f3f\") " 
pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529848 kubelet[2365]: I1008 20:40:16.529802 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529848 kubelet[2365]: I1008 20:40:16.529817 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529848 kubelet[2365]: I1008 20:40:16.529832 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529992 kubelet[2365]: I1008 20:40:16.529849 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529992 kubelet[2365]: I1008 20:40:16.529864 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8331cedc6e32ed305ada64b943d13f3f-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" (UID: \"8331cedc6e32ed305ada64b943d13f3f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.529992 kubelet[2365]: I1008 20:40:16.529897 2365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.690049 kubelet[2365]: I1008 20:40:16.690018 2365 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.690348 kubelet[2365]: E1008 20:40:16.690308 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.107.220.127:6443/api/v1/nodes\": dial tcp 91.107.220.127:6443: connect: connection refused" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:16.769683 containerd[1500]: time="2024-10-08T20:40:16.769643363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-a-d0274495d1,Uid:1d8b04b8c51fb449d3609067f3c9306a,Namespace:kube-system,Attempt:0,}" Oct 8 20:40:16.781518 containerd[1500]: time="2024-10-08T20:40:16.781480436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-a-d0274495d1,Uid:8331cedc6e32ed305ada64b943d13f3f,Namespace:kube-system,Attempt:0,}" Oct 8 20:40:16.781979 containerd[1500]: time="2024-10-08T20:40:16.781948446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-a-d0274495d1,Uid:c5b60dc8f745b5ea3cbccd1668e3d3b4,Namespace:kube-system,Attempt:0,}" Oct 8 20:40:16.928902 kubelet[2365]: E1008 20:40:16.928853 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://91.107.220.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-a-d0274495d1?timeout=10s\": dial tcp 91.107.220.127:6443: connect: connection refused" interval="800ms" Oct 8 20:40:17.092269 kubelet[2365]: I1008 20:40:17.092240 2365 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:17.092642 kubelet[2365]: E1008 20:40:17.092509 2365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.107.220.127:6443/api/v1/nodes\": dial tcp 91.107.220.127:6443: connect: connection refused" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:17.182755 kubelet[2365]: W1008 20:40:17.182697 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.220.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:17.182867 kubelet[2365]: E1008 20:40:17.182768 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.107.220.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:17.189385 kubelet[2365]: W1008 20:40:17.189330 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.220.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:17.189448 kubelet[2365]: E1008 20:40:17.189390 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.107.220.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:17.292156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636977842.mount: Deactivated successfully. Oct 8 20:40:17.296943 containerd[1500]: time="2024-10-08T20:40:17.296889072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:40:17.298110 containerd[1500]: time="2024-10-08T20:40:17.297983822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:40:17.298705 containerd[1500]: time="2024-10-08T20:40:17.298670771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:40:17.299444 containerd[1500]: time="2024-10-08T20:40:17.299390516Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:40:17.300727 containerd[1500]: time="2024-10-08T20:40:17.300617937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Oct 8 20:40:17.301560 containerd[1500]: time="2024-10-08T20:40:17.301439842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:40:17.301560 containerd[1500]: time="2024-10-08T20:40:17.301502935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:40:17.304015 containerd[1500]: time="2024-10-08T20:40:17.303971956Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:40:17.305626 containerd[1500]: time="2024-10-08T20:40:17.305518765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.80308ms" Oct 8 20:40:17.307530 containerd[1500]: time="2024-10-08T20:40:17.306951650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.307412ms" Oct 8 20:40:17.308016 containerd[1500]: time="2024-10-08T20:40:17.307903090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.889185ms" Oct 8 20:40:17.421939 kubelet[2365]: W1008 20:40:17.421618 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.220.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:17.421939 kubelet[2365]: E1008 20:40:17.421684 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://91.107.220.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:17.428001 containerd[1500]: time="2024-10-08T20:40:17.427926984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:17.428265 containerd[1500]: time="2024-10-08T20:40:17.428216633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:17.428342 containerd[1500]: time="2024-10-08T20:40:17.428303012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:17.428761 containerd[1500]: time="2024-10-08T20:40:17.428456524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:17.429819 containerd[1500]: time="2024-10-08T20:40:17.429760866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:17.429880 containerd[1500]: time="2024-10-08T20:40:17.429806907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:17.430066 containerd[1500]: time="2024-10-08T20:40:17.430033091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:17.431182 containerd[1500]: time="2024-10-08T20:40:17.430475751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:17.431247 containerd[1500]: time="2024-10-08T20:40:17.431131479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:17.431247 containerd[1500]: time="2024-10-08T20:40:17.431187679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:17.431247 containerd[1500]: time="2024-10-08T20:40:17.431206728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:17.433462 containerd[1500]: time="2024-10-08T20:40:17.431966270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:17.451006 systemd[1]: Started cri-containerd-150ed614f1af41c0c6037f25e78eb152abd20d520bf611011fbe5d58fba5f1ff.scope - libcontainer container 150ed614f1af41c0c6037f25e78eb152abd20d520bf611011fbe5d58fba5f1ff. Oct 8 20:40:17.462842 systemd[1]: Started cri-containerd-a0516f72e59aaeb66814bc35de3e0c7970a2a17bbc30b0dc009fb5407c885508.scope - libcontainer container a0516f72e59aaeb66814bc35de3e0c7970a2a17bbc30b0dc009fb5407c885508. Oct 8 20:40:17.465920 systemd[1]: Started cri-containerd-061069605e55d6c8bea0f4c4b416a6ab81a4b726b0725202f94583457d8fdcdb.scope - libcontainer container 061069605e55d6c8bea0f4c4b416a6ab81a4b726b0725202f94583457d8fdcdb. 
Oct 8 20:40:17.517832 containerd[1500]: time="2024-10-08T20:40:17.517588109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-a-d0274495d1,Uid:8331cedc6e32ed305ada64b943d13f3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"150ed614f1af41c0c6037f25e78eb152abd20d520bf611011fbe5d58fba5f1ff\"" Oct 8 20:40:17.522371 containerd[1500]: time="2024-10-08T20:40:17.521916973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-a-d0274495d1,Uid:c5b60dc8f745b5ea3cbccd1668e3d3b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0516f72e59aaeb66814bc35de3e0c7970a2a17bbc30b0dc009fb5407c885508\"" Oct 8 20:40:17.525691 containerd[1500]: time="2024-10-08T20:40:17.525651139Z" level=info msg="CreateContainer within sandbox \"150ed614f1af41c0c6037f25e78eb152abd20d520bf611011fbe5d58fba5f1ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:40:17.527204 containerd[1500]: time="2024-10-08T20:40:17.527183299Z" level=info msg="CreateContainer within sandbox \"a0516f72e59aaeb66814bc35de3e0c7970a2a17bbc30b0dc009fb5407c885508\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:40:17.530492 containerd[1500]: time="2024-10-08T20:40:17.530472170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-a-d0274495d1,Uid:1d8b04b8c51fb449d3609067f3c9306a,Namespace:kube-system,Attempt:0,} returns sandbox id \"061069605e55d6c8bea0f4c4b416a6ab81a4b726b0725202f94583457d8fdcdb\"" Oct 8 20:40:17.534207 containerd[1500]: time="2024-10-08T20:40:17.534173070Z" level=info msg="CreateContainer within sandbox \"061069605e55d6c8bea0f4c4b416a6ab81a4b726b0725202f94583457d8fdcdb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:40:17.546610 containerd[1500]: time="2024-10-08T20:40:17.546585476Z" level=info msg="CreateContainer within sandbox 
\"a0516f72e59aaeb66814bc35de3e0c7970a2a17bbc30b0dc009fb5407c885508\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7362afdbab889c9465f60f2e71f110981e247ae8d8d5ef1431e6624f34f42eaf\"" Oct 8 20:40:17.547160 containerd[1500]: time="2024-10-08T20:40:17.547121118Z" level=info msg="StartContainer for \"7362afdbab889c9465f60f2e71f110981e247ae8d8d5ef1431e6624f34f42eaf\"" Oct 8 20:40:17.552499 containerd[1500]: time="2024-10-08T20:40:17.552414677Z" level=info msg="CreateContainer within sandbox \"150ed614f1af41c0c6037f25e78eb152abd20d520bf611011fbe5d58fba5f1ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"11a9274209933f7bc88dd420741edd350c2dc9106617de429539897b00131884\"" Oct 8 20:40:17.553764 containerd[1500]: time="2024-10-08T20:40:17.553113330Z" level=info msg="StartContainer for \"11a9274209933f7bc88dd420741edd350c2dc9106617de429539897b00131884\"" Oct 8 20:40:17.554033 containerd[1500]: time="2024-10-08T20:40:17.554013338Z" level=info msg="CreateContainer within sandbox \"061069605e55d6c8bea0f4c4b416a6ab81a4b726b0725202f94583457d8fdcdb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2facef62422e2aeb46392a22107ccbfb1a68a01dccf584803295bc4ec42b95fe\"" Oct 8 20:40:17.554629 containerd[1500]: time="2024-10-08T20:40:17.554601604Z" level=info msg="StartContainer for \"2facef62422e2aeb46392a22107ccbfb1a68a01dccf584803295bc4ec42b95fe\"" Oct 8 20:40:17.580876 systemd[1]: Started cri-containerd-7362afdbab889c9465f60f2e71f110981e247ae8d8d5ef1431e6624f34f42eaf.scope - libcontainer container 7362afdbab889c9465f60f2e71f110981e247ae8d8d5ef1431e6624f34f42eaf. Oct 8 20:40:17.600859 systemd[1]: Started cri-containerd-11a9274209933f7bc88dd420741edd350c2dc9106617de429539897b00131884.scope - libcontainer container 11a9274209933f7bc88dd420741edd350c2dc9106617de429539897b00131884. 
Oct 8 20:40:17.605568 systemd[1]: Started cri-containerd-2facef62422e2aeb46392a22107ccbfb1a68a01dccf584803295bc4ec42b95fe.scope - libcontainer container 2facef62422e2aeb46392a22107ccbfb1a68a01dccf584803295bc4ec42b95fe. Oct 8 20:40:17.660130 containerd[1500]: time="2024-10-08T20:40:17.660083379Z" level=info msg="StartContainer for \"11a9274209933f7bc88dd420741edd350c2dc9106617de429539897b00131884\" returns successfully" Oct 8 20:40:17.665173 containerd[1500]: time="2024-10-08T20:40:17.665069685Z" level=info msg="StartContainer for \"7362afdbab889c9465f60f2e71f110981e247ae8d8d5ef1431e6624f34f42eaf\" returns successfully" Oct 8 20:40:17.671420 containerd[1500]: time="2024-10-08T20:40:17.670921081Z" level=info msg="StartContainer for \"2facef62422e2aeb46392a22107ccbfb1a68a01dccf584803295bc4ec42b95fe\" returns successfully" Oct 8 20:40:17.719960 kubelet[2365]: W1008 20:40:17.719804 2365 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.220.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-a-d0274495d1&limit=500&resourceVersion=0": dial tcp 91.107.220.127:6443: connect: connection refused Oct 8 20:40:17.719960 kubelet[2365]: E1008 20:40:17.719914 2365 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.107.220.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-a-d0274495d1&limit=500&resourceVersion=0\": dial tcp 91.107.220.127:6443: connect: connection refused" logger="UnhandledError" Oct 8 20:40:17.729481 kubelet[2365]: E1008 20:40:17.729450 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.220.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-a-d0274495d1?timeout=10s\": dial tcp 91.107.220.127:6443: connect: connection refused" interval="1.6s" Oct 8 20:40:17.895670 kubelet[2365]: I1008 20:40:17.895639 2365 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:19.154231 kubelet[2365]: I1008 20:40:19.154176 2365 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:19.154906 kubelet[2365]: E1008 20:40:19.154205 2365 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-1-0-a-d0274495d1\": node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.166476 kubelet[2365]: E1008 20:40:19.166441 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.267290 kubelet[2365]: E1008 20:40:19.267214 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.368234 kubelet[2365]: E1008 20:40:19.368183 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.389192 kubelet[2365]: E1008 20:40:19.389164 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Oct 8 20:40:19.469195 kubelet[2365]: E1008 20:40:19.469059 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.569909 kubelet[2365]: E1008 20:40:19.569862 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.670795 kubelet[2365]: E1008 20:40:19.670745 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.770955 kubelet[2365]: E1008 20:40:19.770836 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.871169 
kubelet[2365]: E1008 20:40:19.871127 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:19.971443 kubelet[2365]: E1008 20:40:19.971398 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.072845 kubelet[2365]: E1008 20:40:20.072460 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.173053 kubelet[2365]: E1008 20:40:20.173010 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.273674 kubelet[2365]: E1008 20:40:20.273610 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.374771 kubelet[2365]: E1008 20:40:20.374723 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.475665 kubelet[2365]: E1008 20:40:20.475617 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.576236 kubelet[2365]: E1008 20:40:20.576191 2365 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-a-d0274495d1\" not found" Oct 8 20:40:20.952604 systemd[1]: Reloading requested from client PID 2641 ('systemctl') (unit session-7.scope)... Oct 8 20:40:20.952621 systemd[1]: Reloading... Oct 8 20:40:21.046743 zram_generator::config[2684]: No configuration found. Oct 8 20:40:21.154397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 8 20:40:21.228747 systemd[1]: Reloading finished in 275 ms. Oct 8 20:40:21.270308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:40:21.282992 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:40:21.283204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:40:21.289909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:40:21.406250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:40:21.410340 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:40:21.448557 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:40:21.448557 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:40:21.448557 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 20:40:21.449233 kubelet[2732]: I1008 20:40:21.449153 2732 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:40:21.456527 kubelet[2732]: I1008 20:40:21.456500 2732 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 20:40:21.456589 kubelet[2732]: I1008 20:40:21.456545 2732 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:40:21.456980 kubelet[2732]: I1008 20:40:21.456902 2732 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 20:40:21.458131 kubelet[2732]: I1008 20:40:21.458112 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 20:40:21.462455 kubelet[2732]: I1008 20:40:21.462343 2732 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:40:21.465148 kubelet[2732]: E1008 20:40:21.465117 2732 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 20:40:21.465148 kubelet[2732]: I1008 20:40:21.465143 2732 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 20:40:21.468047 kubelet[2732]: I1008 20:40:21.468029 2732 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:40:21.468648 kubelet[2732]: I1008 20:40:21.468628 2732 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 20:40:21.468804 kubelet[2732]: I1008 20:40:21.468767 2732 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:40:21.468940 kubelet[2732]: I1008 20:40:21.468804 2732 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-a-d0274495d1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Oct 8 20:40:21.468940 kubelet[2732]: I1008 20:40:21.468938 2732 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:40:21.469041 kubelet[2732]: I1008 20:40:21.468946 2732 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 20:40:21.469041 kubelet[2732]: I1008 20:40:21.468972 2732 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:40:21.469085 kubelet[2732]: I1008 20:40:21.469060 2732 kubelet.go:408] "Attempting to sync node with API server" Oct 8 20:40:21.469085 kubelet[2732]: I1008 20:40:21.469071 2732 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:40:21.470179 kubelet[2732]: I1008 20:40:21.469612 2732 kubelet.go:314] "Adding apiserver pod source" Oct 8 20:40:21.470179 kubelet[2732]: I1008 20:40:21.469631 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:40:21.475625 kubelet[2732]: I1008 20:40:21.474166 2732 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:40:21.475625 kubelet[2732]: I1008 20:40:21.474502 2732 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:40:21.475625 kubelet[2732]: I1008 20:40:21.475162 2732 server.go:1269] "Started kubelet" Oct 8 20:40:21.476961 kubelet[2732]: I1008 20:40:21.476935 2732 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:40:21.477971 kubelet[2732]: I1008 20:40:21.477931 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:40:21.479771 kubelet[2732]: I1008 20:40:21.478162 2732 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:40:21.479771 kubelet[2732]: I1008 20:40:21.478545 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:40:21.484737 
kubelet[2732]: E1008 20:40:21.484697 2732 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:40:21.485909 kubelet[2732]: I1008 20:40:21.485867 2732 server.go:460] "Adding debug handlers to kubelet server" Oct 8 20:40:21.486467 kubelet[2732]: I1008 20:40:21.486444 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 20:40:21.491453 kubelet[2732]: I1008 20:40:21.490771 2732 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 20:40:21.491453 kubelet[2732]: I1008 20:40:21.491107 2732 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 20:40:21.492369 kubelet[2732]: I1008 20:40:21.492339 2732 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:40:21.493927 kubelet[2732]: I1008 20:40:21.493855 2732 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:40:21.495026 kubelet[2732]: I1008 20:40:21.494991 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:40:21.497037 kubelet[2732]: I1008 20:40:21.496748 2732 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:40:21.501363 kubelet[2732]: I1008 20:40:21.501273 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:40:21.502353 kubelet[2732]: I1008 20:40:21.502340 2732 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:40:21.502409 kubelet[2732]: I1008 20:40:21.502400 2732 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:40:21.502751 kubelet[2732]: I1008 20:40:21.502458 2732 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 20:40:21.502751 kubelet[2732]: E1008 20:40:21.502491 2732 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:40:21.533488 kubelet[2732]: I1008 20:40:21.533468 2732 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.533570 2732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.533586 2732 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.533698 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.533707 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.533744 2732 policy_none.go:49] "None policy: Start" Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.534218 2732 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:40:21.534347 kubelet[2732]: I1008 20:40:21.534261 2732 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:40:21.534491 kubelet[2732]: I1008 20:40:21.534435 2732 state_mem.go:75] "Updated machine memory state" Oct 8 20:40:21.541472 kubelet[2732]: I1008 20:40:21.541458 2732 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:40:21.542053 kubelet[2732]: I1008 20:40:21.541699 2732 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 20:40:21.542053 kubelet[2732]: I1008 20:40:21.541733 2732 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:40:21.542924 kubelet[2732]: I1008 20:40:21.542913 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:40:21.608434 kubelet[2732]: E1008 20:40:21.608399 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-1-0-a-d0274495d1\" already exists" pod="kube-system/kube-scheduler-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.650322 kubelet[2732]: I1008 20:40:21.650273 2732 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.656337 kubelet[2732]: I1008 20:40:21.656304 2732 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.656430 kubelet[2732]: I1008 20:40:21.656361 2732 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694297 kubelet[2732]: I1008 20:40:21.694271 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694396 kubelet[2732]: I1008 20:40:21.694304 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694396 kubelet[2732]: I1008 20:40:21.694324 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d8b04b8c51fb449d3609067f3c9306a-kubeconfig\") 
pod \"kube-scheduler-ci-4081-1-0-a-d0274495d1\" (UID: \"1d8b04b8c51fb449d3609067f3c9306a\") " pod="kube-system/kube-scheduler-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694396 kubelet[2732]: I1008 20:40:21.694341 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8331cedc6e32ed305ada64b943d13f3f-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" (UID: \"8331cedc6e32ed305ada64b943d13f3f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694396 kubelet[2732]: I1008 20:40:21.694359 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8331cedc6e32ed305ada64b943d13f3f-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" (UID: \"8331cedc6e32ed305ada64b943d13f3f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694396 kubelet[2732]: I1008 20:40:21.694376 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8331cedc6e32ed305ada64b943d13f3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" (UID: \"8331cedc6e32ed305ada64b943d13f3f\") " pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694544 kubelet[2732]: I1008 20:40:21.694392 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694544 kubelet[2732]: I1008 20:40:21.694408 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:21.694544 kubelet[2732]: I1008 20:40:21.694426 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5b60dc8f745b5ea3cbccd1668e3d3b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-a-d0274495d1\" (UID: \"c5b60dc8f745b5ea3cbccd1668e3d3b4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:22.474017 kubelet[2732]: I1008 20:40:22.473962 2732 apiserver.go:52] "Watching apiserver" Oct 8 20:40:22.492195 kubelet[2732]: I1008 20:40:22.492153 2732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 20:40:22.541129 kubelet[2732]: E1008 20:40:22.540970 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-a-d0274495d1\" already exists" pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" Oct 8 20:40:22.547369 kubelet[2732]: I1008 20:40:22.547320 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-a-d0274495d1" podStartSLOduration=1.547308015 podStartE2EDuration="1.547308015s" podCreationTimestamp="2024-10-08 20:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:40:22.545095782 +0000 UTC m=+1.131574917" watchObservedRunningTime="2024-10-08 20:40:22.547308015 +0000 UTC m=+1.133787139" Oct 8 20:40:22.564352 kubelet[2732]: I1008 20:40:22.563033 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4081-1-0-a-d0274495d1" podStartSLOduration=1.563015255 podStartE2EDuration="1.563015255s" podCreationTimestamp="2024-10-08 20:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:40:22.560296554 +0000 UTC m=+1.146775678" watchObservedRunningTime="2024-10-08 20:40:22.563015255 +0000 UTC m=+1.149494380" Oct 8 20:40:22.581379 kubelet[2732]: I1008 20:40:22.580910 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-a-d0274495d1" podStartSLOduration=1.580896705 podStartE2EDuration="1.580896705s" podCreationTimestamp="2024-10-08 20:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:40:22.569275057 +0000 UTC m=+1.155754182" watchObservedRunningTime="2024-10-08 20:40:22.580896705 +0000 UTC m=+1.167375830" Oct 8 20:40:26.179702 sudo[1889]: pam_unix(sudo:session): session closed for user root Oct 8 20:40:26.343481 sshd[1886]: pam_unix(sshd:session): session closed for user core Oct 8 20:40:26.347334 systemd[1]: sshd@6-91.107.220.127:22-147.75.109.163:56740.service: Deactivated successfully. Oct 8 20:40:26.349962 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:40:26.350177 systemd[1]: session-7.scope: Consumed 3.691s CPU time, 145.7M memory peak, 0B memory swap peak. Oct 8 20:40:26.352203 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:40:26.353754 systemd-logind[1474]: Removed session 7. 
Oct 8 20:40:27.360238 kubelet[2732]: I1008 20:40:27.359814 2732 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:40:27.360610 containerd[1500]: time="2024-10-08T20:40:27.360175899Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:40:27.361352 kubelet[2732]: I1008 20:40:27.360870 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:40:28.295242 systemd[1]: Created slice kubepods-besteffort-pode919712a_6b32_430e_805d_85471f433257.slice - libcontainer container kubepods-besteffort-pode919712a_6b32_430e_805d_85471f433257.slice. Oct 8 20:40:28.336325 kubelet[2732]: I1008 20:40:28.336244 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e919712a-6b32-430e-805d-85471f433257-xtables-lock\") pod \"kube-proxy-dbk2r\" (UID: \"e919712a-6b32-430e-805d-85471f433257\") " pod="kube-system/kube-proxy-dbk2r" Oct 8 20:40:28.336605 kubelet[2732]: I1008 20:40:28.336435 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlxr\" (UniqueName: \"kubernetes.io/projected/e919712a-6b32-430e-805d-85471f433257-kube-api-access-7hlxr\") pod \"kube-proxy-dbk2r\" (UID: \"e919712a-6b32-430e-805d-85471f433257\") " pod="kube-system/kube-proxy-dbk2r" Oct 8 20:40:28.336870 kubelet[2732]: I1008 20:40:28.336784 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e919712a-6b32-430e-805d-85471f433257-kube-proxy\") pod \"kube-proxy-dbk2r\" (UID: \"e919712a-6b32-430e-805d-85471f433257\") " pod="kube-system/kube-proxy-dbk2r" Oct 8 20:40:28.336870 kubelet[2732]: I1008 20:40:28.336804 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e919712a-6b32-430e-805d-85471f433257-lib-modules\") pod \"kube-proxy-dbk2r\" (UID: \"e919712a-6b32-430e-805d-85471f433257\") " pod="kube-system/kube-proxy-dbk2r" Oct 8 20:40:28.473572 systemd[1]: Created slice kubepods-besteffort-pode74ab749_1099_41e4_bf35_23cf28ae4468.slice - libcontainer container kubepods-besteffort-pode74ab749_1099_41e4_bf35_23cf28ae4468.slice. Oct 8 20:40:28.538566 kubelet[2732]: I1008 20:40:28.538441 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npd8s\" (UniqueName: \"kubernetes.io/projected/e74ab749-1099-41e4-bf35-23cf28ae4468-kube-api-access-npd8s\") pod \"tigera-operator-55748b469f-kdslf\" (UID: \"e74ab749-1099-41e4-bf35-23cf28ae4468\") " pod="tigera-operator/tigera-operator-55748b469f-kdslf" Oct 8 20:40:28.538566 kubelet[2732]: I1008 20:40:28.538482 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e74ab749-1099-41e4-bf35-23cf28ae4468-var-lib-calico\") pod \"tigera-operator-55748b469f-kdslf\" (UID: \"e74ab749-1099-41e4-bf35-23cf28ae4468\") " pod="tigera-operator/tigera-operator-55748b469f-kdslf" Oct 8 20:40:28.604446 containerd[1500]: time="2024-10-08T20:40:28.604355979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbk2r,Uid:e919712a-6b32-430e-805d-85471f433257,Namespace:kube-system,Attempt:0,}" Oct 8 20:40:28.627845 containerd[1500]: time="2024-10-08T20:40:28.627558938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:28.627845 containerd[1500]: time="2024-10-08T20:40:28.627603614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:28.627845 containerd[1500]: time="2024-10-08T20:40:28.627614365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:28.628041 containerd[1500]: time="2024-10-08T20:40:28.627743255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:28.658960 systemd[1]: Started cri-containerd-b2442705f448a672b11cb7c43311d1e9746841125c6d21517eaf8ebb4855399c.scope - libcontainer container b2442705f448a672b11cb7c43311d1e9746841125c6d21517eaf8ebb4855399c. Oct 8 20:40:28.685042 containerd[1500]: time="2024-10-08T20:40:28.685003365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbk2r,Uid:e919712a-6b32-430e-805d-85471f433257,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2442705f448a672b11cb7c43311d1e9746841125c6d21517eaf8ebb4855399c\"" Oct 8 20:40:28.688848 containerd[1500]: time="2024-10-08T20:40:28.688771827Z" level=info msg="CreateContainer within sandbox \"b2442705f448a672b11cb7c43311d1e9746841125c6d21517eaf8ebb4855399c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:40:28.701499 containerd[1500]: time="2024-10-08T20:40:28.701454774Z" level=info msg="CreateContainer within sandbox \"b2442705f448a672b11cb7c43311d1e9746841125c6d21517eaf8ebb4855399c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85f5f89bc941e3318edc23637150ccf46695501d39adc324a7cb19ef826df05a\"" Oct 8 20:40:28.702153 containerd[1500]: time="2024-10-08T20:40:28.702070450Z" level=info msg="StartContainer for \"85f5f89bc941e3318edc23637150ccf46695501d39adc324a7cb19ef826df05a\"" Oct 8 20:40:28.730013 systemd[1]: Started cri-containerd-85f5f89bc941e3318edc23637150ccf46695501d39adc324a7cb19ef826df05a.scope - libcontainer container 85f5f89bc941e3318edc23637150ccf46695501d39adc324a7cb19ef826df05a. 
Oct 8 20:40:28.759538 containerd[1500]: time="2024-10-08T20:40:28.759408531Z" level=info msg="StartContainer for \"85f5f89bc941e3318edc23637150ccf46695501d39adc324a7cb19ef826df05a\" returns successfully" Oct 8 20:40:28.777693 containerd[1500]: time="2024-10-08T20:40:28.777333313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-kdslf,Uid:e74ab749-1099-41e4-bf35-23cf28ae4468,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:40:28.804480 containerd[1500]: time="2024-10-08T20:40:28.804220700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:28.804480 containerd[1500]: time="2024-10-08T20:40:28.804286488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:28.804480 containerd[1500]: time="2024-10-08T20:40:28.804300075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:28.804480 containerd[1500]: time="2024-10-08T20:40:28.804378207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:28.829868 systemd[1]: Started cri-containerd-27e4ea080f5bf6b24785d16352029218e8884cfa36f3b427c449e25c021cc24c.scope - libcontainer container 27e4ea080f5bf6b24785d16352029218e8884cfa36f3b427c449e25c021cc24c. 
Oct 8 20:40:28.878761 containerd[1500]: time="2024-10-08T20:40:28.878575469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-kdslf,Uid:e74ab749-1099-41e4-bf35-23cf28ae4468,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27e4ea080f5bf6b24785d16352029218e8884cfa36f3b427c449e25c021cc24c\"" Oct 8 20:40:28.882161 containerd[1500]: time="2024-10-08T20:40:28.882022366Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:40:29.542440 kubelet[2732]: I1008 20:40:29.542276 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dbk2r" podStartSLOduration=1.54225707 podStartE2EDuration="1.54225707s" podCreationTimestamp="2024-10-08 20:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:40:29.54191688 +0000 UTC m=+8.128396015" watchObservedRunningTime="2024-10-08 20:40:29.54225707 +0000 UTC m=+8.128736196" Oct 8 20:40:30.715987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914130608.mount: Deactivated successfully. 
Oct 8 20:40:31.138330 containerd[1500]: time="2024-10-08T20:40:31.138237291Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:31.139250 containerd[1500]: time="2024-10-08T20:40:31.138952788Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136533" Oct 8 20:40:31.139882 containerd[1500]: time="2024-10-08T20:40:31.139794019Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:31.141749 containerd[1500]: time="2024-10-08T20:40:31.141727036Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:31.142482 containerd[1500]: time="2024-10-08T20:40:31.142361697Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.26020564s" Oct 8 20:40:31.142482 containerd[1500]: time="2024-10-08T20:40:31.142393178Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:40:31.145016 containerd[1500]: time="2024-10-08T20:40:31.144967608Z" level=info msg="CreateContainer within sandbox \"27e4ea080f5bf6b24785d16352029218e8884cfa36f3b427c449e25c021cc24c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:40:31.168003 containerd[1500]: time="2024-10-08T20:40:31.167937938Z" level=info msg="CreateContainer within sandbox 
\"27e4ea080f5bf6b24785d16352029218e8884cfa36f3b427c449e25c021cc24c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2b6dacf6a23788c67dea476a43c58ce41743fe117d73b21d21d4964c92d9bf75\"" Oct 8 20:40:31.169155 containerd[1500]: time="2024-10-08T20:40:31.168409282Z" level=info msg="StartContainer for \"2b6dacf6a23788c67dea476a43c58ce41743fe117d73b21d21d4964c92d9bf75\"" Oct 8 20:40:31.200876 systemd[1]: Started cri-containerd-2b6dacf6a23788c67dea476a43c58ce41743fe117d73b21d21d4964c92d9bf75.scope - libcontainer container 2b6dacf6a23788c67dea476a43c58ce41743fe117d73b21d21d4964c92d9bf75. Oct 8 20:40:31.225436 containerd[1500]: time="2024-10-08T20:40:31.225387790Z" level=info msg="StartContainer for \"2b6dacf6a23788c67dea476a43c58ce41743fe117d73b21d21d4964c92d9bf75\" returns successfully" Oct 8 20:40:32.375787 kubelet[2732]: I1008 20:40:32.375238 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-kdslf" podStartSLOduration=2.111934441 podStartE2EDuration="4.375220161s" podCreationTimestamp="2024-10-08 20:40:28 +0000 UTC" firstStartedPulling="2024-10-08 20:40:28.880301163 +0000 UTC m=+7.466780288" lastFinishedPulling="2024-10-08 20:40:31.143586882 +0000 UTC m=+9.730066008" observedRunningTime="2024-10-08 20:40:31.544149988 +0000 UTC m=+10.130629123" watchObservedRunningTime="2024-10-08 20:40:32.375220161 +0000 UTC m=+10.961699286" Oct 8 20:40:34.109306 systemd[1]: Created slice kubepods-besteffort-pod7bf1797d_67db_44aa_a61e_ecf5ceeef3bd.slice - libcontainer container kubepods-besteffort-pod7bf1797d_67db_44aa_a61e_ecf5ceeef3bd.slice. 
Oct 8 20:40:34.177004 kubelet[2732]: I1008 20:40:34.176961 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7bf1797d-67db-44aa-a61e-ecf5ceeef3bd-typha-certs\") pod \"calico-typha-9757c9454-568mg\" (UID: \"7bf1797d-67db-44aa-a61e-ecf5ceeef3bd\") " pod="calico-system/calico-typha-9757c9454-568mg" Oct 8 20:40:34.177004 kubelet[2732]: I1008 20:40:34.177007 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf1797d-67db-44aa-a61e-ecf5ceeef3bd-tigera-ca-bundle\") pod \"calico-typha-9757c9454-568mg\" (UID: \"7bf1797d-67db-44aa-a61e-ecf5ceeef3bd\") " pod="calico-system/calico-typha-9757c9454-568mg" Oct 8 20:40:34.177462 kubelet[2732]: I1008 20:40:34.177027 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm5x\" (UniqueName: \"kubernetes.io/projected/7bf1797d-67db-44aa-a61e-ecf5ceeef3bd-kube-api-access-qwm5x\") pod \"calico-typha-9757c9454-568mg\" (UID: \"7bf1797d-67db-44aa-a61e-ecf5ceeef3bd\") " pod="calico-system/calico-typha-9757c9454-568mg" Oct 8 20:40:34.206200 systemd[1]: Created slice kubepods-besteffort-podc96ddf18_286a_4f67_89dd_c0475221c34c.slice - libcontainer container kubepods-besteffort-podc96ddf18_286a_4f67_89dd_c0475221c34c.slice. 
Oct 8 20:40:34.277502 kubelet[2732]: I1008 20:40:34.277392 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-policysync\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277502 kubelet[2732]: I1008 20:40:34.277429 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-cni-net-dir\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277502 kubelet[2732]: I1008 20:40:34.277446 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-lib-modules\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277502 kubelet[2732]: I1008 20:40:34.277459 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c96ddf18-286a-4f67-89dd-c0475221c34c-node-certs\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277502 kubelet[2732]: I1008 20:40:34.277473 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-var-lib-calico\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277818 kubelet[2732]: I1008 20:40:34.277508 2732 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-cni-bin-dir\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277818 kubelet[2732]: I1008 20:40:34.277528 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-xtables-lock\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277818 kubelet[2732]: I1008 20:40:34.277542 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-var-run-calico\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277818 kubelet[2732]: I1008 20:40:34.277557 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96ddf18-286a-4f67-89dd-c0475221c34c-tigera-ca-bundle\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277818 kubelet[2732]: I1008 20:40:34.277575 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-flexvol-driver-host\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277925 kubelet[2732]: I1008 20:40:34.277589 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8tf2x\" (UniqueName: \"kubernetes.io/projected/c96ddf18-286a-4f67-89dd-c0475221c34c-kube-api-access-8tf2x\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.277925 kubelet[2732]: I1008 20:40:34.277601 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c96ddf18-286a-4f67-89dd-c0475221c34c-cni-log-dir\") pod \"calico-node-9gfkm\" (UID: \"c96ddf18-286a-4f67-89dd-c0475221c34c\") " pod="calico-system/calico-node-9gfkm" Oct 8 20:40:34.307681 kubelet[2732]: E1008 20:40:34.307598 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:34.378643 kubelet[2732]: I1008 20:40:34.378468 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/25d59088-7c89-4335-a16e-1df4714e04f3-registration-dir\") pod \"csi-node-driver-296dq\" (UID: \"25d59088-7c89-4335-a16e-1df4714e04f3\") " pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:34.378643 kubelet[2732]: I1008 20:40:34.378507 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjsd\" (UniqueName: \"kubernetes.io/projected/25d59088-7c89-4335-a16e-1df4714e04f3-kube-api-access-7xjsd\") pod \"csi-node-driver-296dq\" (UID: \"25d59088-7c89-4335-a16e-1df4714e04f3\") " pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:34.378643 kubelet[2732]: I1008 20:40:34.378583 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/25d59088-7c89-4335-a16e-1df4714e04f3-varrun\") pod \"csi-node-driver-296dq\" (UID: \"25d59088-7c89-4335-a16e-1df4714e04f3\") " pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:34.378643 kubelet[2732]: I1008 20:40:34.378598 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/25d59088-7c89-4335-a16e-1df4714e04f3-socket-dir\") pod \"csi-node-driver-296dq\" (UID: \"25d59088-7c89-4335-a16e-1df4714e04f3\") " pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:34.378643 kubelet[2732]: I1008 20:40:34.378632 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25d59088-7c89-4335-a16e-1df4714e04f3-kubelet-dir\") pod \"csi-node-driver-296dq\" (UID: \"25d59088-7c89-4335-a16e-1df4714e04f3\") " pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:34.385862 kubelet[2732]: E1008 20:40:34.385827 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.385862 kubelet[2732]: W1008 20:40:34.385845 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.385963 kubelet[2732]: E1008 20:40:34.385883 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.386444 kubelet[2732]: E1008 20:40:34.386239 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.386444 kubelet[2732]: W1008 20:40:34.386266 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.386528 kubelet[2732]: E1008 20:40:34.386462 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.386803 kubelet[2732]: E1008 20:40:34.386782 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.386803 kubelet[2732]: W1008 20:40:34.386813 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.386803 kubelet[2732]: E1008 20:40:34.386824 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.390090 kubelet[2732]: E1008 20:40:34.388911 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.390090 kubelet[2732]: W1008 20:40:34.388933 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.390090 kubelet[2732]: E1008 20:40:34.388944 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.399013 kubelet[2732]: E1008 20:40:34.398939 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.399013 kubelet[2732]: W1008 20:40:34.398954 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.399013 kubelet[2732]: E1008 20:40:34.398968 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.415552 containerd[1500]: time="2024-10-08T20:40:34.415502336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9757c9454-568mg,Uid:7bf1797d-67db-44aa-a61e-ecf5ceeef3bd,Namespace:calico-system,Attempt:0,}" Oct 8 20:40:34.446078 containerd[1500]: time="2024-10-08T20:40:34.445647358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:34.446078 containerd[1500]: time="2024-10-08T20:40:34.445759325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:34.446078 containerd[1500]: time="2024-10-08T20:40:34.445769664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:34.446078 containerd[1500]: time="2024-10-08T20:40:34.446018265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:34.465439 systemd[1]: Started cri-containerd-640e6b40ed608088f6a025f312c10c2d7145796617dda40c81c75af15761f9a0.scope - libcontainer container 640e6b40ed608088f6a025f312c10c2d7145796617dda40c81c75af15761f9a0. Oct 8 20:40:34.479381 kubelet[2732]: E1008 20:40:34.479215 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.481831 kubelet[2732]: W1008 20:40:34.480367 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.481831 kubelet[2732]: E1008 20:40:34.480980 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.481831 kubelet[2732]: E1008 20:40:34.481296 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.481831 kubelet[2732]: W1008 20:40:34.481305 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.481831 kubelet[2732]: E1008 20:40:34.481350 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.481831 kubelet[2732]: E1008 20:40:34.481616 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.481831 kubelet[2732]: W1008 20:40:34.481625 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.481831 kubelet[2732]: E1008 20:40:34.481684 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.482068 kubelet[2732]: E1008 20:40:34.481988 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.482068 kubelet[2732]: W1008 20:40:34.481999 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.482068 kubelet[2732]: E1008 20:40:34.482011 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.482347 kubelet[2732]: E1008 20:40:34.482284 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.482347 kubelet[2732]: W1008 20:40:34.482331 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.482418 kubelet[2732]: E1008 20:40:34.482357 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.482738 kubelet[2732]: E1008 20:40:34.482702 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.482811 kubelet[2732]: W1008 20:40:34.482775 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.482888 kubelet[2732]: E1008 20:40:34.482870 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.483205 kubelet[2732]: E1008 20:40:34.483176 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.483205 kubelet[2732]: W1008 20:40:34.483189 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.483266 kubelet[2732]: E1008 20:40:34.483243 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.483595 kubelet[2732]: E1008 20:40:34.483577 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.483595 kubelet[2732]: W1008 20:40:34.483590 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.483681 kubelet[2732]: E1008 20:40:34.483670 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.484260 kubelet[2732]: E1008 20:40:34.484000 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.484260 kubelet[2732]: W1008 20:40:34.484010 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.484339 kubelet[2732]: E1008 20:40:34.484291 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.484614 kubelet[2732]: E1008 20:40:34.484469 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.484614 kubelet[2732]: W1008 20:40:34.484510 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.484614 kubelet[2732]: E1008 20:40:34.484598 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.485106 kubelet[2732]: E1008 20:40:34.484910 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.485106 kubelet[2732]: W1008 20:40:34.484953 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.485106 kubelet[2732]: E1008 20:40:34.484999 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.485278 kubelet[2732]: E1008 20:40:34.485241 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.485318 kubelet[2732]: W1008 20:40:34.485280 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.485565 kubelet[2732]: E1008 20:40:34.485374 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.485606 kubelet[2732]: E1008 20:40:34.485597 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.485629 kubelet[2732]: W1008 20:40:34.485606 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.485883 kubelet[2732]: E1008 20:40:34.485694 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.485883 kubelet[2732]: E1008 20:40:34.485853 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.485883 kubelet[2732]: W1008 20:40:34.485860 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.486016 kubelet[2732]: E1008 20:40:34.485902 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.486498 kubelet[2732]: E1008 20:40:34.486153 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.486498 kubelet[2732]: W1008 20:40:34.486168 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.486498 kubelet[2732]: E1008 20:40:34.486200 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.486498 kubelet[2732]: E1008 20:40:34.486475 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.486498 kubelet[2732]: W1008 20:40:34.486483 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.486498 kubelet[2732]: E1008 20:40:34.486492 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.487054 kubelet[2732]: E1008 20:40:34.486839 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.487054 kubelet[2732]: W1008 20:40:34.486850 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.487054 kubelet[2732]: E1008 20:40:34.486884 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.487314 kubelet[2732]: E1008 20:40:34.487164 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.487314 kubelet[2732]: W1008 20:40:34.487172 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.487314 kubelet[2732]: E1008 20:40:34.487298 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.487465 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.488873 kubelet[2732]: W1008 20:40:34.487476 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.487494 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.487723 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.488873 kubelet[2732]: W1008 20:40:34.487740 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.487766 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.488005 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.488873 kubelet[2732]: W1008 20:40:34.488013 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.488108 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.488873 kubelet[2732]: E1008 20:40:34.488284 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.489142 kubelet[2732]: W1008 20:40:34.488292 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.489142 kubelet[2732]: E1008 20:40:34.488399 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.489142 kubelet[2732]: E1008 20:40:34.488534 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.489142 kubelet[2732]: W1008 20:40:34.488552 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.489142 kubelet[2732]: E1008 20:40:34.488572 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.489142 kubelet[2732]: E1008 20:40:34.489044 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.489142 kubelet[2732]: W1008 20:40:34.489052 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.489142 kubelet[2732]: E1008 20:40:34.489061 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.489355 kubelet[2732]: E1008 20:40:34.489336 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.489355 kubelet[2732]: W1008 20:40:34.489349 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.489409 kubelet[2732]: E1008 20:40:34.489357 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:34.499044 kubelet[2732]: E1008 20:40:34.499023 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:34.499044 kubelet[2732]: W1008 20:40:34.499039 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:34.499044 kubelet[2732]: E1008 20:40:34.499049 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:34.511529 containerd[1500]: time="2024-10-08T20:40:34.511492006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gfkm,Uid:c96ddf18-286a-4f67-89dd-c0475221c34c,Namespace:calico-system,Attempt:0,}" Oct 8 20:40:34.537183 containerd[1500]: time="2024-10-08T20:40:34.536597219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9757c9454-568mg,Uid:7bf1797d-67db-44aa-a61e-ecf5ceeef3bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"640e6b40ed608088f6a025f312c10c2d7145796617dda40c81c75af15761f9a0\"" Oct 8 20:40:34.538909 containerd[1500]: time="2024-10-08T20:40:34.538879944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:40:34.551842 containerd[1500]: time="2024-10-08T20:40:34.551257286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:34.551842 containerd[1500]: time="2024-10-08T20:40:34.551308496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:34.551842 containerd[1500]: time="2024-10-08T20:40:34.551322102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:34.551842 containerd[1500]: time="2024-10-08T20:40:34.551388010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:34.578855 systemd[1]: Started cri-containerd-095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc.scope - libcontainer container 095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc. Oct 8 20:40:34.610472 containerd[1500]: time="2024-10-08T20:40:34.610395272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9gfkm,Uid:c96ddf18-286a-4f67-89dd-c0475221c34c,Namespace:calico-system,Attempt:0,} returns sandbox id \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\"" Oct 8 20:40:35.503761 kubelet[2732]: E1008 20:40:35.502985 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:36.979685 containerd[1500]: time="2024-10-08T20:40:36.979626546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:36.982027 containerd[1500]: time="2024-10-08T20:40:36.981282044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 20:40:36.982661 containerd[1500]: time="2024-10-08T20:40:36.982220727Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:36.985438 containerd[1500]: time="2024-10-08T20:40:36.985256031Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:36.986221 containerd[1500]: time="2024-10-08T20:40:36.986184023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.447237079s" Oct 8 20:40:36.986278 containerd[1500]: time="2024-10-08T20:40:36.986225483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 20:40:36.989175 containerd[1500]: time="2024-10-08T20:40:36.988842418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:40:37.000518 containerd[1500]: time="2024-10-08T20:40:37.000479378Z" level=info msg="CreateContainer within sandbox \"640e6b40ed608088f6a025f312c10c2d7145796617dda40c81c75af15761f9a0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:40:37.025106 containerd[1500]: time="2024-10-08T20:40:37.025000277Z" level=info msg="CreateContainer within sandbox \"640e6b40ed608088f6a025f312c10c2d7145796617dda40c81c75af15761f9a0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06646682ad19cbaca2ef8e84580e4c445714ad065d8eb0f39885e822404e5e25\"" Oct 8 20:40:37.026386 containerd[1500]: time="2024-10-08T20:40:37.026121402Z" level=info msg="StartContainer for \"06646682ad19cbaca2ef8e84580e4c445714ad065d8eb0f39885e822404e5e25\"" Oct 8 20:40:37.056876 systemd[1]: Started cri-containerd-06646682ad19cbaca2ef8e84580e4c445714ad065d8eb0f39885e822404e5e25.scope - libcontainer container 
06646682ad19cbaca2ef8e84580e4c445714ad065d8eb0f39885e822404e5e25. Oct 8 20:40:37.103356 containerd[1500]: time="2024-10-08T20:40:37.103300650Z" level=info msg="StartContainer for \"06646682ad19cbaca2ef8e84580e4c445714ad065d8eb0f39885e822404e5e25\" returns successfully" Oct 8 20:40:37.504331 kubelet[2732]: E1008 20:40:37.503701 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:37.562288 kubelet[2732]: I1008 20:40:37.562228 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9757c9454-568mg" podStartSLOduration=1.113336609 podStartE2EDuration="3.562212997s" podCreationTimestamp="2024-10-08 20:40:34 +0000 UTC" firstStartedPulling="2024-10-08 20:40:34.538254484 +0000 UTC m=+13.124733609" lastFinishedPulling="2024-10-08 20:40:36.987130872 +0000 UTC m=+15.573609997" observedRunningTime="2024-10-08 20:40:37.561474541 +0000 UTC m=+16.147953676" watchObservedRunningTime="2024-10-08 20:40:37.562212997 +0000 UTC m=+16.148692132" Oct 8 20:40:37.595939 kubelet[2732]: E1008 20:40:37.595890 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.595939 kubelet[2732]: W1008 20:40:37.595916 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.595939 kubelet[2732]: E1008 20:40:37.595937 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.596186 kubelet[2732]: E1008 20:40:37.596161 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.596186 kubelet[2732]: W1008 20:40:37.596177 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.596186 kubelet[2732]: E1008 20:40:37.596188 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.596442 kubelet[2732]: E1008 20:40:37.596406 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.596442 kubelet[2732]: W1008 20:40:37.596433 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.596541 kubelet[2732]: E1008 20:40:37.596446 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.596745 kubelet[2732]: E1008 20:40:37.596696 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.596790 kubelet[2732]: W1008 20:40:37.596771 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.596790 kubelet[2732]: E1008 20:40:37.596784 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.597032 kubelet[2732]: E1008 20:40:37.597018 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.597032 kubelet[2732]: W1008 20:40:37.597030 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.597116 kubelet[2732]: E1008 20:40:37.597041 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.597318 kubelet[2732]: E1008 20:40:37.597283 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.597318 kubelet[2732]: W1008 20:40:37.597297 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.597318 kubelet[2732]: E1008 20:40:37.597308 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.597771 kubelet[2732]: E1008 20:40:37.597497 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.597771 kubelet[2732]: W1008 20:40:37.597506 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.597771 kubelet[2732]: E1008 20:40:37.597515 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.597771 kubelet[2732]: E1008 20:40:37.597756 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.597881 kubelet[2732]: W1008 20:40:37.597779 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.597881 kubelet[2732]: E1008 20:40:37.597790 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.598127 kubelet[2732]: E1008 20:40:37.598100 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.598127 kubelet[2732]: W1008 20:40:37.598117 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.598127 kubelet[2732]: E1008 20:40:37.598128 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.598385 kubelet[2732]: E1008 20:40:37.598361 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.598385 kubelet[2732]: W1008 20:40:37.598377 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.598457 kubelet[2732]: E1008 20:40:37.598388 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.598674 kubelet[2732]: E1008 20:40:37.598648 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.598674 kubelet[2732]: W1008 20:40:37.598663 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.598674 kubelet[2732]: E1008 20:40:37.598674 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.598925 kubelet[2732]: E1008 20:40:37.598892 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.598925 kubelet[2732]: W1008 20:40:37.598905 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.598925 kubelet[2732]: E1008 20:40:37.598915 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.599154 kubelet[2732]: E1008 20:40:37.599110 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.599154 kubelet[2732]: W1008 20:40:37.599124 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.599154 kubelet[2732]: E1008 20:40:37.599133 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.599344 kubelet[2732]: E1008 20:40:37.599319 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.599344 kubelet[2732]: W1008 20:40:37.599332 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.599344 kubelet[2732]: E1008 20:40:37.599341 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.599586 kubelet[2732]: E1008 20:40:37.599561 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.599586 kubelet[2732]: W1008 20:40:37.599578 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.599656 kubelet[2732]: E1008 20:40:37.599589 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.606091 kubelet[2732]: E1008 20:40:37.605967 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.606091 kubelet[2732]: W1008 20:40:37.605979 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.606091 kubelet[2732]: E1008 20:40:37.605993 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.606532 kubelet[2732]: E1008 20:40:37.606393 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.606532 kubelet[2732]: W1008 20:40:37.606405 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.606532 kubelet[2732]: E1008 20:40:37.606419 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.607015 kubelet[2732]: E1008 20:40:37.606914 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.607015 kubelet[2732]: W1008 20:40:37.606927 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.607015 kubelet[2732]: E1008 20:40:37.606952 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.607432 kubelet[2732]: E1008 20:40:37.607318 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.607432 kubelet[2732]: W1008 20:40:37.607330 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.607432 kubelet[2732]: E1008 20:40:37.607355 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.607871 kubelet[2732]: E1008 20:40:37.607750 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.607871 kubelet[2732]: W1008 20:40:37.607762 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.607871 kubelet[2732]: E1008 20:40:37.607850 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.608223 kubelet[2732]: E1008 20:40:37.608098 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.608223 kubelet[2732]: W1008 20:40:37.608109 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.608223 kubelet[2732]: E1008 20:40:37.608199 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.608524 kubelet[2732]: E1008 20:40:37.608503 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.608524 kubelet[2732]: W1008 20:40:37.608517 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.608638 kubelet[2732]: E1008 20:40:37.608613 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.610829 kubelet[2732]: E1008 20:40:37.610779 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.610829 kubelet[2732]: W1008 20:40:37.610794 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.610829 kubelet[2732]: E1008 20:40:37.610807 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.611304 kubelet[2732]: E1008 20:40:37.611053 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.611304 kubelet[2732]: W1008 20:40:37.611088 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.611304 kubelet[2732]: E1008 20:40:37.611100 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.611775 kubelet[2732]: E1008 20:40:37.611704 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.612063 kubelet[2732]: W1008 20:40:37.611946 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.612063 kubelet[2732]: E1008 20:40:37.611969 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.612375 kubelet[2732]: E1008 20:40:37.612362 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.612559 kubelet[2732]: W1008 20:40:37.612452 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.612768 kubelet[2732]: E1008 20:40:37.612648 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.613021 kubelet[2732]: E1008 20:40:37.612994 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.613171 kubelet[2732]: W1008 20:40:37.613085 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.613267 kubelet[2732]: E1008 20:40:37.613231 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.613626 kubelet[2732]: E1008 20:40:37.613502 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.613626 kubelet[2732]: W1008 20:40:37.613513 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.613814 kubelet[2732]: E1008 20:40:37.613799 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.614188 kubelet[2732]: E1008 20:40:37.614047 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.614188 kubelet[2732]: W1008 20:40:37.614059 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.614188 kubelet[2732]: E1008 20:40:37.614073 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.614833 kubelet[2732]: E1008 20:40:37.614695 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.614833 kubelet[2732]: W1008 20:40:37.614706 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.614833 kubelet[2732]: E1008 20:40:37.614758 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.615288 kubelet[2732]: E1008 20:40:37.615199 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.615288 kubelet[2732]: W1008 20:40:37.615211 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.615288 kubelet[2732]: E1008 20:40:37.615230 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:37.615908 kubelet[2732]: E1008 20:40:37.615790 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.615908 kubelet[2732]: W1008 20:40:37.615803 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.615908 kubelet[2732]: E1008 20:40:37.615813 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:37.616233 kubelet[2732]: E1008 20:40:37.616208 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:37.616233 kubelet[2732]: W1008 20:40:37.616223 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:37.616345 kubelet[2732]: E1008 20:40:37.616235 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:38.553770 kubelet[2732]: I1008 20:40:38.553656 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:40:38.606886 kubelet[2732]: E1008 20:40:38.606851 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:38.606886 kubelet[2732]: W1008 20:40:38.606874 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:38.607044 kubelet[2732]: E1008 20:40:38.606925 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:38.607190 kubelet[2732]: E1008 20:40:38.607170 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:38.607190 kubelet[2732]: W1008 20:40:38.607184 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:38.607285 kubelet[2732]: E1008 20:40:38.607200 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:38.607419 kubelet[2732]: E1008 20:40:38.607398 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:38.607419 kubelet[2732]: W1008 20:40:38.607413 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:38.607487 kubelet[2732]: E1008 20:40:38.607425 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:38.607749 kubelet[2732]: E1008 20:40:38.607626 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:38.607749 kubelet[2732]: W1008 20:40:38.607638 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:38.607749 kubelet[2732]: E1008 20:40:38.607647 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:38.607996 kubelet[2732]: E1008 20:40:38.607906 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:38.607996 kubelet[2732]: W1008 20:40:38.607916 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:38.607996 kubelet[2732]: E1008 20:40:38.607927 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:40:38.608128 kubelet[2732]: E1008 20:40:38.608113 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:40:38.608128 kubelet[2732]: W1008 20:40:38.608124 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:40:38.608208 kubelet[2732]: E1008 20:40:38.608132 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:40:38.904816 containerd[1500]: time="2024-10-08T20:40:38.904702482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:38.906255 containerd[1500]: time="2024-10-08T20:40:38.906186394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:40:38.908750 containerd[1500]: time="2024-10-08T20:40:38.906795680Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:38.909913 containerd[1500]: time="2024-10-08T20:40:38.909866825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:38.911252 containerd[1500]: time="2024-10-08T20:40:38.910876463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.921998857s" Oct 8 20:40:38.911252 containerd[1500]: time="2024-10-08T20:40:38.910904177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:40:38.913527 containerd[1500]: time="2024-10-08T20:40:38.913500365Z" level=info msg="CreateContainer within sandbox \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:40:38.928869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843277606.mount: Deactivated successfully. Oct 8 20:40:38.940416 containerd[1500]: time="2024-10-08T20:40:38.940182755Z" level=info msg="CreateContainer within sandbox \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed\"" Oct 8 20:40:38.941616 containerd[1500]: time="2024-10-08T20:40:38.941436154Z" level=info msg="StartContainer for \"b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed\"" Oct 8 20:40:38.975833 systemd[1]: Started cri-containerd-b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed.scope - libcontainer container b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed. Oct 8 20:40:39.025322 containerd[1500]: time="2024-10-08T20:40:39.025279857Z" level=info msg="StartContainer for \"b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed\" returns successfully" Oct 8 20:40:39.033435 systemd[1]: cri-containerd-b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed.scope: Deactivated successfully. 
Oct 8 20:40:39.057487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed-rootfs.mount: Deactivated successfully. Oct 8 20:40:39.061941 containerd[1500]: time="2024-10-08T20:40:39.061671247Z" level=info msg="shim disconnected" id=b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed namespace=k8s.io Oct 8 20:40:39.061941 containerd[1500]: time="2024-10-08T20:40:39.061876041Z" level=warning msg="cleaning up after shim disconnected" id=b15fba21cf64e2d119bcc0d557744867ac81d4fd9bb1a56e93e853915be926ed namespace=k8s.io Oct 8 20:40:39.061941 containerd[1500]: time="2024-10-08T20:40:39.061886982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:40:39.079406 containerd[1500]: time="2024-10-08T20:40:39.079294358Z" level=warning msg="cleanup warnings time=\"2024-10-08T20:40:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 20:40:39.511168 kubelet[2732]: E1008 20:40:39.511127 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:39.568281 containerd[1500]: time="2024-10-08T20:40:39.568242777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:40:41.506041 kubelet[2732]: E1008 20:40:41.505541 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:43.511740 kubelet[2732]: 
E1008 20:40:43.511586 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:44.336100 containerd[1500]: time="2024-10-08T20:40:44.336047297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:44.337138 containerd[1500]: time="2024-10-08T20:40:44.337086676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 20:40:44.337827 containerd[1500]: time="2024-10-08T20:40:44.337775010Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:44.339958 containerd[1500]: time="2024-10-08T20:40:44.339888203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:44.340882 containerd[1500]: time="2024-10-08T20:40:44.340448511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.772163521s" Oct 8 20:40:44.340882 containerd[1500]: time="2024-10-08T20:40:44.340501212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 20:40:44.342969 
containerd[1500]: time="2024-10-08T20:40:44.342916476Z" level=info msg="CreateContainer within sandbox \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:40:44.380933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940935449.mount: Deactivated successfully. Oct 8 20:40:44.395846 containerd[1500]: time="2024-10-08T20:40:44.395802860Z" level=info msg="CreateContainer within sandbox \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576\"" Oct 8 20:40:44.396857 containerd[1500]: time="2024-10-08T20:40:44.396776603Z" level=info msg="StartContainer for \"c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576\"" Oct 8 20:40:44.462840 systemd[1]: Started cri-containerd-c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576.scope - libcontainer container c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576. Oct 8 20:40:44.497139 containerd[1500]: time="2024-10-08T20:40:44.496986576Z" level=info msg="StartContainer for \"c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576\" returns successfully" Oct 8 20:40:44.901265 systemd[1]: cri-containerd-c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576.scope: Deactivated successfully. Oct 8 20:40:44.929731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576-rootfs.mount: Deactivated successfully. 
Oct 8 20:40:44.934156 containerd[1500]: time="2024-10-08T20:40:44.933874024Z" level=info msg="shim disconnected" id=c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576 namespace=k8s.io Oct 8 20:40:44.934156 containerd[1500]: time="2024-10-08T20:40:44.933989056Z" level=warning msg="cleaning up after shim disconnected" id=c51d68c86de39275a89abd876af88d0c655ace919373c91a2f25ff35b4f25576 namespace=k8s.io Oct 8 20:40:44.934156 containerd[1500]: time="2024-10-08T20:40:44.933999195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:40:44.939624 kubelet[2732]: I1008 20:40:44.939468 2732 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 20:40:44.982673 systemd[1]: Created slice kubepods-burstable-podaec703d5_39d0_4c7c_8437_95f773d85d2f.slice - libcontainer container kubepods-burstable-podaec703d5_39d0_4c7c_8437_95f773d85d2f.slice. Oct 8 20:40:44.990851 systemd[1]: Created slice kubepods-besteffort-pod93c268dd_337d_4967_a451_4fde3c6432ae.slice - libcontainer container kubepods-besteffort-pod93c268dd_337d_4967_a451_4fde3c6432ae.slice. Oct 8 20:40:44.998154 systemd[1]: Created slice kubepods-burstable-pod072c8458_9a61_4cf6_86b8_63e198a00610.slice - libcontainer container kubepods-burstable-pod072c8458_9a61_4cf6_86b8_63e198a00610.slice. 
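The kubepods slice names created above follow the kubelet's systemd cgroup-driver convention visible in the log: the QoS class (burstable, besteffort) is embedded in the slice name and the dashes in the pod UID are replaced by underscores. A small sketch reproducing that naming (the helper name is mine):

```shell
# Hypothetical helper reproducing the systemd slice naming the kubelet
# uses in the log above: kubepods-<qos>-pod<uid with - replaced by _>.slice
pod_slice() {
  local qos="$1" uid="$2"
  echo "kubepods-${qos}-pod${uid//-/_}.slice"
}

# The coredns pod from the log: UID aec703d5-39d0-4c7c-8437-95f773d85d2f
pod_slice burstable aec703d5-39d0-4c7c-8437-95f773d85d2f
# → kubepods-burstable-podaec703d5_39d0_4c7c_8437_95f773d85d2f.slice
```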
Oct 8 20:40:45.156929 kubelet[2732]: I1008 20:40:45.156776 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aec703d5-39d0-4c7c-8437-95f773d85d2f-config-volume\") pod \"coredns-6f6b679f8f-x2vq2\" (UID: \"aec703d5-39d0-4c7c-8437-95f773d85d2f\") " pod="kube-system/coredns-6f6b679f8f-x2vq2" Oct 8 20:40:45.156929 kubelet[2732]: I1008 20:40:45.156829 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c268dd-337d-4967-a451-4fde3c6432ae-tigera-ca-bundle\") pod \"calico-kube-controllers-5d7bf7d4d9-xdzgz\" (UID: \"93c268dd-337d-4967-a451-4fde3c6432ae\") " pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" Oct 8 20:40:45.156929 kubelet[2732]: I1008 20:40:45.156857 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99l2p\" (UniqueName: \"kubernetes.io/projected/072c8458-9a61-4cf6-86b8-63e198a00610-kube-api-access-99l2p\") pod \"coredns-6f6b679f8f-fphvf\" (UID: \"072c8458-9a61-4cf6-86b8-63e198a00610\") " pod="kube-system/coredns-6f6b679f8f-fphvf" Oct 8 20:40:45.156929 kubelet[2732]: I1008 20:40:45.156876 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8r6l\" (UniqueName: \"kubernetes.io/projected/aec703d5-39d0-4c7c-8437-95f773d85d2f-kube-api-access-k8r6l\") pod \"coredns-6f6b679f8f-x2vq2\" (UID: \"aec703d5-39d0-4c7c-8437-95f773d85d2f\") " pod="kube-system/coredns-6f6b679f8f-x2vq2" Oct 8 20:40:45.156929 kubelet[2732]: I1008 20:40:45.156898 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbt6\" (UniqueName: \"kubernetes.io/projected/93c268dd-337d-4967-a451-4fde3c6432ae-kube-api-access-4rbt6\") pod \"calico-kube-controllers-5d7bf7d4d9-xdzgz\" (UID: 
\"93c268dd-337d-4967-a451-4fde3c6432ae\") " pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" Oct 8 20:40:45.157612 kubelet[2732]: I1008 20:40:45.156916 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072c8458-9a61-4cf6-86b8-63e198a00610-config-volume\") pod \"coredns-6f6b679f8f-fphvf\" (UID: \"072c8458-9a61-4cf6-86b8-63e198a00610\") " pod="kube-system/coredns-6f6b679f8f-fphvf" Oct 8 20:40:45.289111 containerd[1500]: time="2024-10-08T20:40:45.289047491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x2vq2,Uid:aec703d5-39d0-4c7c-8437-95f773d85d2f,Namespace:kube-system,Attempt:0,}" Oct 8 20:40:45.294921 containerd[1500]: time="2024-10-08T20:40:45.294876433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7bf7d4d9-xdzgz,Uid:93c268dd-337d-4967-a451-4fde3c6432ae,Namespace:calico-system,Attempt:0,}" Oct 8 20:40:45.308635 containerd[1500]: time="2024-10-08T20:40:45.308568178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fphvf,Uid:072c8458-9a61-4cf6-86b8-63e198a00610,Namespace:kube-system,Attempt:0,}" Oct 8 20:40:45.492506 containerd[1500]: time="2024-10-08T20:40:45.492310158Z" level=error msg="Failed to destroy network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.495188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d-shm.mount: Deactivated successfully. 
Oct 8 20:40:45.497446 containerd[1500]: time="2024-10-08T20:40:45.495761321Z" level=error msg="Failed to destroy network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.502548 containerd[1500]: time="2024-10-08T20:40:45.501680967Z" level=error msg="encountered an error cleaning up failed sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.502548 containerd[1500]: time="2024-10-08T20:40:45.501773256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fphvf,Uid:072c8458-9a61-4cf6-86b8-63e198a00610,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.502548 containerd[1500]: time="2024-10-08T20:40:45.502020830Z" level=error msg="encountered an error cleaning up failed sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.502548 containerd[1500]: time="2024-10-08T20:40:45.502053123Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5d7bf7d4d9-xdzgz,Uid:93c268dd-337d-4967-a451-4fde3c6432ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.503242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5-shm.mount: Deactivated successfully. Oct 8 20:40:45.507948 kubelet[2732]: E1008 20:40:45.507891 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.508823 kubelet[2732]: E1008 20:40:45.507960 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" Oct 8 20:40:45.508823 kubelet[2732]: E1008 20:40:45.507980 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" Oct 8 20:40:45.508823 kubelet[2732]: E1008 20:40:45.508018 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d7bf7d4d9-xdzgz_calico-system(93c268dd-337d-4967-a451-4fde3c6432ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d7bf7d4d9-xdzgz_calico-system(93c268dd-337d-4967-a451-4fde3c6432ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" podUID="93c268dd-337d-4967-a451-4fde3c6432ae" Oct 8 20:40:45.509869 kubelet[2732]: E1008 20:40:45.508244 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.509869 kubelet[2732]: E1008 20:40:45.508262 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fphvf" Oct 8 20:40:45.509869 kubelet[2732]: E1008 20:40:45.508274 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fphvf" Oct 8 20:40:45.509959 kubelet[2732]: E1008 20:40:45.508303 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fphvf_kube-system(072c8458-9a61-4cf6-86b8-63e198a00610)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fphvf_kube-system(072c8458-9a61-4cf6-86b8-63e198a00610)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fphvf" podUID="072c8458-9a61-4cf6-86b8-63e198a00610" Oct 8 20:40:45.511984 containerd[1500]: time="2024-10-08T20:40:45.511745010Z" level=error msg="Failed to destroy network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.513112 containerd[1500]: time="2024-10-08T20:40:45.512901272Z" level=error msg="encountered an error cleaning up failed sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 8 20:40:45.513112 containerd[1500]: time="2024-10-08T20:40:45.513049738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x2vq2,Uid:aec703d5-39d0-4c7c-8437-95f773d85d2f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.513607 kubelet[2732]: E1008 20:40:45.513247 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.513607 kubelet[2732]: E1008 20:40:45.513409 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-x2vq2" Oct 8 20:40:45.513607 kubelet[2732]: E1008 20:40:45.513425 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-x2vq2" Oct 8 20:40:45.513689 
kubelet[2732]: E1008 20:40:45.513449 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-x2vq2_kube-system(aec703d5-39d0-4c7c-8437-95f773d85d2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-x2vq2_kube-system(aec703d5-39d0-4c7c-8437-95f773d85d2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-x2vq2" podUID="aec703d5-39d0-4c7c-8437-95f773d85d2f" Oct 8 20:40:45.516114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3-shm.mount: Deactivated successfully. Oct 8 20:40:45.518088 systemd[1]: Created slice kubepods-besteffort-pod25d59088_7c89_4335_a16e_1df4714e04f3.slice - libcontainer container kubepods-besteffort-pod25d59088_7c89_4335_a16e_1df4714e04f3.slice. 
Oct 8 20:40:45.522187 containerd[1500]: time="2024-10-08T20:40:45.521876762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-296dq,Uid:25d59088-7c89-4335-a16e-1df4714e04f3,Namespace:calico-system,Attempt:0,}" Oct 8 20:40:45.580741 containerd[1500]: time="2024-10-08T20:40:45.580677564Z" level=error msg="Failed to destroy network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.581338 containerd[1500]: time="2024-10-08T20:40:45.581019321Z" level=error msg="encountered an error cleaning up failed sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.581338 containerd[1500]: time="2024-10-08T20:40:45.581063506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-296dq,Uid:25d59088-7c89-4335-a16e-1df4714e04f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.581422 kubelet[2732]: E1008 20:40:45.581335 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.581422 kubelet[2732]: E1008 20:40:45.581387 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:45.581484 kubelet[2732]: E1008 20:40:45.581415 2732 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-296dq" Oct 8 20:40:45.581484 kubelet[2732]: E1008 20:40:45.581457 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-296dq_calico-system(25d59088-7c89-4335-a16e-1df4714e04f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-296dq_calico-system(25d59088-7c89-4335-a16e-1df4714e04f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:45.596295 containerd[1500]: time="2024-10-08T20:40:45.595364511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:40:45.596354 
kubelet[2732]: I1008 20:40:45.595732 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:40:45.599784 kubelet[2732]: I1008 20:40:45.599399 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:40:45.600379 containerd[1500]: time="2024-10-08T20:40:45.600152183Z" level=info msg="StopPodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\"" Oct 8 20:40:45.604733 containerd[1500]: time="2024-10-08T20:40:45.602084636Z" level=info msg="Ensure that sandbox 8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d in task-service has been cleanup successfully" Oct 8 20:40:45.604794 kubelet[2732]: I1008 20:40:45.603664 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:40:45.605522 kubelet[2732]: I1008 20:40:45.605142 2732 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:40:45.605561 containerd[1500]: time="2024-10-08T20:40:45.605495212Z" level=info msg="StopPodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\"" Oct 8 20:40:45.605634 containerd[1500]: time="2024-10-08T20:40:45.605614040Z" level=info msg="StopPodSandbox for \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\"" Oct 8 20:40:45.605846 containerd[1500]: time="2024-10-08T20:40:45.605828452Z" level=info msg="Ensure that sandbox 9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5 in task-service has been cleanup successfully" Oct 8 20:40:45.606173 containerd[1500]: time="2024-10-08T20:40:45.605967119Z" level=info msg="Ensure that sandbox 
485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713 in task-service has been cleanup successfully" Oct 8 20:40:45.608689 containerd[1500]: time="2024-10-08T20:40:45.606039748Z" level=info msg="StopPodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\"" Oct 8 20:40:45.608689 containerd[1500]: time="2024-10-08T20:40:45.608540625Z" level=info msg="Ensure that sandbox da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3 in task-service has been cleanup successfully" Oct 8 20:40:45.677004 containerd[1500]: time="2024-10-08T20:40:45.676940976Z" level=error msg="StopPodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" failed" error="failed to destroy network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.677510 kubelet[2732]: E1008 20:40:45.677324 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:40:45.677510 kubelet[2732]: E1008 20:40:45.677377 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3"} Oct 8 20:40:45.677510 kubelet[2732]: E1008 20:40:45.677431 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aec703d5-39d0-4c7c-8437-95f773d85d2f\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:40:45.677510 kubelet[2732]: E1008 20:40:45.677455 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aec703d5-39d0-4c7c-8437-95f773d85d2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-x2vq2" podUID="aec703d5-39d0-4c7c-8437-95f773d85d2f" Oct 8 20:40:45.680132 containerd[1500]: time="2024-10-08T20:40:45.680108123Z" level=error msg="StopPodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" failed" error="failed to destroy network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.680496 kubelet[2732]: E1008 20:40:45.680334 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:40:45.680496 
kubelet[2732]: E1008 20:40:45.680387 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713"} Oct 8 20:40:45.680496 kubelet[2732]: E1008 20:40:45.680423 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25d59088-7c89-4335-a16e-1df4714e04f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:40:45.680496 kubelet[2732]: E1008 20:40:45.680469 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25d59088-7c89-4335-a16e-1df4714e04f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-296dq" podUID="25d59088-7c89-4335-a16e-1df4714e04f3" Oct 8 20:40:45.682488 containerd[1500]: time="2024-10-08T20:40:45.682466726Z" level=error msg="StopPodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" failed" error="failed to destroy network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.682790 kubelet[2732]: E1008 20:40:45.682654 2732 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:40:45.682790 kubelet[2732]: E1008 20:40:45.682692 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d"} Oct 8 20:40:45.682790 kubelet[2732]: E1008 20:40:45.682733 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93c268dd-337d-4967-a451-4fde3c6432ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:40:45.682790 kubelet[2732]: E1008 20:40:45.682760 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93c268dd-337d-4967-a451-4fde3c6432ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" podUID="93c268dd-337d-4967-a451-4fde3c6432ae" Oct 8 20:40:45.687066 containerd[1500]: time="2024-10-08T20:40:45.687014977Z" level=error msg="StopPodSandbox for 
\"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" failed" error="failed to destroy network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:40:45.687185 kubelet[2732]: E1008 20:40:45.687113 2732 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:40:45.687185 kubelet[2732]: E1008 20:40:45.687135 2732 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5"} Oct 8 20:40:45.687185 kubelet[2732]: E1008 20:40:45.687154 2732 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"072c8458-9a61-4cf6-86b8-63e198a00610\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:40:45.687185 kubelet[2732]: E1008 20:40:45.687170 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"072c8458-9a61-4cf6-86b8-63e198a00610\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fphvf" podUID="072c8458-9a61-4cf6-86b8-63e198a00610" Oct 8 20:40:46.379455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713-shm.mount: Deactivated successfully. Oct 8 20:40:54.724474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770809350.mount: Deactivated successfully. Oct 8 20:40:54.852432 containerd[1500]: time="2024-10-08T20:40:54.820519598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:40:54.860162 containerd[1500]: time="2024-10-08T20:40:54.859295666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 9.258609064s" Oct 8 20:40:54.860162 containerd[1500]: time="2024-10-08T20:40:54.859391660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:40:54.860573 containerd[1500]: time="2024-10-08T20:40:54.860549577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:54.876445 containerd[1500]: time="2024-10-08T20:40:54.875955141Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Oct 8 20:40:54.876599 containerd[1500]: time="2024-10-08T20:40:54.876500616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:40:54.961155 containerd[1500]: time="2024-10-08T20:40:54.960869573Z" level=info msg="CreateContainer within sandbox \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:40:55.059604 containerd[1500]: time="2024-10-08T20:40:55.059383231Z" level=info msg="CreateContainer within sandbox \"095bd3e5e562c625b2ded2f7ff2d2ab81f0bf473c0245587cc91ad605f6334bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c\"" Oct 8 20:40:55.063518 containerd[1500]: time="2024-10-08T20:40:55.062949175Z" level=info msg="StartContainer for \"42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c\"" Oct 8 20:40:55.228949 systemd[1]: Started cri-containerd-42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c.scope - libcontainer container 42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c. Oct 8 20:40:55.273841 containerd[1500]: time="2024-10-08T20:40:55.273775863Z" level=info msg="StartContainer for \"42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c\" returns successfully" Oct 8 20:40:55.364489 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:40:55.366688 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 20:40:55.691565 kubelet[2732]: I1008 20:40:55.683604 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9gfkm" podStartSLOduration=1.4020159620000001 podStartE2EDuration="21.667446466s" podCreationTimestamp="2024-10-08 20:40:34 +0000 UTC" firstStartedPulling="2024-10-08 20:40:34.611788196 +0000 UTC m=+13.198267321" lastFinishedPulling="2024-10-08 20:40:54.8772187 +0000 UTC m=+33.463697825" observedRunningTime="2024-10-08 20:40:55.658221271 +0000 UTC m=+34.244700406" watchObservedRunningTime="2024-10-08 20:40:55.667446466 +0000 UTC m=+34.253925591" Oct 8 20:40:57.505113 containerd[1500]: time="2024-10-08T20:40:57.505051548Z" level=info msg="StopPodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\"" Oct 8 20:40:57.666556 systemd[1]: run-containerd-runc-k8s.io-42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c-runc.IGhs5d.mount: Deactivated successfully. Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.580 [INFO][3891] k8s.go 608: Cleaning up netns ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.583 [INFO][3891] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" iface="eth0" netns="/var/run/netns/cni-42b37466-157a-0091-39ea-a0185376e4a0" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.583 [INFO][3891] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" iface="eth0" netns="/var/run/netns/cni-42b37466-157a-0091-39ea-a0185376e4a0" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.584 [INFO][3891] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" iface="eth0" netns="/var/run/netns/cni-42b37466-157a-0091-39ea-a0185376e4a0" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.584 [INFO][3891] k8s.go 615: Releasing IP address(es) ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.585 [INFO][3891] utils.go 188: Calico CNI releasing IP address ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.722 [INFO][3897] ipam_plugin.go 417: Releasing address using handleID ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.723 [INFO][3897] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.723 [INFO][3897] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.731 [WARNING][3897] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.731 [INFO][3897] ipam_plugin.go 445: Releasing address using workloadID ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.733 [INFO][3897] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:40:57.737597 containerd[1500]: 2024-10-08 20:40:57.735 [INFO][3891] k8s.go 621: Teardown processing complete. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:40:57.739840 containerd[1500]: time="2024-10-08T20:40:57.739782713Z" level=info msg="TearDown network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" successfully" Oct 8 20:40:57.739840 containerd[1500]: time="2024-10-08T20:40:57.739837007Z" level=info msg="StopPodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" returns successfully" Oct 8 20:40:57.741273 systemd[1]: run-netns-cni\x2d42b37466\x2d157a\x2d0091\x2d39ea\x2da0185376e4a0.mount: Deactivated successfully. 
Oct 8 20:40:57.741809 containerd[1500]: time="2024-10-08T20:40:57.741492453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-296dq,Uid:25d59088-7c89-4335-a16e-1df4714e04f3,Namespace:calico-system,Attempt:1,}" Oct 8 20:40:57.874532 systemd-networkd[1392]: cali0a84316c805: Link UP Oct 8 20:40:57.877013 systemd-networkd[1392]: cali0a84316c805: Gained carrier Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.780 [INFO][3925] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.792 [INFO][3925] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0 csi-node-driver- calico-system 25d59088-7c89-4335-a16e-1df4714e04f3 693 0 2024-10-08 20:40:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-a-d0274495d1 csi-node-driver-296dq eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0a84316c805 [] []}} ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.792 [INFO][3925] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.821 [INFO][3936] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" HandleID="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.829 [INFO][3936] ipam_plugin.go 270: Auto assigning IP ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" HandleID="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ede10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-a-d0274495d1", "pod":"csi-node-driver-296dq", "timestamp":"2024-10-08 20:40:57.821171489 +0000 UTC"}, Hostname:"ci-4081-1-0-a-d0274495d1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.829 [INFO][3936] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.829 [INFO][3936] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.829 [INFO][3936] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-a-d0274495d1' Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.831 [INFO][3936] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.838 [INFO][3936] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.842 [INFO][3936] ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.844 [INFO][3936] ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.845 [INFO][3936] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.845 [INFO][3936] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.848 [INFO][3936] ipam.go 1685: Creating new handle: k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55 Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.854 [INFO][3936] ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.857 [INFO][3936] ipam.go 1216: Successfully claimed IPs: [192.168.11.129/26] 
block=192.168.11.128/26 handle="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.857 [INFO][3936] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.129/26] handle="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.857 [INFO][3936] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:40:57.894996 containerd[1500]: 2024-10-08 20:40:57.857 [INFO][3936] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.11.129/26] IPv6=[] ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" HandleID="k8s-pod-network.749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.896693 containerd[1500]: 2024-10-08 20:40:57.861 [INFO][3925] k8s.go 386: Populated endpoint ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"25d59088-7c89-4335-a16e-1df4714e04f3", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"", Pod:"csi-node-driver-296dq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0a84316c805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:40:57.896693 containerd[1500]: 2024-10-08 20:40:57.861 [INFO][3925] k8s.go 387: Calico CNI using IPs: [192.168.11.129/32] ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.896693 containerd[1500]: 2024-10-08 20:40:57.861 [INFO][3925] dataplane_linux.go 68: Setting the host side veth name to cali0a84316c805 ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.896693 containerd[1500]: 2024-10-08 20:40:57.871 [INFO][3925] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.896693 containerd[1500]: 2024-10-08 20:40:57.872 [INFO][3925] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"25d59088-7c89-4335-a16e-1df4714e04f3", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55", Pod:"csi-node-driver-296dq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0a84316c805", MAC:"0e:aa:c8:f8:23:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:40:57.896693 containerd[1500]: 2024-10-08 20:40:57.891 [INFO][3925] k8s.go 500: Wrote updated endpoint to datastore ContainerID="749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55" Namespace="calico-system" 
Pod="csi-node-driver-296dq" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:40:57.925703 containerd[1500]: time="2024-10-08T20:40:57.925582826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:57.925703 containerd[1500]: time="2024-10-08T20:40:57.925632752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:57.925703 containerd[1500]: time="2024-10-08T20:40:57.925643292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:57.925969 containerd[1500]: time="2024-10-08T20:40:57.925730188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:57.948952 systemd[1]: Started cri-containerd-749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55.scope - libcontainer container 749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55. 
Oct 8 20:40:57.985467 containerd[1500]: time="2024-10-08T20:40:57.985436086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-296dq,Uid:25d59088-7c89-4335-a16e-1df4714e04f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55\"" Oct 8 20:40:57.994463 containerd[1500]: time="2024-10-08T20:40:57.994428164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:40:58.505151 containerd[1500]: time="2024-10-08T20:40:58.504240063Z" level=info msg="StopPodSandbox for \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\"" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.553 [INFO][4030] k8s.go 608: Cleaning up netns ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.554 [INFO][4030] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" iface="eth0" netns="/var/run/netns/cni-d99e6ec3-ea9e-d3a9-dcd9-d170c489a8ef" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.554 [INFO][4030] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" iface="eth0" netns="/var/run/netns/cni-d99e6ec3-ea9e-d3a9-dcd9-d170c489a8ef" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.556 [INFO][4030] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" iface="eth0" netns="/var/run/netns/cni-d99e6ec3-ea9e-d3a9-dcd9-d170c489a8ef" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.557 [INFO][4030] k8s.go 615: Releasing IP address(es) ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.557 [INFO][4030] utils.go 188: Calico CNI releasing IP address ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.577 [INFO][4036] ipam_plugin.go 417: Releasing address using handleID ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.577 [INFO][4036] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.577 [INFO][4036] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.582 [WARNING][4036] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.582 [INFO][4036] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.583 [INFO][4036] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:40:58.588597 containerd[1500]: 2024-10-08 20:40:58.585 [INFO][4030] k8s.go 621: Teardown processing complete. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:40:58.588597 containerd[1500]: time="2024-10-08T20:40:58.588345822Z" level=info msg="TearDown network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" successfully" Oct 8 20:40:58.588597 containerd[1500]: time="2024-10-08T20:40:58.588423240Z" level=info msg="StopPodSandbox for \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" returns successfully" Oct 8 20:40:58.589991 containerd[1500]: time="2024-10-08T20:40:58.589170109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fphvf,Uid:072c8458-9a61-4cf6-86b8-63e198a00610,Namespace:kube-system,Attempt:1,}" Oct 8 20:40:58.663500 systemd[1]: run-netns-cni\x2dd99e6ec3\x2dea9e\x2dd3a9\x2ddcd9\x2dd170c489a8ef.mount: Deactivated successfully. 
Oct 8 20:40:58.698510 systemd-networkd[1392]: calidcdde45b865: Link UP Oct 8 20:40:58.699422 systemd-networkd[1392]: calidcdde45b865: Gained carrier Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.615 [INFO][4042] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.623 [INFO][4042] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0 coredns-6f6b679f8f- kube-system 072c8458-9a61-4cf6-86b8-63e198a00610 701 0 2024-10-08 20:40:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-a-d0274495d1 coredns-6f6b679f8f-fphvf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcdde45b865 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.623 [INFO][4042] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.657 [INFO][4054] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" HandleID="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 
20:40:58.670 [INFO][4054] ipam_plugin.go 270: Auto assigning IP ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" HandleID="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-a-d0274495d1", "pod":"coredns-6f6b679f8f-fphvf", "timestamp":"2024-10-08 20:40:58.657168874 +0000 UTC"}, Hostname:"ci-4081-1-0-a-d0274495d1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.670 [INFO][4054] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.670 [INFO][4054] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.670 [INFO][4054] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-a-d0274495d1' Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.671 [INFO][4054] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.674 [INFO][4054] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.678 [INFO][4054] ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.679 [INFO][4054] ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.682 [INFO][4054] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.682 [INFO][4054] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.683 [INFO][4054] ipam.go 1685: Creating new handle: k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9 Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.687 [INFO][4054] ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.691 [INFO][4054] ipam.go 1216: Successfully claimed IPs: [192.168.11.130/26] 
block=192.168.11.128/26 handle="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.691 [INFO][4054] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.130/26] handle="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.691 [INFO][4054] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:40:58.713211 containerd[1500]: 2024-10-08 20:40:58.691 [INFO][4054] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.11.130/26] IPv6=[] ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" HandleID="k8s-pod-network.eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.714085 containerd[1500]: 2024-10-08 20:40:58.694 [INFO][4042] k8s.go 386: Populated endpoint ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"072c8458-9a61-4cf6-86b8-63e198a00610", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"", Pod:"coredns-6f6b679f8f-fphvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcdde45b865", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:40:58.714085 containerd[1500]: 2024-10-08 20:40:58.694 [INFO][4042] k8s.go 387: Calico CNI using IPs: [192.168.11.130/32] ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.714085 containerd[1500]: 2024-10-08 20:40:58.694 [INFO][4042] dataplane_linux.go 68: Setting the host side veth name to calidcdde45b865 ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.714085 containerd[1500]: 2024-10-08 20:40:58.699 [INFO][4042] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.714085 containerd[1500]: 2024-10-08 20:40:58.699 [INFO][4042] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"072c8458-9a61-4cf6-86b8-63e198a00610", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9", Pod:"coredns-6f6b679f8f-fphvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcdde45b865", MAC:"16:8a:b5:6a:24:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:40:58.714085 containerd[1500]: 2024-10-08 20:40:58.710 [INFO][4042] k8s.go 500: Wrote updated endpoint to datastore ContainerID="eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9" Namespace="kube-system" Pod="coredns-6f6b679f8f-fphvf" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:40:58.735841 containerd[1500]: time="2024-10-08T20:40:58.735551851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:40:58.735841 containerd[1500]: time="2024-10-08T20:40:58.735607117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:40:58.735841 containerd[1500]: time="2024-10-08T20:40:58.735620543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:58.735841 containerd[1500]: time="2024-10-08T20:40:58.735767654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:40:58.761858 systemd[1]: Started cri-containerd-eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9.scope - libcontainer container eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9. 
Oct 8 20:40:58.802544 containerd[1500]: time="2024-10-08T20:40:58.802498255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fphvf,Uid:072c8458-9a61-4cf6-86b8-63e198a00610,Namespace:kube-system,Attempt:1,} returns sandbox id \"eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9\"" Oct 8 20:40:58.805191 containerd[1500]: time="2024-10-08T20:40:58.805088055Z" level=info msg="CreateContainer within sandbox \"eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:40:58.825814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036304859.mount: Deactivated successfully. Oct 8 20:40:58.826501 containerd[1500]: time="2024-10-08T20:40:58.826468082Z" level=info msg="CreateContainer within sandbox \"eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd52eb848aa79f6bcb427314b868dfdfc20777f8e911a323a47bb6d81746432f\"" Oct 8 20:40:58.828358 containerd[1500]: time="2024-10-08T20:40:58.827496488Z" level=info msg="StartContainer for \"dd52eb848aa79f6bcb427314b868dfdfc20777f8e911a323a47bb6d81746432f\"" Oct 8 20:40:58.852853 systemd[1]: Started cri-containerd-dd52eb848aa79f6bcb427314b868dfdfc20777f8e911a323a47bb6d81746432f.scope - libcontainer container dd52eb848aa79f6bcb427314b868dfdfc20777f8e911a323a47bb6d81746432f. 
Oct 8 20:40:58.881287 containerd[1500]: time="2024-10-08T20:40:58.881203349Z" level=info msg="StartContainer for \"dd52eb848aa79f6bcb427314b868dfdfc20777f8e911a323a47bb6d81746432f\" returns successfully"
Oct 8 20:40:59.689933 kubelet[2732]: I1008 20:40:59.689430 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fphvf" podStartSLOduration=31.689412317 podStartE2EDuration="31.689412317s" podCreationTimestamp="2024-10-08 20:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:40:59.676660434 +0000 UTC m=+38.263139560" watchObservedRunningTime="2024-10-08 20:40:59.689412317 +0000 UTC m=+38.275891441"
Oct 8 20:40:59.690603 containerd[1500]: time="2024-10-08T20:40:59.689632918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:59.692802 containerd[1500]: time="2024-10-08T20:40:59.692662518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Oct 8 20:40:59.693656 containerd[1500]: time="2024-10-08T20:40:59.693398144Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:59.697972 containerd[1500]: time="2024-10-08T20:40:59.697553086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:40:59.699763 containerd[1500]: time="2024-10-08T20:40:59.699547648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.70506021s"
Oct 8 20:40:59.699763 containerd[1500]: time="2024-10-08T20:40:59.699679730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 8 20:40:59.703554 containerd[1500]: time="2024-10-08T20:40:59.703335417Z" level=info msg="CreateContainer within sandbox \"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 8 20:40:59.722263 containerd[1500]: time="2024-10-08T20:40:59.721677293Z" level=info msg="CreateContainer within sandbox \"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8d634a412fe3066c7479f8a7032294f44af8374f9340e6364284bc7efcaa803d\""
Oct 8 20:40:59.723082 containerd[1500]: time="2024-10-08T20:40:59.722699828Z" level=info msg="StartContainer for \"8d634a412fe3066c7479f8a7032294f44af8374f9340e6364284bc7efcaa803d\""
Oct 8 20:40:59.761183 systemd[1]: Started cri-containerd-8d634a412fe3066c7479f8a7032294f44af8374f9340e6364284bc7efcaa803d.scope - libcontainer container 8d634a412fe3066c7479f8a7032294f44af8374f9340e6364284bc7efcaa803d.
Oct 8 20:40:59.769020 systemd-networkd[1392]: calidcdde45b865: Gained IPv6LL
Oct 8 20:40:59.795824 containerd[1500]: time="2024-10-08T20:40:59.795791935Z" level=info msg="StartContainer for \"8d634a412fe3066c7479f8a7032294f44af8374f9340e6364284bc7efcaa803d\" returns successfully"
Oct 8 20:40:59.798791 containerd[1500]: time="2024-10-08T20:40:59.798522424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 8 20:40:59.890887 systemd-networkd[1392]: cali0a84316c805: Gained IPv6LL
Oct 8 20:41:00.504287 containerd[1500]: time="2024-10-08T20:41:00.503952061Z" level=info msg="StopPodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\""
Oct 8 20:41:00.504287 containerd[1500]: time="2024-10-08T20:41:00.504030551Z" level=info msg="StopPodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\""
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.555 [INFO][4256] k8s.go 608: Cleaning up netns ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.555 [INFO][4256] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" iface="eth0" netns="/var/run/netns/cni-4b1d18e0-a732-b340-44cc-780b544b2a3f"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.555 [INFO][4256] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" iface="eth0" netns="/var/run/netns/cni-4b1d18e0-a732-b340-44cc-780b544b2a3f"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.555 [INFO][4256] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" iface="eth0" netns="/var/run/netns/cni-4b1d18e0-a732-b340-44cc-780b544b2a3f"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.555 [INFO][4256] k8s.go 615: Releasing IP address(es) ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.555 [INFO][4256] utils.go 188: Calico CNI releasing IP address ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.579 [INFO][4276] ipam_plugin.go 417: Releasing address using handleID ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.580 [INFO][4276] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.580 [INFO][4276] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.585 [WARNING][4276] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.585 [INFO][4276] ipam_plugin.go 445: Releasing address using workloadID ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.587 [INFO][4276] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:41:00.592498 containerd[1500]: 2024-10-08 20:41:00.589 [INFO][4256] k8s.go 621: Teardown processing complete. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3"
Oct 8 20:41:00.594130 containerd[1500]: time="2024-10-08T20:41:00.594082510Z" level=info msg="TearDown network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" successfully"
Oct 8 20:41:00.594130 containerd[1500]: time="2024-10-08T20:41:00.594125030Z" level=info msg="StopPodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" returns successfully"
Oct 8 20:41:00.599155 containerd[1500]: time="2024-10-08T20:41:00.598437390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x2vq2,Uid:aec703d5-39d0-4c7c-8437-95f773d85d2f,Namespace:kube-system,Attempt:1,}"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.563 [INFO][4267] k8s.go 608: Cleaning up netns ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.563 [INFO][4267] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" iface="eth0" netns="/var/run/netns/cni-c75ad68d-92d8-ee04-2a0a-575f86a1da02"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.563 [INFO][4267] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" iface="eth0" netns="/var/run/netns/cni-c75ad68d-92d8-ee04-2a0a-575f86a1da02"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.564 [INFO][4267] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" iface="eth0" netns="/var/run/netns/cni-c75ad68d-92d8-ee04-2a0a-575f86a1da02"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.564 [INFO][4267] k8s.go 615: Releasing IP address(es) ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.564 [INFO][4267] utils.go 188: Calico CNI releasing IP address ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.587 [INFO][4280] ipam_plugin.go 417: Releasing address using handleID ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.588 [INFO][4280] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.588 [INFO][4280] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.594 [WARNING][4280] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.594 [INFO][4280] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.599 [INFO][4280] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:41:00.606779 containerd[1500]: 2024-10-08 20:41:00.602 [INFO][4267] k8s.go 621: Teardown processing complete. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d"
Oct 8 20:41:00.607237 containerd[1500]: time="2024-10-08T20:41:00.606947891Z" level=info msg="TearDown network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" successfully"
Oct 8 20:41:00.607237 containerd[1500]: time="2024-10-08T20:41:00.606964783Z" level=info msg="StopPodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" returns successfully"
Oct 8 20:41:00.607593 containerd[1500]: time="2024-10-08T20:41:00.607560361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7bf7d4d9-xdzgz,Uid:93c268dd-337d-4967-a451-4fde3c6432ae,Namespace:calico-system,Attempt:1,}"
Oct 8 20:41:00.668131 systemd[1]: run-netns-cni\x2dc75ad68d\x2d92d8\x2dee04\x2d2a0a\x2d575f86a1da02.mount: Deactivated successfully.
Oct 8 20:41:00.668284 systemd[1]: run-netns-cni\x2d4b1d18e0\x2da732\x2db340\x2d44cc\x2d780b544b2a3f.mount: Deactivated successfully.
Oct 8 20:41:00.793932 systemd-networkd[1392]: cali6c7eecc3f5b: Link UP
Oct 8 20:41:00.795348 systemd-networkd[1392]: cali6c7eecc3f5b: Gained carrier
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.678 [INFO][4290] utils.go 100: File /var/lib/calico/mtu does not exist
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.709 [INFO][4290] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0 coredns-6f6b679f8f- kube-system aec703d5-39d0-4c7c-8437-95f773d85d2f 727 0 2024-10-08 20:40:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-a-d0274495d1 coredns-6f6b679f8f-x2vq2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c7eecc3f5b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.709 [INFO][4290] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.739 [INFO][4314] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" HandleID="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.753 [INFO][4314] ipam_plugin.go 270: Auto assigning IP ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" HandleID="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd430), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-a-d0274495d1", "pod":"coredns-6f6b679f8f-x2vq2", "timestamp":"2024-10-08 20:41:00.739298718 +0000 UTC"}, Hostname:"ci-4081-1-0-a-d0274495d1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.753 [INFO][4314] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.753 [INFO][4314] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.753 [INFO][4314] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-a-d0274495d1'
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.757 [INFO][4314] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.765 [INFO][4314] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.774 [INFO][4314] ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.775 [INFO][4314] ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.778 [INFO][4314] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.778 [INFO][4314] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.779 [INFO][4314] ipam.go 1685: Creating new handle: k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.783 [INFO][4314] ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.787 [INFO][4314] ipam.go 1216: Successfully claimed IPs: [192.168.11.131/26] block=192.168.11.128/26 handle="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.787 [INFO][4314] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.131/26] handle="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.787 [INFO][4314] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:41:00.807414 containerd[1500]: 2024-10-08 20:41:00.787 [INFO][4314] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.11.131/26] IPv6=[] ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" HandleID="k8s-pod-network.5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.808291 containerd[1500]: 2024-10-08 20:41:00.791 [INFO][4290] k8s.go 386: Populated endpoint ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"aec703d5-39d0-4c7c-8437-95f773d85d2f", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"", Pod:"coredns-6f6b679f8f-x2vq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c7eecc3f5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:41:00.808291 containerd[1500]: 2024-10-08 20:41:00.791 [INFO][4290] k8s.go 387: Calico CNI using IPs: [192.168.11.131/32] ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.808291 containerd[1500]: 2024-10-08 20:41:00.791 [INFO][4290] dataplane_linux.go 68: Setting the host side veth name to cali6c7eecc3f5b ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.808291 containerd[1500]: 2024-10-08 20:41:00.794 [INFO][4290] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.808291 containerd[1500]: 2024-10-08 20:41:00.795 [INFO][4290] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"aec703d5-39d0-4c7c-8437-95f773d85d2f", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039", Pod:"coredns-6f6b679f8f-x2vq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c7eecc3f5b", MAC:"c2:be:ca:3d:9d:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:41:00.808291 containerd[1500]: 2024-10-08 20:41:00.801 [INFO][4290] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039" Namespace="kube-system" Pod="coredns-6f6b679f8f-x2vq2" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0"
Oct 8 20:41:00.831347 containerd[1500]: time="2024-10-08T20:41:00.831097875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:41:00.831347 containerd[1500]: time="2024-10-08T20:41:00.831142380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:41:00.831347 containerd[1500]: time="2024-10-08T20:41:00.831152359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:41:00.831347 containerd[1500]: time="2024-10-08T20:41:00.831213305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:41:00.855836 systemd[1]: Started cri-containerd-5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039.scope - libcontainer container 5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039.
Oct 8 20:41:00.894533 systemd-networkd[1392]: cali72e14e4c1de: Link UP
Oct 8 20:41:00.894891 systemd-networkd[1392]: cali72e14e4c1de: Gained carrier
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.712 [INFO][4301] utils.go 100: File /var/lib/calico/mtu does not exist
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.726 [INFO][4301] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0 calico-kube-controllers-5d7bf7d4d9- calico-system 93c268dd-337d-4967-a451-4fde3c6432ae 728 0 2024-10-08 20:40:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d7bf7d4d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-a-d0274495d1 calico-kube-controllers-5d7bf7d4d9-xdzgz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali72e14e4c1de [] []}} ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.727 [INFO][4301] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.763 [INFO][4319] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" HandleID="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.774 [INFO][4319] ipam_plugin.go 270: Auto assigning IP ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" HandleID="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050810), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-a-d0274495d1", "pod":"calico-kube-controllers-5d7bf7d4d9-xdzgz", "timestamp":"2024-10-08 20:41:00.763680143 +0000 UTC"}, Hostname:"ci-4081-1-0-a-d0274495d1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.774 [INFO][4319] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.787 [INFO][4319] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.787 [INFO][4319] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-a-d0274495d1'
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.858 [INFO][4319] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.864 [INFO][4319] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.873 [INFO][4319] ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.875 [INFO][4319] ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.877 [INFO][4319] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.877 [INFO][4319] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.879 [INFO][4319] ipam.go 1685: Creating new handle: k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.882 [INFO][4319] ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.887 [INFO][4319] ipam.go 1216: Successfully claimed IPs: [192.168.11.132/26] block=192.168.11.128/26 handle="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.887 [INFO][4319] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.132/26] handle="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" host="ci-4081-1-0-a-d0274495d1"
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.887 [INFO][4319] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 20:41:00.915425 containerd[1500]: 2024-10-08 20:41:00.887 [INFO][4319] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.11.132/26] IPv6=[] ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" HandleID="k8s-pod-network.172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.916873 containerd[1500]: 2024-10-08 20:41:00.891 [INFO][4301] k8s.go 386: Populated endpoint ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0", GenerateName:"calico-kube-controllers-5d7bf7d4d9-", Namespace:"calico-system", SelfLink:"", UID:"93c268dd-337d-4967-a451-4fde3c6432ae", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7bf7d4d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"", Pod:"calico-kube-controllers-5d7bf7d4d9-xdzgz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72e14e4c1de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:41:00.916873 containerd[1500]: 2024-10-08 20:41:00.891 [INFO][4301] k8s.go 387: Calico CNI using IPs: [192.168.11.132/32] ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.916873 containerd[1500]: 2024-10-08 20:41:00.891 [INFO][4301] dataplane_linux.go 68: Setting the host side veth name to cali72e14e4c1de ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.916873 containerd[1500]: 2024-10-08 20:41:00.894 [INFO][4301] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.916873 containerd[1500]: 2024-10-08 20:41:00.894 [INFO][4301] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0", GenerateName:"calico-kube-controllers-5d7bf7d4d9-", Namespace:"calico-system", SelfLink:"", UID:"93c268dd-337d-4967-a451-4fde3c6432ae", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7bf7d4d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d", Pod:"calico-kube-controllers-5d7bf7d4d9-xdzgz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72e14e4c1de", MAC:"a6:97:ba:c4:05:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 20:41:00.916873 containerd[1500]: 2024-10-08 20:41:00.908 [INFO][4301] k8s.go 500: Wrote updated endpoint to datastore ContainerID="172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d" Namespace="calico-system" Pod="calico-kube-controllers-5d7bf7d4d9-xdzgz" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0"
Oct 8 20:41:00.921398 containerd[1500]: time="2024-10-08T20:41:00.921361027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x2vq2,Uid:aec703d5-39d0-4c7c-8437-95f773d85d2f,Namespace:kube-system,Attempt:1,} returns sandbox id \"5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039\""
Oct 8 20:41:00.926667 containerd[1500]: time="2024-10-08T20:41:00.926634173Z" level=info msg="CreateContainer within sandbox \"5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 20:41:00.949366 containerd[1500]: time="2024-10-08T20:41:00.949250595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 20:41:00.949366 containerd[1500]: time="2024-10-08T20:41:00.949330617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 20:41:00.949366 containerd[1500]: time="2024-10-08T20:41:00.949344945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 20:41:00.949626 containerd[1500]: time="2024-10-08T20:41:00.949447311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:41:00.952501 containerd[1500]: time="2024-10-08T20:41:00.952462702Z" level=info msg="CreateContainer within sandbox \"5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46518272920e549c224d9ecefb434fe13b91a224111c345d0fe2c8f76a9ce875\"" Oct 8 20:41:00.954567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298187117.mount: Deactivated successfully. Oct 8 20:41:00.954908 containerd[1500]: time="2024-10-08T20:41:00.954877045Z" level=info msg="StartContainer for \"46518272920e549c224d9ecefb434fe13b91a224111c345d0fe2c8f76a9ce875\"" Oct 8 20:41:00.974929 systemd[1]: Started cri-containerd-172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d.scope - libcontainer container 172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d. Oct 8 20:41:00.988874 systemd[1]: Started cri-containerd-46518272920e549c224d9ecefb434fe13b91a224111c345d0fe2c8f76a9ce875.scope - libcontainer container 46518272920e549c224d9ecefb434fe13b91a224111c345d0fe2c8f76a9ce875. 
Oct 8 20:41:01.023325 containerd[1500]: time="2024-10-08T20:41:01.023227977Z" level=info msg="StartContainer for \"46518272920e549c224d9ecefb434fe13b91a224111c345d0fe2c8f76a9ce875\" returns successfully" Oct 8 20:41:01.057406 containerd[1500]: time="2024-10-08T20:41:01.057187523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d7bf7d4d9-xdzgz,Uid:93c268dd-337d-4967-a451-4fde3c6432ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d\"" Oct 8 20:41:01.720573 kubelet[2732]: I1008 20:41:01.720234 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-x2vq2" podStartSLOduration=33.720212435 podStartE2EDuration="33.720212435s" podCreationTimestamp="2024-10-08 20:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:41:01.707090781 +0000 UTC m=+40.293569907" watchObservedRunningTime="2024-10-08 20:41:01.720212435 +0000 UTC m=+40.306691560" Oct 8 20:41:02.067095 systemd-networkd[1392]: cali6c7eecc3f5b: Gained IPv6LL Oct 8 20:41:02.193925 systemd-networkd[1392]: cali72e14e4c1de: Gained IPv6LL Oct 8 20:41:02.198830 containerd[1500]: time="2024-10-08T20:41:02.198790197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:02.199811 containerd[1500]: time="2024-10-08T20:41:02.199735673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 20:41:02.200996 containerd[1500]: time="2024-10-08T20:41:02.200946666Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:02.202614 containerd[1500]: 
time="2024-10-08T20:41:02.202570426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:02.203220 containerd[1500]: time="2024-10-08T20:41:02.203063548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.404506837s" Oct 8 20:41:02.203220 containerd[1500]: time="2024-10-08T20:41:02.203100077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 20:41:02.205698 containerd[1500]: time="2024-10-08T20:41:02.205660727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:41:02.206336 containerd[1500]: time="2024-10-08T20:41:02.206278797Z" level=info msg="CreateContainer within sandbox \"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:41:02.221450 containerd[1500]: time="2024-10-08T20:41:02.221411223Z" level=info msg="CreateContainer within sandbox \"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1c289a8c4100497c497ea7601462212f8f0a52ae1a817bf3ceb6c80ed16a9d72\"" Oct 8 20:41:02.222291 containerd[1500]: time="2024-10-08T20:41:02.221983846Z" level=info msg="StartContainer for \"1c289a8c4100497c497ea7601462212f8f0a52ae1a817bf3ceb6c80ed16a9d72\"" Oct 8 20:41:02.265113 
systemd[1]: Started cri-containerd-1c289a8c4100497c497ea7601462212f8f0a52ae1a817bf3ceb6c80ed16a9d72.scope - libcontainer container 1c289a8c4100497c497ea7601462212f8f0a52ae1a817bf3ceb6c80ed16a9d72. Oct 8 20:41:02.319084 containerd[1500]: time="2024-10-08T20:41:02.318476471Z" level=info msg="StartContainer for \"1c289a8c4100497c497ea7601462212f8f0a52ae1a817bf3ceb6c80ed16a9d72\" returns successfully" Oct 8 20:41:02.654571 kubelet[2732]: I1008 20:41:02.654504 2732 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:41:02.655273 kubelet[2732]: I1008 20:41:02.655237 2732 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:41:02.717275 kubelet[2732]: I1008 20:41:02.717205 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-296dq" podStartSLOduration=24.500037975 podStartE2EDuration="28.717184844s" podCreationTimestamp="2024-10-08 20:40:34 +0000 UTC" firstStartedPulling="2024-10-08 20:40:57.986845913 +0000 UTC m=+36.573325038" lastFinishedPulling="2024-10-08 20:41:02.203992782 +0000 UTC m=+40.790471907" observedRunningTime="2024-10-08 20:41:02.716403001 +0000 UTC m=+41.302882136" watchObservedRunningTime="2024-10-08 20:41:02.717184844 +0000 UTC m=+41.303663989" Oct 8 20:41:03.425447 kubelet[2732]: I1008 20:41:03.425383 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:41:04.301743 kernel: bpftool[4603]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:41:05.093760 containerd[1500]: time="2024-10-08T20:41:05.093617933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:05.095914 containerd[1500]: 
time="2024-10-08T20:41:05.095362120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 20:41:05.098656 containerd[1500]: time="2024-10-08T20:41:05.097095297Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:05.102877 containerd[1500]: time="2024-10-08T20:41:05.102819740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:05.104480 containerd[1500]: time="2024-10-08T20:41:05.104095584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.89837852s" Oct 8 20:41:05.105362 containerd[1500]: time="2024-10-08T20:41:05.105337052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 20:41:05.147581 containerd[1500]: time="2024-10-08T20:41:05.147498584Z" level=info msg="CreateContainer within sandbox \"172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:41:05.177461 containerd[1500]: time="2024-10-08T20:41:05.177390467Z" level=info msg="CreateContainer within sandbox \"172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c\"" Oct 8 20:41:05.178189 containerd[1500]: time="2024-10-08T20:41:05.178148292Z" level=info msg="StartContainer for \"a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c\"" Oct 8 20:41:05.212834 systemd-networkd[1392]: vxlan.calico: Link UP Oct 8 20:41:05.212851 systemd-networkd[1392]: vxlan.calico: Gained carrier Oct 8 20:41:05.302865 systemd[1]: Started cri-containerd-a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c.scope - libcontainer container a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c. Oct 8 20:41:05.376772 containerd[1500]: time="2024-10-08T20:41:05.376652418Z" level=info msg="StartContainer for \"a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c\" returns successfully" Oct 8 20:41:05.737541 kubelet[2732]: I1008 20:41:05.737337 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d7bf7d4d9-xdzgz" podStartSLOduration=27.69037646 podStartE2EDuration="31.737320353s" podCreationTimestamp="2024-10-08 20:40:34 +0000 UTC" firstStartedPulling="2024-10-08 20:41:01.059686055 +0000 UTC m=+39.646165181" lastFinishedPulling="2024-10-08 20:41:05.106629949 +0000 UTC m=+43.693109074" observedRunningTime="2024-10-08 20:41:05.736696553 +0000 UTC m=+44.323175678" watchObservedRunningTime="2024-10-08 20:41:05.737320353 +0000 UTC m=+44.323799478" Oct 8 20:41:06.545937 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Oct 8 20:41:06.728891 kubelet[2732]: I1008 20:41:06.728616 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:41:13.277945 kubelet[2732]: I1008 20:41:13.277356 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:41:14.430211 systemd[1]: run-containerd-runc-k8s.io-42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c-runc.TFeYHq.mount: Deactivated successfully. 
Oct 8 20:41:19.039526 systemd[1]: Created slice kubepods-besteffort-pod7221ac08_e3b6_4cd3_aee5_0983631fce9b.slice - libcontainer container kubepods-besteffort-pod7221ac08_e3b6_4cd3_aee5_0983631fce9b.slice. Oct 8 20:41:19.048352 systemd[1]: Created slice kubepods-besteffort-poddb6b40db_86ac_438d_bba4_7ff074284f03.slice - libcontainer container kubepods-besteffort-poddb6b40db_86ac_438d_bba4_7ff074284f03.slice. Oct 8 20:41:19.175775 kubelet[2732]: I1008 20:41:19.175701 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn86h\" (UniqueName: \"kubernetes.io/projected/7221ac08-e3b6-4cd3-aee5-0983631fce9b-kube-api-access-gn86h\") pod \"calico-apiserver-7db88955d8-qrwr7\" (UID: \"7221ac08-e3b6-4cd3-aee5-0983631fce9b\") " pod="calico-apiserver/calico-apiserver-7db88955d8-qrwr7" Oct 8 20:41:19.175775 kubelet[2732]: I1008 20:41:19.175778 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db6b40db-86ac-438d-bba4-7ff074284f03-calico-apiserver-certs\") pod \"calico-apiserver-7db88955d8-5w5mk\" (UID: \"db6b40db-86ac-438d-bba4-7ff074284f03\") " pod="calico-apiserver/calico-apiserver-7db88955d8-5w5mk" Oct 8 20:41:19.176531 kubelet[2732]: I1008 20:41:19.175802 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7221ac08-e3b6-4cd3-aee5-0983631fce9b-calico-apiserver-certs\") pod \"calico-apiserver-7db88955d8-qrwr7\" (UID: \"7221ac08-e3b6-4cd3-aee5-0983631fce9b\") " pod="calico-apiserver/calico-apiserver-7db88955d8-qrwr7" Oct 8 20:41:19.176531 kubelet[2732]: I1008 20:41:19.175822 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5288m\" (UniqueName: 
\"kubernetes.io/projected/db6b40db-86ac-438d-bba4-7ff074284f03-kube-api-access-5288m\") pod \"calico-apiserver-7db88955d8-5w5mk\" (UID: \"db6b40db-86ac-438d-bba4-7ff074284f03\") " pod="calico-apiserver/calico-apiserver-7db88955d8-5w5mk" Oct 8 20:41:19.277022 kubelet[2732]: E1008 20:41:19.276837 2732 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:41:19.277022 kubelet[2732]: E1008 20:41:19.276907 2732 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:41:19.286732 kubelet[2732]: E1008 20:41:19.286649 2732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db6b40db-86ac-438d-bba4-7ff074284f03-calico-apiserver-certs podName:db6b40db-86ac-438d-bba4-7ff074284f03 nodeName:}" failed. No retries permitted until 2024-10-08 20:41:19.776900519 +0000 UTC m=+58.363379673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/db6b40db-86ac-438d-bba4-7ff074284f03-calico-apiserver-certs") pod "calico-apiserver-7db88955d8-5w5mk" (UID: "db6b40db-86ac-438d-bba4-7ff074284f03") : secret "calico-apiserver-certs" not found Oct 8 20:41:19.286904 kubelet[2732]: E1008 20:41:19.286837 2732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7221ac08-e3b6-4cd3-aee5-0983631fce9b-calico-apiserver-certs podName:7221ac08-e3b6-4cd3-aee5-0983631fce9b nodeName:}" failed. No retries permitted until 2024-10-08 20:41:19.786822553 +0000 UTC m=+58.373301677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7221ac08-e3b6-4cd3-aee5-0983631fce9b-calico-apiserver-certs") pod "calico-apiserver-7db88955d8-qrwr7" (UID: "7221ac08-e3b6-4cd3-aee5-0983631fce9b") : secret "calico-apiserver-certs" not found Oct 8 20:41:19.945640 containerd[1500]: time="2024-10-08T20:41:19.945592683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db88955d8-qrwr7,Uid:7221ac08-e3b6-4cd3-aee5-0983631fce9b,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:41:19.955986 containerd[1500]: time="2024-10-08T20:41:19.955941429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db88955d8-5w5mk,Uid:db6b40db-86ac-438d-bba4-7ff074284f03,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:41:20.099972 systemd-networkd[1392]: cali24e4bf86408: Link UP Oct 8 20:41:20.100180 systemd-networkd[1392]: cali24e4bf86408: Gained carrier Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.013 [INFO][4854] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0 calico-apiserver-7db88955d8- calico-apiserver db6b40db-86ac-438d-bba4-7ff074284f03 857 0 2024-10-08 20:41:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7db88955d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-a-d0274495d1 calico-apiserver-7db88955d8-5w5mk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali24e4bf86408 [] []}} ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-" Oct 8 
20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.013 [INFO][4854] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.044 [INFO][4874] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" HandleID="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.062 [INFO][4874] ipam_plugin.go 270: Auto assigning IP ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" HandleID="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000116a10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-a-d0274495d1", "pod":"calico-apiserver-7db88955d8-5w5mk", "timestamp":"2024-10-08 20:41:20.044202584 +0000 UTC"}, Hostname:"ci-4081-1-0-a-d0274495d1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.062 [INFO][4874] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.062 [INFO][4874] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.062 [INFO][4874] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-a-d0274495d1' Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.065 [INFO][4874] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.070 [INFO][4874] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.074 [INFO][4874] ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.076 [INFO][4874] ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.078 [INFO][4874] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.078 [INFO][4874] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.079 [INFO][4874] ipam.go 1685: Creating new handle: k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4 Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.085 [INFO][4874] ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.092 [INFO][4874] ipam.go 1216: Successfully claimed IPs: [192.168.11.133/26] 
block=192.168.11.128/26 handle="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.092 [INFO][4874] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.133/26] handle="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.092 [INFO][4874] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:20.110017 containerd[1500]: 2024-10-08 20:41:20.092 [INFO][4874] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.11.133/26] IPv6=[] ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" HandleID="k8s-pod-network.8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.112325 containerd[1500]: 2024-10-08 20:41:20.095 [INFO][4854] k8s.go 386: Populated endpoint ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0", GenerateName:"calico-apiserver-7db88955d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"db6b40db-86ac-438d-bba4-7ff074284f03", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db88955d8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"", Pod:"calico-apiserver-7db88955d8-5w5mk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24e4bf86408", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:20.112325 containerd[1500]: 2024-10-08 20:41:20.095 [INFO][4854] k8s.go 387: Calico CNI using IPs: [192.168.11.133/32] ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.112325 containerd[1500]: 2024-10-08 20:41:20.095 [INFO][4854] dataplane_linux.go 68: Setting the host side veth name to cali24e4bf86408 ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.112325 containerd[1500]: 2024-10-08 20:41:20.097 [INFO][4854] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.112325 containerd[1500]: 2024-10-08 
20:41:20.097 [INFO][4854] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0", GenerateName:"calico-apiserver-7db88955d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"db6b40db-86ac-438d-bba4-7ff074284f03", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db88955d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4", Pod:"calico-apiserver-7db88955d8-5w5mk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24e4bf86408", MAC:"12:46:f9:e4:e7:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:20.112325 containerd[1500]: 2024-10-08 20:41:20.106 [INFO][4854] 
k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-5w5mk" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--5w5mk-eth0" Oct 8 20:41:20.153550 containerd[1500]: time="2024-10-08T20:41:20.153331865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:41:20.153550 containerd[1500]: time="2024-10-08T20:41:20.153384505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:41:20.153550 containerd[1500]: time="2024-10-08T20:41:20.153395707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:41:20.153986 containerd[1500]: time="2024-10-08T20:41:20.153475187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:41:20.171921 systemd[1]: Started cri-containerd-8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4.scope - libcontainer container 8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4. 
Oct 8 20:41:20.213066 systemd-networkd[1392]: caliedd8cff07a1: Link UP Oct 8 20:41:20.213984 systemd-networkd[1392]: caliedd8cff07a1: Gained carrier Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.017 [INFO][4850] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0 calico-apiserver-7db88955d8- calico-apiserver 7221ac08-e3b6-4cd3-aee5-0983631fce9b 856 0 2024-10-08 20:41:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7db88955d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-a-d0274495d1 calico-apiserver-7db88955d8-qrwr7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedd8cff07a1 [] []}} ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.017 [INFO][4850] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.054 [INFO][4878] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" HandleID="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.231844 containerd[1500]: 
2024-10-08 20:41:20.067 [INFO][4878] ipam_plugin.go 270: Auto assigning IP ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" HandleID="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f2730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-a-d0274495d1", "pod":"calico-apiserver-7db88955d8-qrwr7", "timestamp":"2024-10-08 20:41:20.05401662 +0000 UTC"}, Hostname:"ci-4081-1-0-a-d0274495d1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.067 [INFO][4878] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.092 [INFO][4878] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.092 [INFO][4878] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-a-d0274495d1' Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.166 [INFO][4878] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.174 [INFO][4878] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.179 [INFO][4878] ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.181 [INFO][4878] ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.185 [INFO][4878] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.185 [INFO][4878] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.187 [INFO][4878] ipam.go 1685: Creating new handle: k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618 Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.191 [INFO][4878] ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.204 [INFO][4878] ipam.go 1216: Successfully claimed IPs: [192.168.11.134/26] 
block=192.168.11.128/26 handle="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.204 [INFO][4878] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.134/26] handle="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" host="ci-4081-1-0-a-d0274495d1" Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.204 [INFO][4878] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:20.231844 containerd[1500]: 2024-10-08 20:41:20.204 [INFO][4878] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.11.134/26] IPv6=[] ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" HandleID="k8s-pod-network.b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.234598 containerd[1500]: 2024-10-08 20:41:20.208 [INFO][4850] k8s.go 386: Populated endpoint ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0", GenerateName:"calico-apiserver-7db88955d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7221ac08-e3b6-4cd3-aee5-0983631fce9b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db88955d8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"", Pod:"calico-apiserver-7db88955d8-qrwr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd8cff07a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:20.234598 containerd[1500]: 2024-10-08 20:41:20.208 [INFO][4850] k8s.go 387: Calico CNI using IPs: [192.168.11.134/32] ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.234598 containerd[1500]: 2024-10-08 20:41:20.208 [INFO][4850] dataplane_linux.go 68: Setting the host side veth name to caliedd8cff07a1 ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.234598 containerd[1500]: 2024-10-08 20:41:20.213 [INFO][4850] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.234598 containerd[1500]: 2024-10-08 
20:41:20.215 [INFO][4850] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0", GenerateName:"calico-apiserver-7db88955d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"7221ac08-e3b6-4cd3-aee5-0983631fce9b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 41, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7db88955d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618", Pod:"calico-apiserver-7db88955d8-qrwr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedd8cff07a1", MAC:"fe:f7:fe:db:47:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:20.234598 containerd[1500]: 2024-10-08 20:41:20.226 [INFO][4850] 
k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618" Namespace="calico-apiserver" Pod="calico-apiserver-7db88955d8-qrwr7" WorkloadEndpoint="ci--4081--1--0--a--d0274495d1-k8s-calico--apiserver--7db88955d8--qrwr7-eth0" Oct 8 20:41:20.254839 containerd[1500]: time="2024-10-08T20:41:20.254764958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db88955d8-5w5mk,Uid:db6b40db-86ac-438d-bba4-7ff074284f03,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4\"" Oct 8 20:41:20.256875 containerd[1500]: time="2024-10-08T20:41:20.256409455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:41:20.273458 containerd[1500]: time="2024-10-08T20:41:20.273076225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:41:20.273458 containerd[1500]: time="2024-10-08T20:41:20.273154294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:41:20.273458 containerd[1500]: time="2024-10-08T20:41:20.273178931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:41:20.273458 containerd[1500]: time="2024-10-08T20:41:20.273280503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:41:20.312970 systemd[1]: Started cri-containerd-b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618.scope - libcontainer container b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618. 
Oct 8 20:41:20.350244 containerd[1500]: time="2024-10-08T20:41:20.350198029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7db88955d8-qrwr7,Uid:7221ac08-e3b6-4cd3-aee5-0983631fce9b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618\"" Oct 8 20:41:21.494793 containerd[1500]: time="2024-10-08T20:41:21.494397153Z" level=info msg="StopPodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\"" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.533 [WARNING][5009] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"25d59088-7c89-4335-a16e-1df4714e04f3", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55", Pod:"csi-node-driver-296dq", Endpoint:"eth0", 
ServiceAccountName:"default", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0a84316c805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.533 [INFO][5009] k8s.go 608: Cleaning up netns ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.533 [INFO][5009] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" iface="eth0" netns="" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.533 [INFO][5009] k8s.go 615: Releasing IP address(es) ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.533 [INFO][5009] utils.go 188: Calico CNI releasing IP address ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.563 [INFO][5017] ipam_plugin.go 417: Releasing address using handleID ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.564 [INFO][5017] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.564 [INFO][5017] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.569 [WARNING][5017] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.569 [INFO][5017] ipam_plugin.go 445: Releasing address using workloadID ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.570 [INFO][5017] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:21.574051 containerd[1500]: 2024-10-08 20:41:21.572 [INFO][5009] k8s.go 621: Teardown processing complete. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.574961 containerd[1500]: time="2024-10-08T20:41:21.574096134Z" level=info msg="TearDown network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" successfully" Oct 8 20:41:21.574961 containerd[1500]: time="2024-10-08T20:41:21.574118436Z" level=info msg="StopPodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" returns successfully" Oct 8 20:41:21.574961 containerd[1500]: time="2024-10-08T20:41:21.574629318Z" level=info msg="RemovePodSandbox for \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\"" Oct 8 20:41:21.576480 containerd[1500]: time="2024-10-08T20:41:21.576452875Z" level=info msg="Forcibly stopping sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\"" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.615 [WARNING][5036] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"25d59088-7c89-4335-a16e-1df4714e04f3", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"749f44275e7f9aed469692975fd9cf5e83dba9c04f5186c35c69f129029f7b55", Pod:"csi-node-driver-296dq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0a84316c805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.615 [INFO][5036] k8s.go 608: Cleaning up netns ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.615 [INFO][5036] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" iface="eth0" netns="" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.615 [INFO][5036] k8s.go 615: Releasing IP address(es) ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.616 [INFO][5036] utils.go 188: Calico CNI releasing IP address ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.636 [INFO][5042] ipam_plugin.go 417: Releasing address using handleID ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.636 [INFO][5042] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.636 [INFO][5042] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.641 [WARNING][5042] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.641 [INFO][5042] ipam_plugin.go 445: Releasing address using workloadID ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" HandleID="k8s-pod-network.485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Workload="ci--4081--1--0--a--d0274495d1-k8s-csi--node--driver--296dq-eth0" Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.643 [INFO][5042] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:21.648275 containerd[1500]: 2024-10-08 20:41:21.645 [INFO][5036] k8s.go 621: Teardown processing complete. ContainerID="485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713" Oct 8 20:41:21.648786 containerd[1500]: time="2024-10-08T20:41:21.648306958Z" level=info msg="TearDown network for sandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" successfully" Oct 8 20:41:21.650626 systemd-networkd[1392]: cali24e4bf86408: Gained IPv6LL Oct 8 20:41:21.660778 containerd[1500]: time="2024-10-08T20:41:21.660698072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:41:21.660881 containerd[1500]: time="2024-10-08T20:41:21.660805075Z" level=info msg="RemovePodSandbox \"485c133705809d00e75345c27eed922511698eec28f2acdb85bcf1526ca20713\" returns successfully" Oct 8 20:41:21.662117 containerd[1500]: time="2024-10-08T20:41:21.661867646Z" level=info msg="StopPodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\"" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.696 [WARNING][5060] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0", GenerateName:"calico-kube-controllers-5d7bf7d4d9-", Namespace:"calico-system", SelfLink:"", UID:"93c268dd-337d-4967-a451-4fde3c6432ae", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7bf7d4d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d", Pod:"calico-kube-controllers-5d7bf7d4d9-xdzgz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72e14e4c1de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.697 [INFO][5060] k8s.go 608: Cleaning up netns ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.697 [INFO][5060] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" iface="eth0" netns="" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.697 [INFO][5060] k8s.go 615: Releasing IP address(es) ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.697 [INFO][5060] utils.go 188: Calico CNI releasing IP address ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.717 [INFO][5066] ipam_plugin.go 417: Releasing address using handleID ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.717 [INFO][5066] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.717 [INFO][5066] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.723 [WARNING][5066] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.723 [INFO][5066] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.725 [INFO][5066] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:21.729939 containerd[1500]: 2024-10-08 20:41:21.727 [INFO][5060] k8s.go 621: Teardown processing complete. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.731347 containerd[1500]: time="2024-10-08T20:41:21.730116953Z" level=info msg="TearDown network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" successfully" Oct 8 20:41:21.731347 containerd[1500]: time="2024-10-08T20:41:21.730327263Z" level=info msg="StopPodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" returns successfully" Oct 8 20:41:21.731347 containerd[1500]: time="2024-10-08T20:41:21.730949676Z" level=info msg="RemovePodSandbox for \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\"" Oct 8 20:41:21.731347 containerd[1500]: time="2024-10-08T20:41:21.730979804Z" level=info msg="Forcibly stopping sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\"" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.775 [WARNING][5084] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0", GenerateName:"calico-kube-controllers-5d7bf7d4d9-", Namespace:"calico-system", SelfLink:"", UID:"93c268dd-337d-4967-a451-4fde3c6432ae", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d7bf7d4d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"172ac70bac1c997dd621ff41946bf1e91d446f15c35d9c3a240a15c5bccfc81d", Pod:"calico-kube-controllers-5d7bf7d4d9-xdzgz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72e14e4c1de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.775 [INFO][5084] k8s.go 608: Cleaning up netns ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.775 [INFO][5084] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" iface="eth0" netns="" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.775 [INFO][5084] k8s.go 615: Releasing IP address(es) ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.775 [INFO][5084] utils.go 188: Calico CNI releasing IP address ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.796 [INFO][5091] ipam_plugin.go 417: Releasing address using handleID ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.797 [INFO][5091] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.797 [INFO][5091] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.803 [WARNING][5091] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.803 [INFO][5091] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" HandleID="k8s-pod-network.8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Workload="ci--4081--1--0--a--d0274495d1-k8s-calico--kube--controllers--5d7bf7d4d9--xdzgz-eth0" Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.804 [INFO][5091] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:21.808606 containerd[1500]: 2024-10-08 20:41:21.806 [INFO][5084] k8s.go 621: Teardown processing complete. ContainerID="8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d" Oct 8 20:41:21.808606 containerd[1500]: time="2024-10-08T20:41:21.808581075Z" level=info msg="TearDown network for sandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" successfully" Oct 8 20:41:21.813557 containerd[1500]: time="2024-10-08T20:41:21.813516442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:41:21.813623 containerd[1500]: time="2024-10-08T20:41:21.813564292Z" level=info msg="RemovePodSandbox \"8dc4f6ea1f752c836289ecf1aa2974d4fb639cf29798712ab954319aa362954d\" returns successfully" Oct 8 20:41:21.814140 containerd[1500]: time="2024-10-08T20:41:21.814100291Z" level=info msg="StopPodSandbox for \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\"" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.849 [WARNING][5109] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"072c8458-9a61-4cf6-86b8-63e198a00610", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9", Pod:"coredns-6f6b679f8f-fphvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcdde45b865", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.850 [INFO][5109] k8s.go 608: Cleaning up netns ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.850 [INFO][5109] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" iface="eth0" netns="" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.850 [INFO][5109] k8s.go 615: Releasing IP address(es) ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.850 [INFO][5109] utils.go 188: Calico CNI releasing IP address ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.867 [INFO][5115] ipam_plugin.go 417: Releasing address using handleID ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.868 [INFO][5115] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.868 [INFO][5115] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.872 [WARNING][5115] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.872 [INFO][5115] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.874 [INFO][5115] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:21.878227 containerd[1500]: 2024-10-08 20:41:21.876 [INFO][5109] k8s.go 621: Teardown processing complete. 
ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.879207 containerd[1500]: time="2024-10-08T20:41:21.878587595Z" level=info msg="TearDown network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" successfully" Oct 8 20:41:21.879207 containerd[1500]: time="2024-10-08T20:41:21.878611901Z" level=info msg="StopPodSandbox for \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" returns successfully" Oct 8 20:41:21.879207 containerd[1500]: time="2024-10-08T20:41:21.879144323Z" level=info msg="RemovePodSandbox for \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\"" Oct 8 20:41:21.879207 containerd[1500]: time="2024-10-08T20:41:21.879174732Z" level=info msg="Forcibly stopping sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\"" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.918 [WARNING][5133] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"072c8458-9a61-4cf6-86b8-63e198a00610", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"eaa7a6c163a44214495674cb9bd965169bdf6e08d629bcc797ede3267cbd94b9", Pod:"coredns-6f6b679f8f-fphvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcdde45b865", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.918 [INFO][5133] k8s.go 
608: Cleaning up netns ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.918 [INFO][5133] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" iface="eth0" netns="" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.918 [INFO][5133] k8s.go 615: Releasing IP address(es) ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.918 [INFO][5133] utils.go 188: Calico CNI releasing IP address ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.939 [INFO][5139] ipam_plugin.go 417: Releasing address using handleID ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.939 [INFO][5139] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.939 [INFO][5139] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.944 [WARNING][5139] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.944 [INFO][5139] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" HandleID="k8s-pod-network.9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--fphvf-eth0" Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.945 [INFO][5139] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:21.949440 containerd[1500]: 2024-10-08 20:41:21.947 [INFO][5133] k8s.go 621: Teardown processing complete. ContainerID="9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5" Oct 8 20:41:21.949994 containerd[1500]: time="2024-10-08T20:41:21.949459369Z" level=info msg="TearDown network for sandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" successfully" Oct 8 20:41:21.960954 containerd[1500]: time="2024-10-08T20:41:21.960876792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:41:21.960954 containerd[1500]: time="2024-10-08T20:41:21.960942797Z" level=info msg="RemovePodSandbox \"9b3e7dd99ec16d66e3865af39fde5c5604290610b861f68e3f23dc51996a76f5\" returns successfully" Oct 8 20:41:21.961874 containerd[1500]: time="2024-10-08T20:41:21.961642587Z" level=info msg="StopPodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\"" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:21.992 [WARNING][5158] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"aec703d5-39d0-4c7c-8437-95f773d85d2f", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039", Pod:"coredns-6f6b679f8f-x2vq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c7eecc3f5b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:21.992 [INFO][5158] k8s.go 608: Cleaning up netns ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:21.993 [INFO][5158] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" iface="eth0" netns="" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:21.993 [INFO][5158] k8s.go 615: Releasing IP address(es) ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:21.993 [INFO][5158] utils.go 188: Calico CNI releasing IP address ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.013 [INFO][5165] ipam_plugin.go 417: Releasing address using handleID ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.013 [INFO][5165] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.013 [INFO][5165] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.018 [WARNING][5165] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.018 [INFO][5165] ipam_plugin.go 445: Releasing address using workloadID ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.020 [INFO][5165] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:22.024954 containerd[1500]: 2024-10-08 20:41:22.022 [INFO][5158] k8s.go 621: Teardown processing complete. 
ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.024954 containerd[1500]: time="2024-10-08T20:41:22.024913030Z" level=info msg="TearDown network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" successfully" Oct 8 20:41:22.024954 containerd[1500]: time="2024-10-08T20:41:22.024940471Z" level=info msg="StopPodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" returns successfully" Oct 8 20:41:22.025947 containerd[1500]: time="2024-10-08T20:41:22.025904235Z" level=info msg="RemovePodSandbox for \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\"" Oct 8 20:41:22.025947 containerd[1500]: time="2024-10-08T20:41:22.025941485Z" level=info msg="Forcibly stopping sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\"" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.058 [WARNING][5183] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"aec703d5-39d0-4c7c-8437-95f773d85d2f", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 40, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-a-d0274495d1", ContainerID:"5aa9f942f751908daebe283b25e47e0f39e59afbbda476bed5f12ea86c059039", Pod:"coredns-6f6b679f8f-x2vq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c7eecc3f5b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.058 [INFO][5183] k8s.go 
608: Cleaning up netns ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.058 [INFO][5183] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" iface="eth0" netns="" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.058 [INFO][5183] k8s.go 615: Releasing IP address(es) ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.058 [INFO][5183] utils.go 188: Calico CNI releasing IP address ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.078 [INFO][5189] ipam_plugin.go 417: Releasing address using handleID ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.078 [INFO][5189] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.078 [INFO][5189] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.084 [WARNING][5189] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.084 [INFO][5189] ipam_plugin.go 445: Releasing address using workloadID ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" HandleID="k8s-pod-network.da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Workload="ci--4081--1--0--a--d0274495d1-k8s-coredns--6f6b679f8f--x2vq2-eth0" Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.085 [INFO][5189] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:41:22.091632 containerd[1500]: 2024-10-08 20:41:22.088 [INFO][5183] k8s.go 621: Teardown processing complete. ContainerID="da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3" Oct 8 20:41:22.092222 containerd[1500]: time="2024-10-08T20:41:22.091684768Z" level=info msg="TearDown network for sandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" successfully" Oct 8 20:41:22.098223 containerd[1500]: time="2024-10-08T20:41:22.098190988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:41:22.098280 containerd[1500]: time="2024-10-08T20:41:22.098243447Z" level=info msg="RemovePodSandbox \"da84760853f6f0fbe2ab4e2d21c430e11661da8481fbc08613e82eaac1c484b3\" returns successfully" Oct 8 20:41:22.161877 systemd-networkd[1392]: caliedd8cff07a1: Gained IPv6LL Oct 8 20:41:26.857396 containerd[1500]: time="2024-10-08T20:41:26.857326257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:26.858322 containerd[1500]: time="2024-10-08T20:41:26.858274268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 8 20:41:26.859400 containerd[1500]: time="2024-10-08T20:41:26.859359220Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:26.861157 containerd[1500]: time="2024-10-08T20:41:26.861137489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:26.861804 containerd[1500]: time="2024-10-08T20:41:26.861636167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 6.605197155s" Oct 8 20:41:26.861804 containerd[1500]: time="2024-10-08T20:41:26.861674109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:41:26.863556 containerd[1500]: 
time="2024-10-08T20:41:26.863190049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:41:26.864116 containerd[1500]: time="2024-10-08T20:41:26.864005609Z" level=info msg="CreateContainer within sandbox \"8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:41:26.880582 containerd[1500]: time="2024-10-08T20:41:26.880543857Z" level=info msg="CreateContainer within sandbox \"8c18cf6a6bf8a918c72decfc557a4361f26b440456ee703b9e621af50d5a83c4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e5a366e791f58710e758c6a357bdef9fbe4bfd3a3bb03ed501e9c42589a05f4\"" Oct 8 20:41:26.881764 containerd[1500]: time="2024-10-08T20:41:26.881733657Z" level=info msg="StartContainer for \"2e5a366e791f58710e758c6a357bdef9fbe4bfd3a3bb03ed501e9c42589a05f4\"" Oct 8 20:41:26.915845 systemd[1]: Started cri-containerd-2e5a366e791f58710e758c6a357bdef9fbe4bfd3a3bb03ed501e9c42589a05f4.scope - libcontainer container 2e5a366e791f58710e758c6a357bdef9fbe4bfd3a3bb03ed501e9c42589a05f4. 
Oct 8 20:41:26.958040 containerd[1500]: time="2024-10-08T20:41:26.957992440Z" level=info msg="StartContainer for \"2e5a366e791f58710e758c6a357bdef9fbe4bfd3a3bb03ed501e9c42589a05f4\" returns successfully" Oct 8 20:41:27.287631 containerd[1500]: time="2024-10-08T20:41:27.286931482Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:41:27.288604 containerd[1500]: time="2024-10-08T20:41:27.288421312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 20:41:27.290506 containerd[1500]: time="2024-10-08T20:41:27.290467280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 427.211074ms" Oct 8 20:41:27.290707 containerd[1500]: time="2024-10-08T20:41:27.290595353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:41:27.292621 containerd[1500]: time="2024-10-08T20:41:27.292483561Z" level=info msg="CreateContainer within sandbox \"b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:41:27.309309 containerd[1500]: time="2024-10-08T20:41:27.309265106Z" level=info msg="CreateContainer within sandbox \"b7822cdf01c77240289c8e4252062a7751784ecf4710629ddef438344ea8a618\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"db2b1f910a12ac1a494108b5119e19cd7975d2ca8620405241bbf48a098ee34d\"" Oct 8 20:41:27.310418 containerd[1500]: 
time="2024-10-08T20:41:27.310246430Z" level=info msg="StartContainer for \"db2b1f910a12ac1a494108b5119e19cd7975d2ca8620405241bbf48a098ee34d\"" Oct 8 20:41:27.351865 systemd[1]: Started cri-containerd-db2b1f910a12ac1a494108b5119e19cd7975d2ca8620405241bbf48a098ee34d.scope - libcontainer container db2b1f910a12ac1a494108b5119e19cd7975d2ca8620405241bbf48a098ee34d. Oct 8 20:41:27.417980 containerd[1500]: time="2024-10-08T20:41:27.417935281Z" level=info msg="StartContainer for \"db2b1f910a12ac1a494108b5119e19cd7975d2ca8620405241bbf48a098ee34d\" returns successfully" Oct 8 20:41:27.811327 kubelet[2732]: I1008 20:41:27.810319 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7db88955d8-5w5mk" podStartSLOduration=3.203661367 podStartE2EDuration="9.810302537s" podCreationTimestamp="2024-10-08 20:41:18 +0000 UTC" firstStartedPulling="2024-10-08 20:41:20.255918252 +0000 UTC m=+58.842397376" lastFinishedPulling="2024-10-08 20:41:26.862559421 +0000 UTC m=+65.449038546" observedRunningTime="2024-10-08 20:41:27.798645799 +0000 UTC m=+66.385124964" watchObservedRunningTime="2024-10-08 20:41:27.810302537 +0000 UTC m=+66.396781663" Oct 8 20:41:28.790546 kubelet[2732]: I1008 20:41:28.790502 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:41:28.791162 kubelet[2732]: I1008 20:41:28.790503 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:41:33.030003 systemd[1]: run-containerd-runc-k8s.io-a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c-runc.448qH6.mount: Deactivated successfully. 
Oct 8 20:41:42.647486 kubelet[2732]: I1008 20:41:42.647108 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:41:42.703841 kubelet[2732]: I1008 20:41:42.703195 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7db88955d8-qrwr7" podStartSLOduration=17.763770047 podStartE2EDuration="24.703174572s" podCreationTimestamp="2024-10-08 20:41:18 +0000 UTC" firstStartedPulling="2024-10-08 20:41:20.351799164 +0000 UTC m=+58.938278289" lastFinishedPulling="2024-10-08 20:41:27.291203689 +0000 UTC m=+65.877682814" observedRunningTime="2024-10-08 20:41:27.812547182 +0000 UTC m=+66.399026307" watchObservedRunningTime="2024-10-08 20:41:42.703174572 +0000 UTC m=+81.289653697" Oct 8 20:41:43.336424 systemd[1]: run-containerd-runc-k8s.io-a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c-runc.GeysHe.mount: Deactivated successfully. Oct 8 20:41:55.146437 update_engine[1475]: I20241008 20:41:55.146335 1475 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 8 20:41:55.146437 update_engine[1475]: I20241008 20:41:55.146435 1475 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 8 20:41:55.150781 update_engine[1475]: I20241008 20:41:55.150725 1475 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 8 20:41:55.152001 update_engine[1475]: I20241008 20:41:55.151973 1475 omaha_request_params.cc:62] Current group set to beta Oct 8 20:41:55.152283 update_engine[1475]: I20241008 20:41:55.152192 1475 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 8 20:41:55.152283 update_engine[1475]: I20241008 20:41:55.152212 1475 update_attempter.cc:643] Scheduling an action processor start. 
Oct 8 20:41:55.152283 update_engine[1475]: I20241008 20:41:55.152233 1475 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:41:55.152283 update_engine[1475]: I20241008 20:41:55.152271 1475 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 8 20:41:55.156313 update_engine[1475]: I20241008 20:41:55.152326 1475 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:41:55.156313 update_engine[1475]: I20241008 20:41:55.152335 1475 omaha_request_action.cc:272] Request: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: Oct 8 20:41:55.156313 update_engine[1475]: I20241008 20:41:55.152343 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:41:55.169521 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 8 20:41:55.175541 update_engine[1475]: I20241008 20:41:55.175098 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:41:55.176423 update_engine[1475]: I20241008 20:41:55.175438 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:41:55.177369 update_engine[1475]: E20241008 20:41:55.177345 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:41:55.177510 update_engine[1475]: I20241008 20:41:55.177491 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Oct 8 20:42:02.430641 kubelet[2732]: I1008 20:42:02.430397 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 20:42:05.101603 update_engine[1475]: I20241008 20:42:05.101527 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 20:42:05.102115 update_engine[1475]: I20241008 20:42:05.101811 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 20:42:05.102115 update_engine[1475]: I20241008 20:42:05.102030 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 20:42:05.102628 update_engine[1475]: E20241008 20:42:05.102591 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:42:05.102670 update_engine[1475]: I20241008 20:42:05.102648 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Oct 8 20:42:15.101680 update_engine[1475]: I20241008 20:42:15.101593 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 20:42:15.102940 update_engine[1475]: I20241008 20:42:15.102094 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 20:42:15.102940 update_engine[1475]: I20241008 20:42:15.102491 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 20:42:15.103428 update_engine[1475]: E20241008 20:42:15.103389 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:42:15.103499 update_engine[1475]: I20241008 20:42:15.103446 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Oct 8 20:42:25.101475 update_engine[1475]: I20241008 20:42:25.101339 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 20:42:25.101929 update_engine[1475]: I20241008 20:42:25.101653 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 20:42:25.101978 update_engine[1475]: I20241008 20:42:25.101909 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 20:42:25.102601 update_engine[1475]: E20241008 20:42:25.102557 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:42:25.102677 update_engine[1475]: I20241008 20:42:25.102609 1475 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 20:42:25.102677 update_engine[1475]: I20241008 20:42:25.102622 1475 omaha_request_action.cc:617] Omaha request response:
Oct 8 20:42:25.102908 update_engine[1475]: E20241008 20:42:25.102764 1475 omaha_request_action.cc:636] Omaha request network transfer failed.
Oct 8 20:42:25.102908 update_engine[1475]: I20241008 20:42:25.102790 1475 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Oct 8 20:42:25.102908 update_engine[1475]: I20241008 20:42:25.102798 1475 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 20:42:25.102908 update_engine[1475]: I20241008 20:42:25.102806 1475 update_attempter.cc:306] Processing Done.
Oct 8 20:42:25.102908 update_engine[1475]: E20241008 20:42:25.102823 1475 update_attempter.cc:619] Update failed.
Oct 8 20:42:25.102908 update_engine[1475]: I20241008 20:42:25.102838 1475 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Oct 8 20:42:25.102908 update_engine[1475]: I20241008 20:42:25.102845 1475 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Oct 8 20:42:25.102908 update_engine[1475]: I20241008 20:42:25.102854 1475 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.102931 1475 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.102953 1475 omaha_request_action.cc:271] Posting an Omaha request to disabled
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.102960 1475 omaha_request_action.cc:272] Request:
Oct 8 20:42:25.105134 update_engine[1475]:
Oct 8 20:42:25.105134 update_engine[1475]:
Oct 8 20:42:25.105134 update_engine[1475]:
Oct 8 20:42:25.105134 update_engine[1475]:
Oct 8 20:42:25.105134 update_engine[1475]:
Oct 8 20:42:25.105134 update_engine[1475]:
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.102968 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.103147 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.103343 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 20:42:25.105134 update_engine[1475]: E20241008 20:42:25.103996 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.104036 1475 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.104045 1475 omaha_request_action.cc:617] Omaha request response:
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.104054 1475 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.104060 1475 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.104068 1475 update_attempter.cc:306] Processing Done.
Oct 8 20:42:25.105134 update_engine[1475]: I20241008 20:42:25.104075 1475 update_attempter.cc:310] Error event sent.
Oct 8 20:42:25.105853 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Oct 8 20:42:25.105853 locksmithd[1515]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Oct 8 20:42:25.106159 update_engine[1475]: I20241008 20:42:25.104085 1475 update_check_scheduler.cc:74] Next update check in 43m2s
Oct 8 20:42:44.438007 systemd[1]: run-containerd-runc-k8s.io-42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c-runc.A9XbLs.mount: Deactivated successfully.
Oct 8 20:43:13.299617 systemd[1]: run-containerd-runc-k8s.io-a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c-runc.C7XYfc.mount: Deactivated successfully.
Oct 8 20:45:12.718993 systemd[1]: Started sshd@7-91.107.220.127:22-147.75.109.163:34718.service - OpenSSH per-connection server daemon (147.75.109.163:34718).
Oct 8 20:45:13.695299 sshd[5861]: Accepted publickey for core from 147.75.109.163 port 34718 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:13.697789 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:13.702209 systemd-logind[1474]: New session 8 of user core.
Oct 8 20:45:13.705826 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 20:45:14.807278 sshd[5861]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:14.812590 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit.
Oct 8 20:45:14.813325 systemd[1]: sshd@7-91.107.220.127:22-147.75.109.163:34718.service: Deactivated successfully.
Oct 8 20:45:14.817666 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 20:45:14.819468 systemd-logind[1474]: Removed session 8.
Oct 8 20:45:19.984971 systemd[1]: Started sshd@8-91.107.220.127:22-147.75.109.163:39928.service - OpenSSH per-connection server daemon (147.75.109.163:39928).
Oct 8 20:45:20.972741 sshd[5926]: Accepted publickey for core from 147.75.109.163 port 39928 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:20.974596 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:20.979902 systemd-logind[1474]: New session 9 of user core.
Oct 8 20:45:20.985856 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 20:45:21.731035 sshd[5926]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:21.734049 systemd[1]: sshd@8-91.107.220.127:22-147.75.109.163:39928.service: Deactivated successfully.
Oct 8 20:45:21.736430 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 20:45:21.738331 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit.
Oct 8 20:45:21.739319 systemd-logind[1474]: Removed session 9.
Oct 8 20:45:26.902956 systemd[1]: Started sshd@9-91.107.220.127:22-147.75.109.163:39944.service - OpenSSH per-connection server daemon (147.75.109.163:39944).
Oct 8 20:45:27.876326 sshd[5942]: Accepted publickey for core from 147.75.109.163 port 39944 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:27.878142 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:27.883156 systemd-logind[1474]: New session 10 of user core.
Oct 8 20:45:27.885862 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 20:45:28.623941 sshd[5942]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:28.627249 systemd[1]: sshd@9-91.107.220.127:22-147.75.109.163:39944.service: Deactivated successfully.
Oct 8 20:45:28.629616 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 20:45:28.631565 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit.
Oct 8 20:45:28.633984 systemd-logind[1474]: Removed session 10.
Oct 8 20:45:28.786965 systemd[1]: Started sshd@10-91.107.220.127:22-147.75.109.163:36402.service - OpenSSH per-connection server daemon (147.75.109.163:36402).
Oct 8 20:45:29.745593 sshd[5961]: Accepted publickey for core from 147.75.109.163 port 36402 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:29.747428 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:29.753187 systemd-logind[1474]: New session 11 of user core.
Oct 8 20:45:29.758852 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 20:45:30.529308 sshd[5961]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:30.535340 systemd[1]: sshd@10-91.107.220.127:22-147.75.109.163:36402.service: Deactivated successfully.
Oct 8 20:45:30.538753 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 20:45:30.540803 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit.
Oct 8 20:45:30.542774 systemd-logind[1474]: Removed session 11.
Oct 8 20:45:30.705441 systemd[1]: Started sshd@11-91.107.220.127:22-147.75.109.163:36410.service - OpenSSH per-connection server daemon (147.75.109.163:36410).
Oct 8 20:45:31.708620 sshd[5973]: Accepted publickey for core from 147.75.109.163 port 36410 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:31.710834 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:31.715921 systemd-logind[1474]: New session 12 of user core.
Oct 8 20:45:31.722949 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 20:45:32.469462 sshd[5973]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:32.473893 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit.
Oct 8 20:45:32.474794 systemd[1]: sshd@11-91.107.220.127:22-147.75.109.163:36410.service: Deactivated successfully.
Oct 8 20:45:32.478619 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 20:45:32.480012 systemd-logind[1474]: Removed session 12.
Oct 8 20:45:33.026120 systemd[1]: run-containerd-runc-k8s.io-a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c-runc.lc20Td.mount: Deactivated successfully.
Oct 8 20:45:37.640668 systemd[1]: Started sshd@12-91.107.220.127:22-147.75.109.163:57094.service - OpenSSH per-connection server daemon (147.75.109.163:57094).
Oct 8 20:45:38.648923 sshd[6010]: Accepted publickey for core from 147.75.109.163 port 57094 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:38.651007 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:38.656873 systemd-logind[1474]: New session 13 of user core.
Oct 8 20:45:38.662925 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 20:45:39.394114 sshd[6010]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:39.397272 systemd[1]: sshd@12-91.107.220.127:22-147.75.109.163:57094.service: Deactivated successfully.
Oct 8 20:45:39.399765 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 20:45:39.401409 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit.
Oct 8 20:45:39.402990 systemd-logind[1474]: Removed session 13.
Oct 8 20:45:44.566037 systemd[1]: Started sshd@13-91.107.220.127:22-147.75.109.163:57102.service - OpenSSH per-connection server daemon (147.75.109.163:57102).
Oct 8 20:45:45.559508 sshd[6069]: Accepted publickey for core from 147.75.109.163 port 57102 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:45.562804 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:45.568211 systemd-logind[1474]: New session 14 of user core.
Oct 8 20:45:45.575862 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 20:45:46.306060 sshd[6069]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:46.311470 systemd[1]: sshd@13-91.107.220.127:22-147.75.109.163:57102.service: Deactivated successfully.
Oct 8 20:45:46.314213 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 20:45:46.314998 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit.
Oct 8 20:45:46.316257 systemd-logind[1474]: Removed session 14.
Oct 8 20:45:51.472962 systemd[1]: Started sshd@14-91.107.220.127:22-147.75.109.163:37750.service - OpenSSH per-connection server daemon (147.75.109.163:37750).
Oct 8 20:45:52.432456 sshd[6088]: Accepted publickey for core from 147.75.109.163 port 37750 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:52.434352 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:52.438970 systemd-logind[1474]: New session 15 of user core.
Oct 8 20:45:52.446857 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 20:45:53.167092 sshd[6088]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:53.173273 systemd[1]: sshd@14-91.107.220.127:22-147.75.109.163:37750.service: Deactivated successfully.
Oct 8 20:45:53.175616 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 20:45:53.176731 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit.
Oct 8 20:45:53.178063 systemd-logind[1474]: Removed session 15.
Oct 8 20:45:53.334884 systemd[1]: Started sshd@15-91.107.220.127:22-147.75.109.163:37766.service - OpenSSH per-connection server daemon (147.75.109.163:37766).
Oct 8 20:45:54.327207 sshd[6102]: Accepted publickey for core from 147.75.109.163 port 37766 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:54.329133 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:54.335173 systemd-logind[1474]: New session 16 of user core.
Oct 8 20:45:54.340094 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 20:45:55.241926 sshd[6102]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:55.247529 systemd[1]: sshd@15-91.107.220.127:22-147.75.109.163:37766.service: Deactivated successfully.
Oct 8 20:45:55.250476 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 20:45:55.254338 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit.
Oct 8 20:45:55.255959 systemd-logind[1474]: Removed session 16.
Oct 8 20:45:55.415095 systemd[1]: Started sshd@16-91.107.220.127:22-147.75.109.163:37778.service - OpenSSH per-connection server daemon (147.75.109.163:37778).
Oct 8 20:45:56.401072 sshd[6125]: Accepted publickey for core from 147.75.109.163 port 37778 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:56.403560 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:56.408676 systemd-logind[1474]: New session 17 of user core.
Oct 8 20:45:56.414900 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 20:45:58.785849 sshd[6125]: pam_unix(sshd:session): session closed for user core
Oct 8 20:45:58.794700 systemd[1]: sshd@16-91.107.220.127:22-147.75.109.163:37778.service: Deactivated successfully.
Oct 8 20:45:58.796680 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 20:45:58.797331 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit.
Oct 8 20:45:58.798265 systemd-logind[1474]: Removed session 17.
Oct 8 20:45:58.954459 systemd[1]: Started sshd@17-91.107.220.127:22-147.75.109.163:40828.service - OpenSSH per-connection server daemon (147.75.109.163:40828).
Oct 8 20:45:59.932741 sshd[6144]: Accepted publickey for core from 147.75.109.163 port 40828 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:45:59.933481 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:45:59.938784 systemd-logind[1474]: New session 18 of user core.
Oct 8 20:45:59.943893 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 20:46:01.068374 sshd[6144]: pam_unix(sshd:session): session closed for user core
Oct 8 20:46:01.073185 systemd[1]: sshd@17-91.107.220.127:22-147.75.109.163:40828.service: Deactivated successfully.
Oct 8 20:46:01.075272 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 20:46:01.076168 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit.
Oct 8 20:46:01.077678 systemd-logind[1474]: Removed session 18.
Oct 8 20:46:01.232648 systemd[1]: Started sshd@18-91.107.220.127:22-147.75.109.163:40834.service - OpenSSH per-connection server daemon (147.75.109.163:40834).
Oct 8 20:46:02.209781 sshd[6163]: Accepted publickey for core from 147.75.109.163 port 40834 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:46:02.212433 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:46:02.217469 systemd-logind[1474]: New session 19 of user core.
Oct 8 20:46:02.221834 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 20:46:03.013932 sshd[6163]: pam_unix(sshd:session): session closed for user core
Oct 8 20:46:03.017691 systemd[1]: sshd@18-91.107.220.127:22-147.75.109.163:40834.service: Deactivated successfully.
Oct 8 20:46:03.020315 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 20:46:03.022425 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit.
Oct 8 20:46:03.024148 systemd-logind[1474]: Removed session 19.
Oct 8 20:46:08.190128 systemd[1]: Started sshd@19-91.107.220.127:22-147.75.109.163:56546.service - OpenSSH per-connection server daemon (147.75.109.163:56546).
Oct 8 20:46:09.158861 sshd[6178]: Accepted publickey for core from 147.75.109.163 port 56546 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:46:09.160667 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:46:09.166410 systemd-logind[1474]: New session 20 of user core.
Oct 8 20:46:09.171182 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 20:46:09.887094 sshd[6178]: pam_unix(sshd:session): session closed for user core
Oct 8 20:46:09.890660 systemd[1]: sshd@19-91.107.220.127:22-147.75.109.163:56546.service: Deactivated successfully.
Oct 8 20:46:09.893146 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 20:46:09.894877 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit.
Oct 8 20:46:09.896503 systemd-logind[1474]: Removed session 20.
Oct 8 20:46:13.313963 systemd[1]: run-containerd-runc-k8s.io-a8af9db2279d5ba7e6a815a50f79bdc661d8cd9209c4cb198792a9593245ad4c-runc.wEFryM.mount: Deactivated successfully.
Oct 8 20:46:14.434025 systemd[1]: run-containerd-runc-k8s.io-42f60d568b9a18cf9bacb777ea1cc673a0ecb3b45c2f357b58782cce0891ca1c-runc.M3bdyA.mount: Deactivated successfully.
Oct 8 20:46:15.065966 systemd[1]: Started sshd@20-91.107.220.127:22-147.75.109.163:56554.service - OpenSSH per-connection server daemon (147.75.109.163:56554).
Oct 8 20:46:16.093106 sshd[6239]: Accepted publickey for core from 147.75.109.163 port 56554 ssh2: RSA SHA256:EoL9CaxDF85fjYsOpusG7uVqMgHBgsLS/taUdK2P4Zo
Oct 8 20:46:16.096016 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 20:46:16.101151 systemd-logind[1474]: New session 21 of user core.
Oct 8 20:46:16.105854 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 20:46:16.911217 sshd[6239]: pam_unix(sshd:session): session closed for user core
Oct 8 20:46:16.915747 systemd[1]: sshd@20-91.107.220.127:22-147.75.109.163:56554.service: Deactivated successfully.
Oct 8 20:46:16.918787 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 20:46:16.919565 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit.
Oct 8 20:46:16.920762 systemd-logind[1474]: Removed session 21.
Oct 8 20:46:18.380967 systemd[1]: Started sshd@21-91.107.220.127:22-47.239.254.162:39822.service - OpenSSH per-connection server daemon (47.239.254.162:39822).
Oct 8 20:46:20.016102 sshd[6251]: Connection closed by authenticating user root 47.239.254.162 port 39822 [preauth]
Oct 8 20:46:20.018225 systemd[1]: sshd@21-91.107.220.127:22-47.239.254.162:39822.service: Deactivated successfully.