Dec 13 02:32:33.858508 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 02:32:33.858530 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:32:33.858538 kernel: BIOS-provided physical RAM map:
Dec 13 02:32:33.858544 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:32:33.858548 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:32:33.858553 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:32:33.858559 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 02:32:33.858564 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 02:32:33.858571 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 02:32:33.858576 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 02:32:33.858581 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 02:32:33.858586 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:32:33.858591 kernel: NX (Execute Disable) protection: active
Dec 13 02:32:33.858597 kernel: APIC: Static calls initialized
Dec 13 02:32:33.858605 kernel: SMBIOS 2.8 present.
Dec 13 02:32:33.858611 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 02:32:33.858616 kernel: Hypervisor detected: KVM
Dec 13 02:32:33.858622 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:32:33.858627 kernel: kvm-clock: using sched offset of 2836166803 cycles
Dec 13 02:32:33.858633 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:32:33.858639 kernel: tsc: Detected 2445.406 MHz processor
Dec 13 02:32:33.858645 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:32:33.858650 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:32:33.858658 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 02:32:33.858664 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 02:32:33.858669 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:32:33.858675 kernel: Using GB pages for direct mapping
Dec 13 02:32:33.858680 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:32:33.858686 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 02:32:33.859051 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859064 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859070 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859079 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 02:32:33.859085 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859091 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859118 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859123 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:32:33.859129 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 02:32:33.859135 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 02:32:33.859140 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 02:32:33.859152 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 02:32:33.859158 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 02:32:33.859164 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 02:32:33.859170 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 02:32:33.859175 kernel: No NUMA configuration found
Dec 13 02:32:33.859181 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 02:32:33.859189 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 02:32:33.859195 kernel: Zone ranges:
Dec 13 02:32:33.859201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:32:33.859207 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 02:32:33.859213 kernel: Normal empty
Dec 13 02:32:33.859218 kernel: Movable zone start for each node
Dec 13 02:32:33.859224 kernel: Early memory node ranges
Dec 13 02:32:33.859230 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:32:33.859236 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 02:32:33.859242 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 02:32:33.859250 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:32:33.859255 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:32:33.859261 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 02:32:33.859267 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 02:32:33.859273 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:32:33.859278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:32:33.859284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:32:33.859290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:32:33.859296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:32:33.859304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:32:33.859309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:32:33.859315 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:32:33.859321 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:32:33.859327 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:32:33.859333 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 02:32:33.859338 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 02:32:33.859344 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:32:33.859350 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:32:33.859358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:32:33.859364 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 02:32:33.859370 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 02:32:33.859375 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:32:33.859381 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 02:32:33.859388 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:32:33.859394 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:32:33.859400 kernel: random: crng init done
Dec 13 02:32:33.859408 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:32:33.859414 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:32:33.859420 kernel: Fallback order for Node 0: 0
Dec 13 02:32:33.859425 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 02:32:33.859431 kernel: Policy zone: DMA32
Dec 13 02:32:33.859437 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:32:33.859443 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 02:32:33.859449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:32:33.859455 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 02:32:33.859462 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 02:32:33.859468 kernel: Dynamic Preempt: voluntary
Dec 13 02:32:33.859474 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:32:33.859481 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:32:33.859487 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:32:33.859493 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:32:33.859499 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:32:33.859505 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:32:33.859511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:32:33.859519 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:32:33.859524 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:32:33.859530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:32:33.859536 kernel: Console: colour VGA+ 80x25
Dec 13 02:32:33.859542 kernel: printk: console [tty0] enabled
Dec 13 02:32:33.859547 kernel: printk: console [ttyS0] enabled
Dec 13 02:32:33.859553 kernel: ACPI: Core revision 20230628
Dec 13 02:32:33.859559 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 02:32:33.859565 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:32:33.859573 kernel: x2apic enabled
Dec 13 02:32:33.859579 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 02:32:33.859584 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:32:33.859590 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 02:32:33.859596 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Dec 13 02:32:33.859602 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 02:32:33.859608 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 02:32:33.859614 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 02:32:33.859620 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:32:33.859634 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:32:33.859640 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:32:33.859646 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:32:33.859655 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 02:32:33.859661 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 02:32:33.859667 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:32:33.859673 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 02:32:33.859679 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 02:32:33.859686 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 02:32:33.859692 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 02:32:33.859699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:32:33.859707 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:32:33.859713 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:32:33.859719 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:32:33.859725 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 02:32:33.859732 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:32:33.859740 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:32:33.859746 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:32:33.859752 kernel: landlock: Up and running.
Dec 13 02:32:33.859758 kernel: SELinux: Initializing.
Dec 13 02:32:33.859764 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:32:33.859770 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:32:33.859776 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 02:32:33.859782 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:32:33.859789 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:32:33.859797 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:32:33.859803 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 02:32:33.859809 kernel: ... version: 0
Dec 13 02:32:33.859815 kernel: ... bit width: 48
Dec 13 02:32:33.859821 kernel: ... generic registers: 6
Dec 13 02:32:33.859827 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:32:33.859834 kernel: ... max period: 00007fffffffffff
Dec 13 02:32:33.859840 kernel: ... fixed-purpose events: 0
Dec 13 02:32:33.859846 kernel: ... event mask: 000000000000003f
Dec 13 02:32:33.859854 kernel: signal: max sigframe size: 1776
Dec 13 02:32:33.859860 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:32:33.859866 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:32:33.859872 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:32:33.859878 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 02:32:33.859884 kernel: .... node #0, CPUs: #1
Dec 13 02:32:33.859890 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:32:33.859896 kernel: smpboot: Max logical packages: 1
Dec 13 02:32:33.859903 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Dec 13 02:32:33.859911 kernel: devtmpfs: initialized
Dec 13 02:32:33.859917 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:32:33.859923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:32:33.859929 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:32:33.859935 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:32:33.859941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:32:33.859947 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:32:33.859955 kernel: audit: type=2000 audit(1734057152.525:1): state=initialized audit_enabled=0 res=1
Dec 13 02:32:33.859966 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:32:33.859981 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:32:33.859993 kernel: cpuidle: using governor menu
Dec 13 02:32:33.860003 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:32:33.860010 kernel: dca service started, version 1.12.1
Dec 13 02:32:33.860016 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 02:32:33.860022 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:32:33.860028 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:32:33.860035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:32:33.860041 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 02:32:33.860049 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:32:33.860055 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:32:33.860061 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:32:33.860068 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:32:33.860074 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:32:33.860080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:32:33.860086 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:32:33.860092 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 02:32:33.860112 kernel: ACPI: Interpreter enabled
Dec 13 02:32:33.860134 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:32:33.860140 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:32:33.860146 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:32:33.860152 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 02:32:33.860159 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 02:32:33.860165 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:32:33.860324 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:32:33.860438 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 02:32:33.860547 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 02:32:33.860557 kernel: PCI host bridge to bus 0000:00
Dec 13 02:32:33.860664 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:32:33.860760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:32:33.860853 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:32:33.860946 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 02:32:33.861067 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 02:32:33.861195 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 02:32:33.861290 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:32:33.861440 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 02:32:33.861553 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 02:32:33.861656 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 02:32:33.861758 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 02:32:33.861865 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 02:32:33.861989 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 02:32:33.862126 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:32:33.862242 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.862346 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 02:32:33.862456 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.862558 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 02:32:33.862672 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.862774 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 02:32:33.862882 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.863008 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 02:32:33.863158 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.863271 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 02:32:33.863381 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.863483 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 02:32:33.863595 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.863698 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 02:32:33.863807 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.863910 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 02:32:33.864054 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 02:32:33.865017 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 02:32:33.865156 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 02:32:33.865263 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 02:32:33.865398 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 02:32:33.865502 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 02:32:33.865609 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 02:32:33.865719 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 02:32:33.865819 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 02:32:33.867222 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 02:32:33.867342 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 02:32:33.867451 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 02:32:33.867559 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 02:32:33.867669 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 02:32:33.867772 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 02:32:33.867873 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 02:32:33.868087 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 02:32:33.871480 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 02:32:33.871610 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 02:32:33.871730 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 02:32:33.871839 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 02:32:33.871977 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 02:32:33.872144 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 02:32:33.872263 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 02:32:33.872375 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 02:32:33.872482 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 02:32:33.872594 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 02:32:33.872721 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 02:32:33.872835 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 02:32:33.872950 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 02:32:33.873169 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 02:32:33.873366 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 02:32:33.873493 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 02:32:33.873608 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 02:32:33.873723 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 02:32:33.873827 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 02:32:33.873930 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 02:32:33.874072 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 02:32:33.874218 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 02:32:33.874330 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 02:32:33.874435 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 02:32:33.874544 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 02:32:33.874646 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 02:32:33.874655 kernel: acpiphp: Slot [0] registered
Dec 13 02:32:33.874768 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 02:32:33.874876 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 02:32:33.875007 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 02:32:33.876206 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 02:32:33.876338 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 02:32:33.876483 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 02:32:33.876592 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 02:32:33.876601 kernel: acpiphp: Slot [0-2] registered
Dec 13 02:32:33.876704 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 02:32:33.876807 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 02:32:33.876907 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 02:32:33.876916 kernel: acpiphp: Slot [0-3] registered
Dec 13 02:32:33.877057 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 02:32:33.877190 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 02:32:33.877293 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 02:32:33.877324 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:32:33.877331 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:32:33.877338 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:32:33.877344 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:32:33.877350 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 02:32:33.877357 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 02:32:33.877367 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 02:32:33.877373 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 02:32:33.877380 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 02:32:33.877386 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 02:32:33.877392 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 02:32:33.877399 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 02:32:33.877406 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 02:32:33.877412 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 02:32:33.877418 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 02:32:33.877426 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 02:32:33.877433 kernel: iommu: Default domain type: Translated
Dec 13 02:32:33.877439 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:32:33.877445 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:32:33.877451 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:32:33.877458 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:32:33.877464 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 02:32:33.877570 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 02:32:33.877672 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 02:32:33.877777 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:32:33.877786 kernel: vgaarb: loaded
Dec 13 02:32:33.877792 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 02:32:33.877799 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 02:32:33.877805 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:32:33.877812 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:32:33.877818 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:32:33.877825 kernel: pnp: PnP ACPI init
Dec 13 02:32:33.877934 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 02:32:33.877953 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:32:33.877967 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:32:33.877979 kernel: NET: Registered PF_INET protocol family
Dec 13 02:32:33.877991 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:32:33.877999 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:32:33.878005 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:32:33.878012 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:32:33.878018 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:32:33.878027 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:32:33.878034 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:32:33.878040 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:32:33.878046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:32:33.878053 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:32:33.883227 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 02:32:33.883345 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 02:32:33.883451 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 02:32:33.883562 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 02:32:33.883665 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 02:32:33.883768 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 02:32:33.883869 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 02:32:33.883997 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 02:32:33.884118 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 02:32:33.884224 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 02:32:33.884325 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 02:32:33.884432 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 02:32:33.884533 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 02:32:33.884633 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 02:32:33.884734 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 02:32:33.884835 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 02:32:33.884940 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 02:32:33.885068 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 02:32:33.889222 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 02:32:33.889383 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 02:32:33.889491 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 02:32:33.889595 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 02:32:33.889697 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 02:32:33.889799 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 02:32:33.889900 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 02:32:33.890030 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 02:32:33.890153 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 02:32:33.890259 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 02:32:33.890367 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 02:32:33.890469 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 02:32:33.890570 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 02:32:33.890671 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 02:32:33.890773 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 02:32:33.890873 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 02:32:33.891000 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 02:32:33.892161 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 02:32:33.892272 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:32:33.892368 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:32:33.892467 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:32:33.892560 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 02:32:33.892652 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 02:32:33.892743 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 02:32:33.892848 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 02:32:33.892951 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 02:32:33.893091 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 02:32:33.893218 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 02:32:33.893355 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 02:32:33.893457 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 02:32:33.893563 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 02:32:33.893661 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 02:32:33.893766 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 02:32:33.893870 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 02:32:33.893998 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 02:32:33.897174 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 02:32:33.897296 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 02:32:33.897488 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 02:32:33.897624 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 02:32:33.897739 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 02:32:33.897841 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 02:32:33.897946 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 02:32:33.898071 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 02:32:33.898207 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 02:32:33.898307 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 02:32:33.898317 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 02:32:33.898328 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:32:33.898337 kernel: Initialise system trusted keyrings
Dec 13 02:32:33.898343 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:32:33.898350 kernel: Key type asymmetric registered
Dec 13 02:32:33.898357 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:32:33.898363 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 02:32:33.898370 kernel: io scheduler mq-deadline registered
Dec 13 02:32:33.898376 kernel: io scheduler kyber registered
Dec 13 02:32:33.898383 kernel: io scheduler bfq registered
Dec 13 02:32:33.898488 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 13 02:32:33.898595 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 13 02:32:33.898697 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 13 02:32:33.898798 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 13 02:32:33.898900 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 13 02:32:33.899027 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 13 02:32:33.899160 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 13 02:32:33.899265 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 13 02:32:33.899369 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 13 02:32:33.899479 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 13 02:32:33.899582 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 13 02:32:33.899684 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 13 02:32:33.899785 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 13 02:32:33.899887 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 13 02:32:33.900017 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 13 02:32:33.900143 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 13 02:32:33.900153 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 02:32:33.900261 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 13 02:32:33.900363 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 13 02:32:33.900372 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:32:33.900379 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 13 02:32:33.900386 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:32:33.900392 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:32:33.900399 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:32:33.900406 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:32:33.900412 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:32:33.900522 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 02:32:33.900533 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:32:33.900627 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 02:32:33.900722 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T02:32:33 UTC (1734057153)
Dec 13 02:32:33.900817 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 02:32:33.900825 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 02:32:33.900832 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:32:33.900839 kernel: Segment Routing with IPv6
Dec 13 02:32:33.900849 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:32:33.900855 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:32:33.900862 kernel: Key type dns_resolver registered
Dec 13 02:32:33.900869 kernel: IPI shorthand broadcast: enabled
Dec 13 02:32:33.900876 kernel: sched_clock: Marking stable (1082007117, 133147555)->(1225119564, -9964892)
Dec 13 02:32:33.900882 kernel: registered taskstats version 1
Dec 13 02:32:33.900889 kernel: Loading compiled-in X.509 certificates
Dec 13 02:32:33.900896 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 02:32:33.900902 kernel: Key type .fscrypt registered
Dec 13 02:32:33.900911 kernel: Key type fscrypt-provisioning registered
Dec 13 02:32:33.900917 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:32:33.900926 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:32:33.900939 kernel: ima: No architecture policies found
Dec 13 02:32:33.900952 kernel: clk: Disabling unused clocks
Dec 13 02:32:33.900964 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 02:32:33.900976 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 02:32:33.900988 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 02:32:33.900995 kernel: Run /init as init process
Dec 13 02:32:33.901004 kernel: with arguments:
Dec 13 02:32:33.901011 kernel: /init
Dec 13 02:32:33.901017 kernel: with environment:
Dec 13 02:32:33.901024 kernel: HOME=/
Dec 13 02:32:33.901030 kernel: TERM=linux
Dec 13 02:32:33.901037 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:32:33.901048 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:32:33.901057 systemd[1]: Detected virtualization kvm.
Dec 13 02:32:33.901066 systemd[1]: Detected architecture x86-64.
Dec 13 02:32:33.901073 systemd[1]: Running in initrd.
Dec 13 02:32:33.901080 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:32:33.901086 systemd[1]: Hostname set to .
Dec 13 02:32:33.901115 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:32:33.901123 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:32:33.901131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:32:33.901138 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:32:33.901148 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 02:32:33.901155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:32:33.901162 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 02:32:33.901169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:32:33.901178 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 02:32:33.901185 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 02:32:33.901194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:32:33.901201 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:32:33.901208 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:32:33.901215 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:32:33.901222 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:32:33.901228 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:32:33.901235 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:32:33.901242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:32:33.901249 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 02:32:33.901258 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 02:32:33.901265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:32:33.901272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:32:33.901279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:32:33.901286 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:32:33.901292 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 02:32:33.901318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:32:33.901328 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 02:32:33.901337 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:32:33.901344 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:32:33.901351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:32:33.901358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:32:33.901365 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 02:32:33.901393 systemd-journald[187]: Collecting audit messages is disabled.
Dec 13 02:32:33.901413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:32:33.901420 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:32:33.901428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:32:33.901437 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:32:33.901444 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:32:33.901451 kernel: Bridge firewalling registered
Dec 13 02:32:33.901458 systemd-journald[187]: Journal started
Dec 13 02:32:33.901473 systemd-journald[187]: Runtime Journal (/run/log/journal/379c7626d07548b6b484af967a0cbebf) is 4.8M, max 38.4M, 33.6M free.
Dec 13 02:32:33.868294 systemd-modules-load[188]: Inserted module 'overlay'
Dec 13 02:32:33.934308 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:32:33.897580 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 13 02:32:33.934895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:32:33.935772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:32:33.942208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:32:33.944218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:32:33.953883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:32:33.956522 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:32:33.962298 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:32:33.966442 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 02:32:33.969383 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:32:33.973198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:32:33.977681 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:32:33.980308 dracut-cmdline[216]: dracut-dracut-053
Dec 13 02:32:33.983264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:32:33.984527 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:32:34.011824 systemd-resolved[228]: Positive Trust Anchors:
Dec 13 02:32:34.012263 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:32:34.012291 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:32:34.017918 systemd-resolved[228]: Defaulting to hostname 'linux'.
Dec 13 02:32:34.019005 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:32:34.019594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:32:34.054143 kernel: SCSI subsystem initialized
Dec 13 02:32:34.062115 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:32:34.072122 kernel: iscsi: registered transport (tcp)
Dec 13 02:32:34.091149 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:32:34.091227 kernel: QLogic iSCSI HBA Driver
Dec 13 02:32:34.136346 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:32:34.143317 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 02:32:34.170459 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:32:34.170525 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:32:34.173471 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 02:32:34.217133 kernel: raid6: avx2x4 gen() 32960 MB/s
Dec 13 02:32:34.234123 kernel: raid6: avx2x2 gen() 30848 MB/s
Dec 13 02:32:34.251210 kernel: raid6: avx2x1 gen() 26188 MB/s
Dec 13 02:32:34.251271 kernel: raid6: using algorithm avx2x4 gen() 32960 MB/s
Dec 13 02:32:34.269361 kernel: raid6: .... xor() 4775 MB/s, rmw enabled
Dec 13 02:32:34.269431 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:32:34.288137 kernel: xor: automatically using best checksumming function avx
Dec 13 02:32:34.415130 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 02:32:34.425752 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:32:34.430252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:32:34.443576 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Dec 13 02:32:34.447357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:32:34.455288 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 02:32:34.466489 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Dec 13 02:32:34.496248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:32:34.510230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:32:34.573735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:32:34.581276 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 02:32:34.596394 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:32:34.598696 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:32:34.599179 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:32:34.600143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:32:34.609378 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 02:32:34.622531 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:32:34.655118 kernel: scsi host0: Virtio SCSI HBA
Dec 13 02:32:34.659509 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:32:34.663120 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 02:32:34.689942 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:32:34.690122 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:32:34.690735 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:32:34.695146 kernel: libata version 3.00 loaded.
Dec 13 02:32:34.693058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:32:34.693200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:32:34.693694 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:32:34.747818 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 02:32:34.773829 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 02:32:34.773846 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 02:32:34.773987 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 02:32:34.774145 kernel: scsi host1: ahci
Dec 13 02:32:34.774283 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:32:34.774293 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:32:34.774308 kernel: scsi host2: ahci
Dec 13 02:32:34.774443 kernel: ACPI: bus type USB registered
Dec 13 02:32:34.774453 kernel: usbcore: registered new interface driver usbfs
Dec 13 02:32:34.774462 kernel: usbcore: registered new interface driver hub
Dec 13 02:32:34.774470 kernel: usbcore: registered new device driver usb
Dec 13 02:32:34.774479 kernel: scsi host3: ahci
Dec 13 02:32:34.774600 kernel: scsi host4: ahci
Dec 13 02:32:34.774721 kernel: scsi host5: ahci
Dec 13 02:32:34.774845 kernel: scsi host6: ahci
Dec 13 02:32:34.774967 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Dec 13 02:32:34.774977 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Dec 13 02:32:34.774986 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Dec 13 02:32:34.774994 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Dec 13 02:32:34.775003 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Dec 13 02:32:34.775011 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Dec 13 02:32:34.732920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:32:34.819339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:32:34.826246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:32:34.844086 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:32:35.086490 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 02:32:35.086581 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 02:32:35.086605 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 02:32:35.086625 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 02:32:35.086644 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 02:32:35.089318 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 02:32:35.089359 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 02:32:35.091171 kernel: ata1.00: applying bridge limits
Dec 13 02:32:35.092153 kernel: ata1.00: configured for UDMA/100
Dec 13 02:32:35.095136 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 02:32:35.130127 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 02:32:35.161650 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 02:32:35.161870 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 13 02:32:35.162124 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 02:32:35.162358 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 02:32:35.162547 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 02:32:35.162759 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 02:32:35.162947 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 13 02:32:35.163523 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 02:32:35.163730 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:32:35.163939 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 02:32:35.164159 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:32:35.164175 kernel: hub 1-0:1.0: USB hub found
Dec 13 02:32:35.164403 kernel: GPT:17805311 != 80003071
Dec 13 02:32:35.164424 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 02:32:35.164624 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:32:35.164640 kernel: GPT:17805311 != 80003071
Dec 13 02:32:35.164653 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 02:32:35.164933 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:32:35.164949 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:32:35.164963 kernel: hub 2-0:1.0: USB hub found
Dec 13 02:32:35.165398 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 02:32:35.165611 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 02:32:35.171609 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 02:32:35.184688 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 02:32:35.184710 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 02:32:35.200129 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (455)
Dec 13 02:32:35.200177 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (452)
Dec 13 02:32:35.201522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 02:32:35.208562 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 02:32:35.213872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 02:32:35.218604 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 02:32:35.219735 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 02:32:35.226225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 02:32:35.230861 disk-uuid[577]: Primary Header is updated. Dec 13 02:32:35.230861 disk-uuid[577]: Secondary Entries is updated. Dec 13 02:32:35.230861 disk-uuid[577]: Secondary Header is updated. Dec 13 02:32:35.237122 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:32:35.243121 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:32:35.383122 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 02:32:35.520274 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 02:32:35.525750 kernel: usbcore: registered new interface driver usbhid Dec 13 02:32:35.525780 kernel: usbhid: USB HID core driver Dec 13 02:32:35.531784 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 02:32:35.531813 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 02:32:36.245133 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:32:36.245598 disk-uuid[578]: The operation has completed successfully. Dec 13 02:32:36.288474 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:32:36.288594 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 02:32:36.301236 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 02:32:36.304767 sh[596]: Success Dec 13 02:32:36.317123 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 02:32:36.363200 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 02:32:36.376210 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 02:32:36.378313 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 02:32:36.392980 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 02:32:36.393044 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:32:36.393055 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 02:32:36.395893 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 02:32:36.395916 kernel: BTRFS info (device dm-0): using free space tree Dec 13 02:32:36.405120 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 02:32:36.406083 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 02:32:36.407052 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 02:32:36.416199 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 02:32:36.418824 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
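verity-setup here creates the dm-verity mapping /dev/mapper/usr that backs the read-only /usr partition, with the kernel picking the SHA-NI accelerated sha256 implementation. Once the system is up, the mapping can be inspected; a sketch, assuming the mapping name from this log:

  # Show the dm-verity device backing /usr: data device, hash device, verification status
  veritysetup status usr
  # Lower-level equivalent via device-mapper
  dmsetup table usr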
Dec 13 02:32:36.432944 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 02:32:36.432973 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:32:36.432983 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:32:36.437338 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:32:36.437360 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:32:36.448782 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:32:36.449582 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 02:32:36.458340 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 02:32:36.464250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 02:32:36.536470 ignition[690]: Ignition 2.19.0 Dec 13 02:32:36.537183 ignition[690]: Stage: fetch-offline Dec 13 02:32:36.537250 ignition[690]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:36.537263 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:36.537588 ignition[690]: parsed url from cmdline: "" Dec 13 02:32:36.537594 ignition[690]: no config URL provided Dec 13 02:32:36.537604 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:32:36.541788 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:32:36.538072 ignition[690]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:32:36.538079 ignition[690]: failed to fetch config: resource requires networking Dec 13 02:32:36.538317 ignition[690]: Ignition finished successfully Dec 13 02:32:36.555527 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:32:36.561225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:32:36.580836 systemd-networkd[783]: lo: Link UP Dec 13 02:32:36.580846 systemd-networkd[783]: lo: Gained carrier Dec 13 02:32:36.583325 systemd-networkd[783]: Enumeration completed Dec 13 02:32:36.583510 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:32:36.584105 systemd[1]: Reached target network.target - Network. Dec 13 02:32:36.585014 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:36.585019 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:32:36.587927 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:36.587931 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:32:36.588231 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 02:32:36.588869 systemd-networkd[783]: eth0: Link UP Dec 13 02:32:36.588873 systemd-networkd[783]: eth0: Gained carrier Dec 13 02:32:36.588880 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:36.593383 systemd-networkd[783]: eth1: Link UP Dec 13 02:32:36.593387 systemd-networkd[783]: eth1: Gained carrier Dec 13 02:32:36.593394 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
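sda6 carries the Btrfs filesystem labeled OEM; the repeated first mount / last unmount pairs in this log are ignition-setup briefly mounting it to look for vendor configuration. The same layout can be enumerated on the running machine:

  # Filesystem type, label, and UUID for every partition on the boot disk
  lsblk -f /dev/sda
  # Btrfs view of the OEM filesystem
  btrfs filesystem show /dev/sda6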
Dec 13 02:32:36.601223 ignition[785]: Ignition 2.19.0 Dec 13 02:32:36.601258 ignition[785]: Stage: fetch Dec 13 02:32:36.601480 ignition[785]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:36.601491 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:36.601583 ignition[785]: parsed url from cmdline: "" Dec 13 02:32:36.601587 ignition[785]: no config URL provided Dec 13 02:32:36.601592 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:32:36.601600 ignition[785]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:32:36.601619 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 02:32:36.601744 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 02:32:36.628170 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:32:36.707169 systemd-networkd[783]: eth0: DHCPv4 address 78.47.218.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 02:32:36.801983 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 02:32:36.807159 ignition[785]: GET result: OK Dec 13 02:32:36.807266 ignition[785]: parsing config with SHA512: a06b328bc9f3e4a8dd79c7ac777152ba968489a5cc3ff05a7797e265ab6c59d90bdb43750017de5ebedc2b90444e8ae19d3a6150cbc827581b22b3b1e5bd5f10 Dec 13 02:32:36.811434 unknown[785]: fetched base config from "system" Dec 13 02:32:36.811447 unknown[785]: fetched base config from "system" Dec 13 02:32:36.811819 ignition[785]: fetch: fetch complete Dec 13 02:32:36.811454 unknown[785]: fetched user config from "hetzner" Dec 13 02:32:36.811825 ignition[785]: fetch: fetch passed Dec 13 02:32:36.815759 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 02:32:36.811872 ignition[785]: Ignition finished successfully Dec 13 02:32:36.823319 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 02:32:36.840561 ignition[792]: Ignition 2.19.0 Dec 13 02:32:36.840575 ignition[792]: Stage: kargs Dec 13 02:32:36.840742 ignition[792]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:36.840755 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:36.841650 ignition[792]: kargs: kargs passed Dec 13 02:32:36.843774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 02:32:36.841698 ignition[792]: Ignition finished successfully Dec 13 02:32:36.859237 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 02:32:36.871945 ignition[799]: Ignition 2.19.0 Dec 13 02:32:36.872661 ignition[799]: Stage: disks Dec 13 02:32:36.872811 ignition[799]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:36.872822 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:36.873552 ignition[799]: disks: disks passed Dec 13 02:32:36.875985 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 02:32:36.873595 ignition[799]: Ignition finished successfully Dec 13 02:32:36.877167 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 02:32:36.877779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 02:32:36.878389 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:32:36.879220 systemd[1]: Reached target sysinit.target - System Initialization. 
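The fetch stage above shows Ignition's retry loop: attempt #1 races DHCP and fails with "network is unreachable", and attempt #2 succeeds once eth0 and eth1 hold leases. The endpoint is plain HTTP on the link-local metadata address, so the request can be reproduced from the booted host; a sketch using the URL from the log (ignition-validate is assumed to be available for the second step):

  # Fetch the instance userdata exactly as Ignition does on Hetzner
  curl -s http://169.254.169.254/hetzner/v1/userdata
  # Optionally sanity-check that the payload is a valid Ignition config
  curl -s http://169.254.169.254/hetzner/v1/userdata | ignition-validate /dev/stdin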
Dec 13 02:32:36.880110 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:32:36.887287 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 02:32:36.903663 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 02:32:36.907265 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 02:32:36.915205 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 02:32:36.996127 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 02:32:36.996836 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 02:32:36.997817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 02:32:37.004162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 02:32:37.008053 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 02:32:37.010932 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 02:32:37.012929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:32:37.012960 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:32:37.016881 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 02:32:37.019157 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) Dec 13 02:32:37.019181 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 02:32:37.021227 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:32:37.021295 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:32:37.026895 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:32:37.026932 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:32:37.034182 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 02:32:37.038118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 02:32:37.083811 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:32:37.085187 coreos-metadata[817]: Dec 13 02:32:37.084 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 02:32:37.086087 coreos-metadata[817]: Dec 13 02:32:37.085 INFO Fetch successful Dec 13 02:32:37.086087 coreos-metadata[817]: Dec 13 02:32:37.085 INFO wrote hostname ci-4081-2-1-b-5cf67d135c to /sysroot/etc/hostname Dec 13 02:32:37.087614 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 02:32:37.092156 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:32:37.096330 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:32:37.101352 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:32:37.191451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 02:32:37.200246 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 02:32:37.205061 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 02:32:37.210130 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 02:32:37.230476 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
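flatcar-metadata-hostname does the same trick for the hostname: it queries the metadata service and writes the answer into /sysroot/etc/hostname so the real root boots with the provider-assigned name (ci-4081-2-1-b-5cf67d135c here). Roughly the equivalent by hand, using the endpoint from the log:

  # Read the hostname the provider assigned to this server
  curl -s http://169.254.169.254/hetzner/v1/metadata/hostname
  # Persist it the way the agent does (approximate; the agent writes under /sysroot)
  curl -s http://169.254.169.254/hetzner/v1/metadata/hostname > /etc/hostname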
Dec 13 02:32:37.237562 ignition[931]: INFO : Ignition 2.19.0 Dec 13 02:32:37.238263 ignition[931]: INFO : Stage: mount Dec 13 02:32:37.238263 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:37.238263 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:37.239870 ignition[931]: INFO : mount: mount passed Dec 13 02:32:37.239870 ignition[931]: INFO : Ignition finished successfully Dec 13 02:32:37.240402 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 02:32:37.247211 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 02:32:37.391132 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 02:32:37.395298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 02:32:37.406126 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) Dec 13 02:32:37.408376 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 02:32:37.408400 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:32:37.410390 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:32:37.414232 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:32:37.414257 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:32:37.418201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 02:32:37.442127 ignition[959]: INFO : Ignition 2.19.0 Dec 13 02:32:37.443840 ignition[959]: INFO : Stage: files Dec 13 02:32:37.444287 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:37.444287 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:37.445598 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:32:37.446575 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:32:37.446575 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:32:37.451198 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:32:37.452102 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:32:37.453112 unknown[959]: wrote ssh authorized keys file for user: core Dec 13 02:32:37.453835 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:32:37.456712 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:32:37.456712 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:32:37.551921 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 02:32:37.742484 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:32:37.742484 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:32:37.744858 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 02:32:38.195427 systemd-networkd[783]: eth1: Gained IPv6LL Dec 13 02:32:38.273429 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 02:32:38.324775 systemd-networkd[783]: eth0: Gained IPv6LL Dec 13 02:32:38.526384 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:32:38.526384 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:32:38.529362 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:32:38.529362 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:32:38.529362 ignition[959]: INFO : files: files passed Dec 13 02:32:38.529362 ignition[959]: INFO : Ignition finished successfully Dec 13 02:32:38.530728 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 02:32:38.542294 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 02:32:38.545228 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 02:32:38.547168 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:32:38.547282 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 02:32:38.563754 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:32:38.563754 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:32:38.566167 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:32:38.567395 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:32:38.569031 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 02:32:38.575231 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 02:32:38.596126 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:32:38.596252 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 02:32:38.598454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 02:32:38.598940 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 02:32:38.599974 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 02:32:38.602232 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 02:32:38.614980 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:32:38.619242 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 02:32:38.628383 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:32:38.628950 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:32:38.630019 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 02:32:38.630994 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:32:38.631110 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:32:38.632198 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 02:32:38.632795 systemd[1]: Stopped target basic.target - Basic System. Dec 13 02:32:38.633799 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Dec 13 02:32:38.634795 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:32:38.635868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 02:32:38.637116 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 02:32:38.638416 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:32:38.639670 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 02:32:38.640867 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 02:32:38.642165 systemd[1]: Stopped target swap.target - Swaps. Dec 13 02:32:38.643495 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:32:38.643620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:32:38.644972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:32:38.645729 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:32:38.646814 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 02:32:38.647587 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:32:38.648892 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:32:38.648991 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 02:32:38.650624 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:32:38.650726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:32:38.651450 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:32:38.651592 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 02:32:38.652722 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 02:32:38.652862 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 02:32:38.660478 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 02:32:38.661152 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:32:38.661421 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:32:38.666307 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 02:32:38.666907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:32:38.667058 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:32:38.670275 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:32:38.670397 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:32:38.678092 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:32:38.678222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 02:32:38.692128 ignition[1013]: INFO : Ignition 2.19.0 Dec 13 02:32:38.692128 ignition[1013]: INFO : Stage: umount Dec 13 02:32:38.692128 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:32:38.692128 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:32:38.696780 ignition[1013]: INFO : umount: umount passed Dec 13 02:32:38.696780 ignition[1013]: INFO : Ignition finished successfully Dec 13 02:32:38.694766 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 13 02:32:38.698427 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:32:38.698547 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 02:32:38.699577 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:32:38.699624 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 02:32:38.700580 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:32:38.700626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 02:32:38.701530 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:32:38.701573 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 02:32:38.702449 systemd[1]: Stopped target network.target - Network. Dec 13 02:32:38.703302 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:32:38.703351 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:32:38.704236 systemd[1]: Stopped target paths.target - Path Units. Dec 13 02:32:38.705076 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:32:38.709147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:32:38.709713 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 02:32:38.710811 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 02:32:38.711759 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:32:38.711800 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 02:32:38.712619 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:32:38.712660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 02:32:38.713496 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:32:38.713541 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 02:32:38.714395 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 02:32:38.714439 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 02:32:38.715398 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 02:32:38.716278 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 02:32:38.717470 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:32:38.717563 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 02:32:38.718588 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:32:38.718680 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 02:32:38.719221 systemd-networkd[783]: eth1: DHCPv6 lease lost Dec 13 02:32:38.723169 systemd-networkd[783]: eth0: DHCPv6 lease lost Dec 13 02:32:38.724480 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:32:38.724704 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 02:32:38.726043 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:32:38.726171 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 02:32:38.728943 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:32:38.729002 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:32:38.735224 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 02:32:38.735752 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Dec 13 02:32:38.735809 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:32:38.736349 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:32:38.736394 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:32:38.736903 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:32:38.736959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 02:32:38.737987 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 02:32:38.738032 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:32:38.739179 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:32:38.750775 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:32:38.751442 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 02:32:38.752794 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:32:38.752974 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:32:38.754481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:32:38.754540 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 02:32:38.755599 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:32:38.755636 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:32:38.756562 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:32:38.756608 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:32:38.758049 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:32:38.758107 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 02:32:38.759123 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:32:38.759196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:32:38.767448 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 02:32:38.767911 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:32:38.767962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:32:38.768469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:32:38.768512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:32:38.775396 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:32:38.775496 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 02:32:38.776818 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 02:32:38.783220 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 02:32:38.789354 systemd[1]: Switching root. Dec 13 02:32:38.821213 systemd-journald[187]: Journal stopped Dec 13 02:32:39.745546 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
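"Switching root" is the pivot from the initramfs to the real root filesystem; journald in the initrd receives SIGTERM from PID 1 and stops, which is why the journal briefly goes silent here. The initrd portion of the log is preserved and can be pulled back out of the journal on the booted machine:

  # Current boot with precise timestamps, including the initrd phase above
  journalctl -b 0 -o short-precise
  # Only kernel ring-buffer lines, matching the "kernel:" entries in this log
  journalctl -b 0 -k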
Dec 13 02:32:39.745611 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:32:39.745631 kernel: SELinux: policy capability open_perms=1 Dec 13 02:32:39.745644 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:32:39.745653 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:32:39.745667 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:32:39.745676 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:32:39.745685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:32:39.745694 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:32:39.745707 kernel: audit: type=1403 audit(1734057158.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:32:39.745724 systemd[1]: Successfully loaded SELinux policy in 41.814ms. Dec 13 02:32:39.745742 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.974ms. Dec 13 02:32:39.745753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:32:39.745763 systemd[1]: Detected virtualization kvm. Dec 13 02:32:39.745773 systemd[1]: Detected architecture x86-64. Dec 13 02:32:39.745784 systemd[1]: Detected first boot. Dec 13 02:32:39.745804 systemd[1]: Hostname set to <ci-4081-2-1-b-5cf67d135c>. Dec 13 02:32:39.745835 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:32:39.745861 zram_generator::config[1056]: No configuration found. Dec 13 02:32:39.745872 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:32:39.745883 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:32:39.745893 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 02:32:39.745903 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:32:39.745913 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 02:32:39.745924 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 02:32:39.745933 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 02:32:39.745946 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 02:32:39.745956 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 02:32:39.745966 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 02:32:39.745976 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 02:32:39.745986 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 02:32:39.745996 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:32:39.746006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:32:39.746016 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 02:32:39.746026 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 02:32:39.746038 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 02:32:39.746049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:32:39.746059 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 02:32:39.746068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:32:39.746078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 02:32:39.746088 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 02:32:39.747455 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 02:32:39.747471 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 02:32:39.747483 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:32:39.747493 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:32:39.747504 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:32:39.747514 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:32:39.747523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 02:32:39.747533 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 02:32:39.747543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:32:39.747556 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:32:39.747914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:32:39.747929 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 02:32:39.747939 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 02:32:39.747949 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 02:32:39.747976 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 02:32:39.747988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:39.747998 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 02:32:39.748015 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 02:32:39.748031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 02:32:39.748042 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:32:39.748051 systemd[1]: Reached target machines.target - Containers. Dec 13 02:32:39.748063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 02:32:39.748073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:32:39.748086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:32:39.748109 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 02:32:39.748129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:32:39.748140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:32:39.748150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:32:39.748160 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 02:32:39.748169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:32:39.748179 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:32:39.748189 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:32:39.748202 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 02:32:39.748212 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:32:39.748222 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:32:39.748232 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:32:39.748242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:32:39.748252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 02:32:39.748262 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 02:32:39.748272 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:32:39.748282 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:32:39.748294 systemd[1]: Stopped verity-setup.service. Dec 13 02:32:39.748322 systemd-journald[1139]: Collecting audit messages is disabled. Dec 13 02:32:39.748342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:39.748352 kernel: fuse: init (API version 7.39) Dec 13 02:32:39.748363 systemd-journald[1139]: Journal started Dec 13 02:32:39.748381 systemd-journald[1139]: Runtime Journal (/run/log/journal/379c7626d07548b6b484af967a0cbebf) is 4.8M, max 38.4M, 33.6M free. Dec 13 02:32:39.496188 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:32:39.514714 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 02:32:39.515296 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:32:39.757116 kernel: loop: module loaded Dec 13 02:32:39.771394 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:32:39.769995 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 02:32:39.770539 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 02:32:39.771066 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 02:32:39.771679 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 02:32:39.772396 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 02:32:39.773173 kernel: ACPI: bus type drm_connector registered Dec 13 02:32:39.774527 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 02:32:39.775307 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 02:32:39.776087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:32:39.776864 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:32:39.777012 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 02:32:39.777968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:32:39.778148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:32:39.778935 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 02:32:39.779236 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:32:39.779951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:32:39.780145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:32:39.780892 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:32:39.781066 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 02:32:39.781883 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:32:39.782027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:32:39.782786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:32:39.783527 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 02:32:39.784313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 02:32:39.796337 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 02:32:39.802709 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 02:32:39.812290 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 02:32:39.812803 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:32:39.812831 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:32:39.815527 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 02:32:39.821876 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 02:32:39.826187 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 02:32:39.826746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:32:39.836277 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 02:32:39.839193 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 02:32:39.839779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:32:39.842231 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 02:32:39.846278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:32:39.851763 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:32:39.854864 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 02:32:39.867588 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 02:32:39.873479 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 02:32:39.875312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 02:32:39.876266 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 02:32:39.897952 systemd-journald[1139]: Time spent on flushing to /var/log/journal/379c7626d07548b6b484af967a0cbebf is 23.905ms for 1133 entries. 
Dec 13 02:32:39.897952 systemd-journald[1139]: System Journal (/var/log/journal/379c7626d07548b6b484af967a0cbebf) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:32:39.939316 systemd-journald[1139]: Received client request to flush runtime journal. Dec 13 02:32:39.939351 kernel: loop0: detected capacity change from 0 to 8 Dec 13 02:32:39.895916 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:32:39.906912 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 02:32:39.908807 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 02:32:39.921611 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 02:32:39.932241 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 02:32:39.944543 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 02:32:39.954948 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:32:39.969882 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:32:39.974238 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 02:32:39.976197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:32:39.984793 kernel: loop1: detected capacity change from 0 to 140768 Dec 13 02:32:39.987707 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:32:39.993894 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 02:32:40.004435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 02:32:40.038132 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 02:32:40.045356 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Dec 13 02:32:40.045768 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Dec 13 02:32:40.056415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:32:40.087184 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 02:32:40.127387 kernel: loop4: detected capacity change from 0 to 8 Dec 13 02:32:40.127459 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 02:32:40.153139 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 02:32:40.179267 kernel: loop7: detected capacity change from 0 to 210664 Dec 13 02:32:40.202276 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 02:32:40.202837 (sd-merge)[1201]: Merged extensions into '/usr'. Dec 13 02:32:40.207437 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 02:32:40.207450 systemd[1]: Reloading... Dec 13 02:32:40.303147 zram_generator::config[1227]: No configuration found. Dec 13 02:32:40.337284 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:32:40.409753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:32:40.449843 systemd[1]: Reloading finished in 241 ms. Dec 13 02:32:40.476782 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
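The (sd-merge) entries are systemd-sysext overlaying four extension images onto /usr; this is how Flatcar layers containerd, Docker, and the kubernetes-v1.30.1 sysext written during the files stage on top of the immutable base image, after which ldconfig and the Reloading pass pick up the new contents. The merge can be inspected and redone at runtime:

  # Which extension images are merged and where they came from
  systemd-sysext status
  # Re-scan /etc/extensions (and friends) and apply any changes atomically
  systemd-sysext refresh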
Dec 13 02:32:40.477651 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 02:32:40.478397 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 02:32:40.493279 systemd[1]: Starting ensure-sysext.service... Dec 13 02:32:40.495249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:32:40.504318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:32:40.510236 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Dec 13 02:32:40.510249 systemd[1]: Reloading... Dec 13 02:32:40.515975 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:32:40.516331 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 02:32:40.517452 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:32:40.517794 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 13 02:32:40.517907 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 13 02:32:40.521910 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:32:40.521923 systemd-tmpfiles[1272]: Skipping /boot Dec 13 02:32:40.534198 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:32:40.534208 systemd-tmpfiles[1272]: Skipping /boot Dec 13 02:32:40.545899 systemd-udevd[1273]: Using default interface naming scheme 'v255'. Dec 13 02:32:40.595167 zram_generator::config[1300]: No configuration found. Dec 13 02:32:40.686137 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1316) Dec 13 02:32:40.692140 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1316) Dec 13 02:32:40.756156 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1322) Dec 13 02:32:40.758129 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 02:32:40.777174 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:32:40.782641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:32:40.796165 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:32:40.848072 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 02:32:40.848540 systemd[1]: Reloading finished in 337 ms. 
Dec 13 02:32:40.856122 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 13 02:32:40.867509 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 02:32:40.872251 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 02:32:40.872494 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 02:32:40.872626 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 13 02:32:40.872777 kernel: Console: switching to colour dummy device 80x25 Dec 13 02:32:40.872795 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 02:32:40.872809 kernel: [drm] features: -context_init Dec 13 02:32:40.868618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:32:40.879976 kernel: [drm] number of scanouts: 1 Dec 13 02:32:40.880021 kernel: [drm] number of cap sets: 0 Dec 13 02:32:40.880034 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 02:32:40.880048 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:32:40.877162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:32:40.920124 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:32:40.925907 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 02:32:40.925956 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 02:32:40.926069 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 02:32:40.932882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 02:32:40.935767 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 02:32:40.942474 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:40.951962 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:32:40.957304 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 02:32:40.959376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:32:40.961722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:32:40.965430 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:32:40.970069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:32:40.970272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:32:40.972476 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 02:32:40.977860 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 02:32:40.986327 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:32:40.988981 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 02:32:40.994016 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 02:32:40.997139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
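With systemd-udevd running, the coldplug pass replays kernel uevents so device rules fire for hardware that appeared before udev started; that replay is what produces the "scanned by (udev-worker)" lines throughout this log. To observe the same event stream interactively on a running system:

  # Print kernel uevents and the resulting udev events as they happen
  udevadm monitor --kernel --udev
  # Replay add events for block devices, as the coldplug trigger does
  udevadm trigger --action=add --subsystem-match=block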
Dec 13 02:32:40.998256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:41.001296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:32:41.002144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:32:41.004148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:32:41.007506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:32:41.008230 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:32:41.009361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:32:41.023559 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:41.024651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:32:41.028281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:32:41.030282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:32:41.035772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:32:41.043715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:32:41.045855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:32:41.050336 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 02:32:41.051822 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:41.054286 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 02:32:41.056682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:32:41.056831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:32:41.057525 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:32:41.057674 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:32:41.059651 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:32:41.059797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:32:41.066544 systemd[1]: Finished ensure-sysext.service. Dec 13 02:32:41.070564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:32:41.070718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:32:41.086906 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 02:32:41.097786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:32:41.097909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:32:41.107357 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 02:32:41.110403 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 02:32:41.112639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 02:32:41.112824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:32:41.121302 augenrules[1426]: No rules Dec 13 02:32:41.123393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 02:32:41.128512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:32:41.130877 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 02:32:41.138840 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:32:41.148660 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 02:32:41.159688 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 02:32:41.187084 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:32:41.204765 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 02:32:41.209841 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 02:32:41.210964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:32:41.228245 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 02:32:41.235583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:32:41.244911 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:32:41.258164 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 02:32:41.260559 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:32:41.275294 systemd-networkd[1394]: lo: Link UP Dec 13 02:32:41.275570 systemd-networkd[1394]: lo: Gained carrier Dec 13 02:32:41.278034 systemd-networkd[1394]: Enumeration completed Dec 13 02:32:41.278318 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:32:41.280501 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:41.280508 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:32:41.281628 systemd-networkd[1394]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:41.281632 systemd-networkd[1394]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:32:41.284047 systemd-networkd[1394]: eth0: Link UP Dec 13 02:32:41.284165 systemd-networkd[1394]: eth0: Gained carrier Dec 13 02:32:41.284234 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:41.288336 systemd-networkd[1394]: eth1: Link UP Dec 13 02:32:41.288400 systemd-networkd[1394]: eth1: Gained carrier Dec 13 02:32:41.288459 systemd-networkd[1394]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:32:41.289338 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 02:32:41.289944 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
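The repeated networkd warning "based on potentially unpredictable interface name" means eth0/eth1 are matched purely by kernel name through the catch-all zz-default.network. A per-interface file pinned to a stable property avoids that; the file name and MAC address below are illustrative only, not values from this host:

    # /etc/systemd/network/10-eth0.network (illustrative)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff   # pin by hardware address, not by name

    [Network]
    DHCP=yes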
Dec 13 02:32:41.290640 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:32:41.292146 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 02:32:41.294526 systemd-resolved[1395]: Positive Trust Anchors: Dec 13 02:32:41.294784 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:32:41.294855 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:32:41.301871 systemd-resolved[1395]: Using system hostname 'ci-4081-2-1-b-5cf67d135c'. Dec 13 02:32:41.303702 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:32:41.304322 systemd[1]: Reached target network.target - Network. Dec 13 02:32:41.304732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:32:41.309638 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 02:32:41.310199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 02:32:41.310654 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 02:32:41.311251 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 02:32:41.311750 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 02:32:41.312435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 02:32:41.312949 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:32:41.313024 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:32:41.313529 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:32:41.315156 systemd-networkd[1394]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:32:41.316778 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Dec 13 02:32:41.319509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 02:32:41.324400 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 02:32:41.330977 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 02:32:41.332396 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 02:32:41.332896 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:32:41.334350 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:32:41.334784 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:32:41.334810 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:32:41.336244 systemd[1]: Starting containerd.service - containerd container runtime... 
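The "Positive Trust Anchors" line above is systemd-resolved loading its built-in DNSSEC root anchor: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). Whether that anchor is actually used for validation is controlled in resolved.conf; the snippet below is a sketch, not a file read from this host, and distribution defaults vary:

    # /etc/systemd/resolved.conf (sketch)
    [Resolve]
    DNSSEC=yes   # strict validation against the root anchor logged above;
                 # allow-downgrade validates only opportunistically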
Dec 13 02:32:41.340260 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 02:32:41.343565 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 02:32:41.348021 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 02:32:41.354261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 02:32:41.356640 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 02:32:41.358983 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 02:32:41.362074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 02:32:41.368356 jq[1463]: false Dec 13 02:32:41.370344 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 02:32:41.378267 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 02:32:41.389320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 02:32:41.395498 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 02:32:41.396634 coreos-metadata[1461]: Dec 13 02:32:41.396 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 02:32:41.396920 coreos-metadata[1461]: Dec 13 02:32:41.396 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata) Dec 13 02:32:41.397798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:32:41.399593 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:32:41.405310 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 02:32:41.409652 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 02:32:41.415715 extend-filesystems[1464]: Found loop4 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found loop5 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found loop6 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found loop7 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda1 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda2 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda3 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found usr Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda4 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda6 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda7 Dec 13 02:32:41.422327 extend-filesystems[1464]: Found sda9 Dec 13 02:32:41.422327 extend-filesystems[1464]: Checking size of /dev/sda9 Dec 13 02:32:41.419396 systemd-networkd[1394]: eth0: DHCPv4 address 78.47.218.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 02:32:41.456430 dbus-daemon[1462]: [system] SELinux support is enabled Dec 13 02:32:41.485083 extend-filesystems[1464]: Resized partition /dev/sda9 Dec 13 02:32:41.502711 update_engine[1475]: I20241213 02:32:41.455353 1475 main.cc:92] Flatcar Update Engine starting Dec 13 02:32:41.502711 update_engine[1475]: I20241213 02:32:41.501837 1475 update_check_scheduler.cc:74] Next update check in 9m21s Dec 13 02:32:41.422353 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
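The coreos-metadata agent polls Hetzner's link-local metadata service; attempt #1 fails simply because the network is not configured yet, and attempt #2 later in the log succeeds once eth0 holds a DHCP lease. The same endpoints can be queried by hand (the endpoints are taken from the log; the curl invocation itself is illustrative):

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/public-keys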
Dec 13 02:32:41.503091 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:32:41.423066 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:32:41.507250 jq[1479]: true Dec 13 02:32:41.424490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:32:41.510555 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 02:32:41.424849 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 02:32:41.510713 tar[1484]: linux-amd64/helm Dec 13 02:32:41.428800 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Dec 13 02:32:41.459127 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:32:41.470403 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:32:41.470430 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 02:32:41.473134 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:32:41.477867 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:32:41.477887 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:32:41.483784 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:32:41.485507 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:32:41.491220 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:32:41.501512 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:32:41.528139 jq[1497]: true Dec 13 02:32:41.558710 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1302) Dec 13 02:32:41.600994 systemd-logind[1474]: New seat seat0. Dec 13 02:32:41.618214 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 02:32:41.618237 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:32:41.619508 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 02:32:41.657548 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 02:32:41.678821 extend-filesystems[1504]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:32:41.678821 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 02:32:41.678821 extend-filesystems[1504]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 02:32:41.690012 extend-filesystems[1464]: Resized filesystem in /dev/sda9 Dec 13 02:32:41.690012 extend-filesystems[1464]: Found sr0 Dec 13 02:32:41.684597 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:32:41.684787 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:32:41.710349 bash[1527]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:32:41.705777 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:32:41.716000 systemd[1]: Starting sshkeys.service... 
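The resize reported by extend-filesystems works out as follows: the blocks here are 4 KiB, so the root filesystem grows from 1617920 x 4 KiB (about 6.2 GiB) to 9393147 x 4 KiB (about 35.8 GiB). The online step the unit performs corresponds to running resize2fs (the tool named in the log); doing it by hand is shown only as a sketch:

    # Online grow of the already-enlarged sda9 partition (sketch)
    resize2fs /dev/sda9
    # 9393147 blocks * 4096 B/block = 38,474,330,112 B, i.e. ~35.8 GiB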
Dec 13 02:32:41.742999 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 02:32:41.750038 containerd[1492]: time="2024-12-13T02:32:41.748027912Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:32:41.754366 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:32:41.779462 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:32:41.785513 coreos-metadata[1537]: Dec 13 02:32:41.785 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 02:32:41.787302 coreos-metadata[1537]: Dec 13 02:32:41.786 INFO Fetch successful Dec 13 02:32:41.788566 unknown[1537]: wrote ssh authorized keys file for user: core Dec 13 02:32:41.808970 containerd[1492]: time="2024-12-13T02:32:41.808677336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812202245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812225208Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812239846Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812395949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812411407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812484084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812496507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812656747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812670564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812681634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:32:41.813155 containerd[1492]: time="2024-12-13T02:32:41.812690481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:41.814127 containerd[1492]: time="2024-12-13T02:32:41.812770521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:32:41.814127 containerd[1492]: time="2024-12-13T02:32:41.812975405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:32:41.814127 containerd[1492]: time="2024-12-13T02:32:41.813069672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:32:41.814127 containerd[1492]: time="2024-12-13T02:32:41.813081614Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:32:41.814668 containerd[1492]: time="2024-12-13T02:32:41.814217835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:32:41.814668 containerd[1492]: time="2024-12-13T02:32:41.814278679Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:32:41.819693 containerd[1492]: time="2024-12-13T02:32:41.819668546Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:32:41.819745 containerd[1492]: time="2024-12-13T02:32:41.819726504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:32:41.819767 containerd[1492]: time="2024-12-13T02:32:41.819746642Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:32:41.819791 containerd[1492]: time="2024-12-13T02:32:41.819764165Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 02:32:41.819828 containerd[1492]: time="2024-12-13T02:32:41.819811083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:32:41.819955 containerd[1492]: time="2024-12-13T02:32:41.819931439Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:32:41.821029 containerd[1492]: time="2024-12-13T02:32:41.821007506Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:32:41.821227 containerd[1492]: time="2024-12-13T02:32:41.821170903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:32:41.821227 containerd[1492]: time="2024-12-13T02:32:41.821193906Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:32:41.821227 containerd[1492]: time="2024-12-13T02:32:41.821228711Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 02:32:41.821301 containerd[1492]: time="2024-12-13T02:32:41.821241896Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821301 containerd[1492]: time="2024-12-13T02:32:41.821253007Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:32:41.821301 containerd[1492]: time="2024-12-13T02:32:41.821262905Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821301 containerd[1492]: time="2024-12-13T02:32:41.821273996Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821301 containerd[1492]: time="2024-12-13T02:32:41.821285538Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821301 containerd[1492]: time="2024-12-13T02:32:41.821296047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821306397Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821316917Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821339659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821351421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821362452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821375677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.821393 containerd[1492]: time="2024-12-13T02:32:41.821386858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822108 containerd[1492]: time="2024-12-13T02:32:41.821397297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822147 containerd[1492]: time="2024-12-13T02:32:41.822116376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822147 containerd[1492]: time="2024-12-13T02:32:41.822131945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822147 containerd[1492]: time="2024-12-13T02:32:41.822143166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822194 containerd[1492]: time="2024-12-13T02:32:41.822156381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822194 containerd[1492]: time="2024-12-13T02:32:41.822166619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822194 containerd[1492]: time="2024-12-13T02:32:41.822177480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822194 containerd[1492]: time="2024-12-13T02:32:41.822187559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:32:41.822284 containerd[1492]: time="2024-12-13T02:32:41.822200974Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:32:41.822284 containerd[1492]: time="2024-12-13T02:32:41.822223767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822284 containerd[1492]: time="2024-12-13T02:32:41.822235148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.822284 containerd[1492]: time="2024-12-13T02:32:41.822244245Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:32:41.822689 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:32:41.822889 containerd[1492]: time="2024-12-13T02:32:41.822684080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:32:41.822889 containerd[1492]: time="2024-12-13T02:32:41.822705510Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:32:41.822889 containerd[1492]: time="2024-12-13T02:32:41.822715299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:32:41.824179 containerd[1492]: time="2024-12-13T02:32:41.824158966Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:32:41.824179 containerd[1492]: time="2024-12-13T02:32:41.824177580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:32:41.824241 containerd[1492]: time="2024-12-13T02:32:41.824197227Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:32:41.824241 containerd[1492]: time="2024-12-13T02:32:41.824210602Z" level=info msg="NRI interface is disabled by configuration." Dec 13 02:32:41.824241 containerd[1492]: time="2024-12-13T02:32:41.824219529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:32:41.824483 containerd[1492]: time="2024-12-13T02:32:41.824432298Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:32:41.824620 containerd[1492]: time="2024-12-13T02:32:41.824497210Z" level=info msg="Connect containerd service" Dec 13 02:32:41.824620 containerd[1492]: time="2024-12-13T02:32:41.824525262Z" level=info msg="using legacy CRI server" Dec 13 02:32:41.824620 containerd[1492]: time="2024-12-13T02:32:41.824532085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:32:41.824620 containerd[1492]: time="2024-12-13T02:32:41.824597077Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:32:41.825761 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
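In the CRI config dump above, the runc runtime is registered as Type:io.containerd.runc.v2 with Options:map[SystemdCgroup:true]. In containerd's config.toml that corresponds to the fragment below, shown as a sketch of the standard TOML layout rather than a file read from this host:

    # containerd config.toml fragment matching the runc entry in the dump
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true   # delegate pod cgroup management to systemd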
Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826069177Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826206415Z" level=info msg="Start subscribing containerd event" Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826247181Z" level=info msg="Start recovering state" Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826297305Z" level=info msg="Start event monitor" Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826313195Z" level=info msg="Start snapshots syncer" Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826321641Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:32:41.827561 containerd[1492]: time="2024-12-13T02:32:41.826327612Z" level=info msg="Start streaming server" Dec 13 02:32:41.828177 containerd[1492]: time="2024-12-13T02:32:41.828150701Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:32:41.829129 containerd[1492]: time="2024-12-13T02:32:41.828220090Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:32:41.829129 containerd[1492]: time="2024-12-13T02:32:41.828315690Z" level=info msg="containerd successfully booted in 0.087884s" Dec 13 02:32:41.829068 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 02:32:41.832366 systemd[1]: Finished sshkeys.service. Dec 13 02:32:41.894886 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:32:41.917806 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:32:41.928168 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:32:41.936357 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:32:41.936673 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:32:41.947429 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:32:41.960044 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:32:41.969397 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:32:41.973079 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 02:32:41.975424 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:32:42.121691 tar[1484]: linux-amd64/LICENSE Dec 13 02:32:42.121771 tar[1484]: linux-amd64/README.md Dec 13 02:32:42.136751 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 02:32:42.355340 systemd-networkd[1394]: eth1: Gained IPv6LL Dec 13 02:32:42.356207 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Dec 13 02:32:42.360191 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:32:42.362041 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:32:42.374428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:42.394654 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
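The "no network config found in /etc/cni/net.d" error above is expected at this stage: the directory stays empty until a CNI plugin drops a config there, which typically happens when the Kubernetes network add-on is installed later. For reference, a minimal conflist of the kind that would satisfy the loader looks like this; the network name and subnet are invented for illustration:

    {
      "cniVersion": "0.4.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }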
Dec 13 02:32:42.397786 coreos-metadata[1461]: Dec 13 02:32:42.397 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Dec 13 02:32:42.398734 coreos-metadata[1461]: Dec 13 02:32:42.398 INFO Fetch successful Dec 13 02:32:42.400247 coreos-metadata[1461]: Dec 13 02:32:42.398 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 02:32:42.400499 coreos-metadata[1461]: Dec 13 02:32:42.400 INFO Fetch successful Dec 13 02:32:42.424628 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 02:32:42.450690 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 02:32:42.452599 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:32:42.483253 systemd-networkd[1394]: eth0: Gained IPv6LL Dec 13 02:32:42.483858 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. Dec 13 02:32:43.127778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:43.128843 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 02:32:43.132223 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:32:43.137404 systemd[1]: Startup finished in 1.197s (kernel) + 5.295s (initrd) + 4.202s (userspace) = 10.696s. Dec 13 02:32:43.694308 kubelet[1592]: E1213 02:32:43.694244 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:32:43.697933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:32:43.698177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:32:53.948545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:32:53.957237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:32:54.090625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:32:54.094936 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:32:54.136247 kubelet[1612]: E1213 02:32:54.136180 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:32:54.142589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:32:54.142803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:04.199850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:33:04.205452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:33:04.329398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
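The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, so systemd keeps crash-restarting the unit (the climbing restart counters that follow) until provisioning runs. A skeleton of the file it is looking for, purely for illustration since the real one is generated:

    # /var/lib/kubelet/config.yaml - illustrative skeleton; kubeadm generates
    # the real file, and these values are common defaults, not from this host
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests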
Dec 13 02:33:04.334283 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:33:04.372020 kubelet[1628]: E1213 02:33:04.371954 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:33:04.375601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:33:04.375789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:12.704154 systemd-timesyncd[1424]: Contacted time server 144.76.138.23:123 (2.flatcar.pool.ntp.org). Dec 13 02:33:12.704227 systemd-timesyncd[1424]: Initial clock synchronization to Fri 2024-12-13 02:33:12.711933 UTC. Dec 13 02:33:14.449865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:33:14.456330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:33:14.582080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:33:14.598363 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:33:14.637025 kubelet[1645]: E1213 02:33:14.636928 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:33:14.640271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:33:14.640459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:24.700170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:33:24.711394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:33:24.868443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:33:24.872267 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:33:24.931165 kubelet[1661]: E1213 02:33:24.931082 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:33:24.934268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:33:24.934530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:26.467438 update_engine[1475]: I20241213 02:33:26.467316 1475 update_attempter.cc:509] Updating boot flags... 
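About thirty seconds after it started, timesyncd reaches 144.76.138.23 from the 2.flatcar.pool.ntp.org pool and sets the clock to 02:33:12.711933 UTC, as logged above. Sync state can be inspected with standard systemd tooling (these commands are not part of the log):

    timedatectl timesync-status   # server, stratum, offset, poll interval
    timedatectl show-timesync     # the same data in key=value form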
Dec 13 02:33:26.520186 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1678) Dec 13 02:33:26.569201 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1677) Dec 13 02:33:26.614134 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1677) Dec 13 02:33:34.949701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 02:33:34.955559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:33:35.081310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:33:35.084076 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:33:35.122253 kubelet[1698]: E1213 02:33:35.122209 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:33:35.126278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:33:35.126460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:37.592217 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 02:33:37.604424 systemd[1]: Started sshd@0-78.47.218.196:22-147.75.109.163:42540.service - OpenSSH per-connection server daemon (147.75.109.163:42540). Dec 13 02:33:38.586081 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 42540 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:38.588822 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:38.598258 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 02:33:38.603328 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 02:33:38.605607 systemd-logind[1474]: New session 1 of user core. Dec 13 02:33:38.620352 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 02:33:38.626684 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 02:33:38.640333 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:38.743035 systemd[1710]: Queued start job for default target default.target. Dec 13 02:33:38.754293 systemd[1710]: Created slice app.slice - User Application Slice. Dec 13 02:33:38.754319 systemd[1710]: Reached target paths.target - Paths. Dec 13 02:33:38.754331 systemd[1710]: Reached target timers.target - Timers. Dec 13 02:33:38.755732 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 02:33:38.766651 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 02:33:38.766762 systemd[1710]: Reached target sockets.target - Sockets. Dec 13 02:33:38.766776 systemd[1710]: Reached target basic.target - Basic System. Dec 13 02:33:38.766815 systemd[1710]: Reached target default.target - Main User Target. Dec 13 02:33:38.766846 systemd[1710]: Startup finished in 118ms. Dec 13 02:33:38.766932 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 02:33:38.768700 systemd[1]: Started session-1.scope - Session 1 of User core. 
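The "Accepted publickey ... SHA256:suqwau7..." line above identifies which installed key authenticated the session; that fingerprint can be matched against the keys coreos-metadata-sshkeys wrote earlier. The command is standard OpenSSH tooling, shown for illustration:

    # Print SHA256 fingerprints of the installed keys for comparison
    ssh-keygen -lf /home/core/.ssh/authorized_keys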
Dec 13 02:33:39.463713 systemd[1]: Started sshd@1-78.47.218.196:22-147.75.109.163:42542.service - OpenSSH per-connection server daemon (147.75.109.163:42542). Dec 13 02:33:40.427728 sshd[1721]: Accepted publickey for core from 147.75.109.163 port 42542 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:40.429288 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:40.433675 systemd-logind[1474]: New session 2 of user core. Dec 13 02:33:40.446289 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 02:33:41.104488 sshd[1721]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:41.108553 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:33:41.109422 systemd[1]: sshd@1-78.47.218.196:22-147.75.109.163:42542.service: Deactivated successfully. Dec 13 02:33:41.111595 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:33:41.112608 systemd-logind[1474]: Removed session 2. Dec 13 02:33:41.278270 systemd[1]: Started sshd@2-78.47.218.196:22-147.75.109.163:42558.service - OpenSSH per-connection server daemon (147.75.109.163:42558). Dec 13 02:33:42.261416 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 42558 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:42.263077 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:42.268056 systemd-logind[1474]: New session 3 of user core. Dec 13 02:33:42.277281 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 02:33:42.938489 sshd[1728]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:42.942464 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:33:42.943263 systemd[1]: sshd@2-78.47.218.196:22-147.75.109.163:42558.service: Deactivated successfully. Dec 13 02:33:42.945305 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:33:42.946254 systemd-logind[1474]: Removed session 3. Dec 13 02:33:43.106184 systemd[1]: Started sshd@3-78.47.218.196:22-147.75.109.163:42572.service - OpenSSH per-connection server daemon (147.75.109.163:42572). Dec 13 02:33:44.093354 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 42572 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:44.095000 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:44.100575 systemd-logind[1474]: New session 4 of user core. Dec 13 02:33:44.106263 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 02:33:44.774497 sshd[1735]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:44.778118 systemd[1]: sshd@3-78.47.218.196:22-147.75.109.163:42572.service: Deactivated successfully. Dec 13 02:33:44.780349 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:33:44.782442 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:33:44.783748 systemd-logind[1474]: Removed session 4. Dec 13 02:33:44.944087 systemd[1]: Started sshd@4-78.47.218.196:22-147.75.109.163:42574.service - OpenSSH per-connection server daemon (147.75.109.163:42574). Dec 13 02:33:45.199645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 02:33:45.204596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:33:45.330577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:33:45.334757 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:33:45.375664 kubelet[1752]: E1213 02:33:45.375620 1752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:33:45.379692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:33:45.379880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:45.915077 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 42574 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:45.916644 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:45.921656 systemd-logind[1474]: New session 5 of user core. Dec 13 02:33:45.934240 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 02:33:46.446173 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:33:46.446539 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:33:46.462972 sudo[1761]: pam_unix(sudo:session): session closed for user root Dec 13 02:33:46.621528 sshd[1742]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:46.625756 systemd[1]: sshd@4-78.47.218.196:22-147.75.109.163:42574.service: Deactivated successfully. Dec 13 02:33:46.627848 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:33:46.628531 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:33:46.629633 systemd-logind[1474]: Removed session 5. Dec 13 02:33:46.793305 systemd[1]: Started sshd@5-78.47.218.196:22-147.75.109.163:57382.service - OpenSSH per-connection server daemon (147.75.109.163:57382). Dec 13 02:33:47.757017 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 57382 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:47.760041 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:47.768982 systemd-logind[1474]: New session 6 of user core. Dec 13 02:33:47.779303 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 02:33:48.276863 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:33:48.277264 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:33:48.281710 sudo[1770]: pam_unix(sudo:session): session closed for user root Dec 13 02:33:48.288304 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:33:48.288833 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:33:48.305500 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 02:33:48.307457 auditctl[1773]: No rules Dec 13 02:33:48.307985 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:33:48.308269 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 02:33:48.311010 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
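audit-rules.service works by having augenrules merge every /etc/audit/rules.d/*.rules file and load the result with auditctl; with the two rule files deleted by the sudo commands above, the reload reports "No rules", as the next lines show. The manual equivalent, as a sketch:

    augenrules --load   # concatenate /etc/audit/rules.d/*.rules and load them
    auditctl -l         # list loaded rules; here it prints "No rules"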
Dec 13 02:33:48.352092 augenrules[1791]: No rules Dec 13 02:33:48.353009 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:33:48.355536 sudo[1769]: pam_unix(sudo:session): session closed for user root Dec 13 02:33:48.513930 sshd[1766]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:48.517032 systemd[1]: sshd@5-78.47.218.196:22-147.75.109.163:57382.service: Deactivated successfully. Dec 13 02:33:48.519087 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:33:48.520979 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:33:48.522007 systemd-logind[1474]: Removed session 6. Dec 13 02:33:48.690797 systemd[1]: Started sshd@6-78.47.218.196:22-147.75.109.163:57398.service - OpenSSH per-connection server daemon (147.75.109.163:57398). Dec 13 02:33:49.674320 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 57398 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:33:49.675854 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:33:49.679973 systemd-logind[1474]: New session 7 of user core. Dec 13 02:33:49.695312 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 02:33:50.195584 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:33:50.195975 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:33:50.443300 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 02:33:50.443439 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 02:33:50.675815 dockerd[1819]: time="2024-12-13T02:33:50.675743942Z" level=info msg="Starting up" Dec 13 02:33:50.766683 dockerd[1819]: time="2024-12-13T02:33:50.766628417Z" level=info msg="Loading containers: start." Dec 13 02:33:50.863137 kernel: Initializing XFRM netlink socket Dec 13 02:33:50.936284 systemd-networkd[1394]: docker0: Link UP Dec 13 02:33:50.954492 dockerd[1819]: time="2024-12-13T02:33:50.954443772Z" level=info msg="Loading containers: done." Dec 13 02:33:50.968010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck888908650-merged.mount: Deactivated successfully. Dec 13 02:33:50.970748 dockerd[1819]: time="2024-12-13T02:33:50.970699948Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:33:50.970828 dockerd[1819]: time="2024-12-13T02:33:50.970799468Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 02:33:50.970958 dockerd[1819]: time="2024-12-13T02:33:50.970925882Z" level=info msg="Daemon has completed initialization" Dec 13 02:33:50.997604 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 02:33:50.997783 dockerd[1819]: time="2024-12-13T02:33:50.997726712Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:33:52.101136 containerd[1492]: time="2024-12-13T02:33:52.100833183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 02:33:52.768321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017127901.mount: Deactivated successfully. 
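dockerd's "Not using native diff for overlay2" warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, Docker falls back to its slower emulated diff when building images, but the overlay2 storage driver itself is still in use. That can be confirmed with the standard CLI (not part of the log):

    docker info --format '{{.Driver}}'          # expect: overlay2
    docker info --format '{{.ServerVersion}}'   # 26.1.0, per the log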
Dec 13 02:33:55.451467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 02:33:55.460173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:33:55.592329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:33:55.596801 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:33:55.636152 kubelet[2026]: E1213 02:33:55.636074 2026 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:33:55.641736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:33:55.641973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:33:55.783775 containerd[1492]: time="2024-12-13T02:33:55.783639476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:55.785021 containerd[1492]: time="2024-12-13T02:33:55.784856769Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675734" Dec 13 02:33:55.786018 containerd[1492]: time="2024-12-13T02:33:55.785653981Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:55.788277 containerd[1492]: time="2024-12-13T02:33:55.788247890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:55.789378 containerd[1492]: time="2024-12-13T02:33:55.789344372Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.688472596s" Dec 13 02:33:55.789458 containerd[1492]: time="2024-12-13T02:33:55.789443210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 02:33:55.808957 containerd[1492]: time="2024-12-13T02:33:55.808905008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 02:33:58.045714 containerd[1492]: time="2024-12-13T02:33:58.045644420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:58.046631 containerd[1492]: time="2024-12-13T02:33:58.046593776Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606429" Dec 13 02:33:58.047531 containerd[1492]: time="2024-12-13T02:33:58.047493899Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 
02:33:58.050940 containerd[1492]: time="2024-12-13T02:33:58.049925724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:58.050940 containerd[1492]: time="2024-12-13T02:33:58.050837640Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.241739101s" Dec 13 02:33:58.050940 containerd[1492]: time="2024-12-13T02:33:58.050862146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 02:33:58.071878 containerd[1492]: time="2024-12-13T02:33:58.071821970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 02:33:59.682527 containerd[1492]: time="2024-12-13T02:33:59.682454618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:59.683525 containerd[1492]: time="2024-12-13T02:33:59.683380197Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783055" Dec 13 02:33:59.684328 containerd[1492]: time="2024-12-13T02:33:59.684300397Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:59.687241 containerd[1492]: time="2024-12-13T02:33:59.687175791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:33:59.690726 containerd[1492]: time="2024-12-13T02:33:59.688494598Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.61648432s" Dec 13 02:33:59.690726 containerd[1492]: time="2024-12-13T02:33:59.688526539Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 02:33:59.711743 containerd[1492]: time="2024-12-13T02:33:59.711698602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 02:34:01.125221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780483561.mount: Deactivated successfully. 
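[Annotation] The PullImage/ImageCreate entries above are the kubelet driving containerd over CRI. The same control-plane pulls can be reproduced by hand with crictl, using the image references exactly as logged:

    crictl pull registry.k8s.io/kube-apiserver:v1.30.8
    crictl pull registry.k8s.io/kube-controller-manager:v1.30.8
    crictl pull registry.k8s.io/kube-scheduler:v1.30.8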
Dec 13 02:34:01.381810 containerd[1492]: time="2024-12-13T02:34:01.381681923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:01.382674 containerd[1492]: time="2024-12-13T02:34:01.382638870Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057496" Dec 13 02:34:01.383562 containerd[1492]: time="2024-12-13T02:34:01.383522387Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:01.385399 containerd[1492]: time="2024-12-13T02:34:01.385378710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:01.386012 containerd[1492]: time="2024-12-13T02:34:01.385866976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.674130732s" Dec 13 02:34:01.386012 containerd[1492]: time="2024-12-13T02:34:01.385895270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 02:34:01.404887 containerd[1492]: time="2024-12-13T02:34:01.404840089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:34:02.001456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264434500.mount: Deactivated successfully. 
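[Annotation] Each "Pulled image" entry records both a repo tag and a repo digest; what containerd actually stored can be compared against those lines after the fact:

    crictl images --digests | grep registry.k8s.io/kube-proxy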
Dec 13 02:34:02.615133 containerd[1492]: time="2024-12-13T02:34:02.614270457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:02.615133 containerd[1492]: time="2024-12-13T02:34:02.614789452Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 13 02:34:02.616192 containerd[1492]: time="2024-12-13T02:34:02.616134563Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:02.618744 containerd[1492]: time="2024-12-13T02:34:02.618340194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:02.619286 containerd[1492]: time="2024-12-13T02:34:02.619259979Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.214380544s" Dec 13 02:34:02.619326 containerd[1492]: time="2024-12-13T02:34:02.619288032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:34:02.640478 containerd[1492]: time="2024-12-13T02:34:02.640446312Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:34:03.146004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161898484.mount: Deactivated successfully. 
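[Annotation] pause is the sandbox (infra-container) image. Note that the kubelet pre-pulls pause:3.9 here, while the RunPodSandbox entries later in this log pull pause:3.8, so the tag containerd's CRI plugin is configured to use can differ from what the kubelet requests. The configured value can be inspected via containerd's standard config dump:

    containerd config dump | grep sandbox_image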
Dec 13 02:34:03.151501 containerd[1492]: time="2024-12-13T02:34:03.151422916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:03.152255 containerd[1492]: time="2024-12-13T02:34:03.152212101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Dec 13 02:34:03.153203 containerd[1492]: time="2024-12-13T02:34:03.153156651Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:03.156146 containerd[1492]: time="2024-12-13T02:34:03.155991243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:03.157201 containerd[1492]: time="2024-12-13T02:34:03.156724522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 516.244025ms" Dec 13 02:34:03.157201 containerd[1492]: time="2024-12-13T02:34:03.156752275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:34:03.183954 containerd[1492]: time="2024-12-13T02:34:03.183866813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 02:34:03.769286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256637280.mount: Deactivated successfully. Dec 13 02:34:05.186243 containerd[1492]: time="2024-12-13T02:34:05.186173920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:05.187433 containerd[1492]: time="2024-12-13T02:34:05.187394561Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Dec 13 02:34:05.188449 containerd[1492]: time="2024-12-13T02:34:05.188409792Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:05.191132 containerd[1492]: time="2024-12-13T02:34:05.190813050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:05.191817 containerd[1492]: time="2024-12-13T02:34:05.191648631Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.007743836s" Dec 13 02:34:05.191817 containerd[1492]: time="2024-12-13T02:34:05.191675423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 02:34:05.700170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
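[Annotation] At this point kubelet.service has crash-looped eight times on the same missing /var/lib/kubelet/config.yaml. The loop is easiest to see from systemd's side:

    systemctl status kubelet.service         # shows the auto-restart loop and the last exit status
    journalctl -u kubelet.service -e -n 20   # the config.yaml "no such file" error repeats per attempt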
Dec 13 02:34:05.707371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:34:05.860782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:34:05.867325 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:34:05.909784 kubelet[2190]: E1213 02:34:05.909703 2190 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:34:05.913683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:34:05.913862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:34:07.976139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:34:07.999477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:34:08.036511 systemd[1]: Reloading requested from client PID 2258 ('systemctl') (unit session-7.scope)... Dec 13 02:34:08.036743 systemd[1]: Reloading... Dec 13 02:34:08.166152 zram_generator::config[2301]: No configuration found. Dec 13 02:34:08.256813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:34:08.322405 systemd[1]: Reloading finished in 285 ms. Dec 13 02:34:08.367591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:34:08.374494 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:34:08.375207 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:34:08.375463 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:34:08.375672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:34:08.378258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:34:08.523236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:34:08.523402 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:34:08.558182 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:34:08.558488 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:34:08.558530 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
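[Annotation] The kubelet restarted after the daemon reload (PID 2355) now finds its config, but warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated flags that belong in the config file. A minimal sketch of the config-file equivalents, assuming containerd's default socket path and the flexvolume directory logged below; on a kubeadm-managed node this file is normally generated rather than hand-written, and these contents are illustrative only:

    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF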
Dec 13 02:34:08.559909 kubelet[2355]: I1213 02:34:08.559877 2355 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:34:08.873503 kubelet[2355]: I1213 02:34:08.873087 2355 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:34:08.873647 kubelet[2355]: I1213 02:34:08.873632 2355 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:34:08.873903 kubelet[2355]: I1213 02:34:08.873889 2355 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:34:08.894585 kubelet[2355]: I1213 02:34:08.894531 2355 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:34:08.897440 kubelet[2355]: E1213 02:34:08.897409 2355 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.218.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.909610 kubelet[2355]: I1213 02:34:08.909590 2355 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:34:08.909822 kubelet[2355]: I1213 02:34:08.909785 2355 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:34:08.911018 kubelet[2355]: I1213 02:34:08.909811 2355 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-5cf67d135c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:34:08.911117 kubelet[2355]: I1213 02:34:08.911027 2355 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:34:08.911117 kubelet[2355]: I1213 02:34:08.911037 2355 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:34:08.911209 kubelet[2355]: I1213 02:34:08.911164 2355 state_mem.go:36] "Initialized new in-memory 
state store" Dec 13 02:34:08.913436 kubelet[2355]: I1213 02:34:08.913253 2355 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:34:08.913436 kubelet[2355]: I1213 02:34:08.913272 2355 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:34:08.913436 kubelet[2355]: I1213 02:34:08.913293 2355 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:34:08.913436 kubelet[2355]: I1213 02:34:08.913311 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:34:08.915463 kubelet[2355]: W1213 02:34:08.915277 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-5cf67d135c&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.915463 kubelet[2355]: E1213 02:34:08.915331 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-5cf67d135c&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.915463 kubelet[2355]: W1213 02:34:08.915373 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.915463 kubelet[2355]: E1213 02:34:08.915403 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.915934 kubelet[2355]: I1213 02:34:08.915899 2355 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:34:08.917741 kubelet[2355]: I1213 02:34:08.917610 2355 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:34:08.917741 kubelet[2355]: W1213 02:34:08.917658 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
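[Annotation] Every request above fails with "connect: connection refused" against https://78.47.218.196:6443 (the certificate signing request as well as the Node and Service reflectors). That is expected at this stage: the apiserver this kubelet is trying to reach is itself one of the static pods under /etc/kubernetes/manifests that the kubelet has yet to start. A quick probe from the host shows the same thing:

    curl -sk https://78.47.218.196:6443/healthz || echo "apiserver not reachable yet"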
Dec 13 02:34:08.921495 kubelet[2355]: I1213 02:34:08.921376 2355 server.go:1264] "Started kubelet" Dec 13 02:34:08.922434 kubelet[2355]: I1213 02:34:08.922385 2355 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:34:08.923298 kubelet[2355]: I1213 02:34:08.923275 2355 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:34:08.927528 kubelet[2355]: I1213 02:34:08.927449 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:34:08.927724 kubelet[2355]: I1213 02:34:08.927687 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:34:08.927962 kubelet[2355]: I1213 02:34:08.927924 2355 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:34:08.930028 kubelet[2355]: E1213 02:34:08.929955 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.218.196:6443/api/v1/namespaces/default/events\": dial tcp 78.47.218.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-5cf67d135c.18109be1662603e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-5cf67d135c,UID:ci-4081-2-1-b-5cf67d135c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-5cf67d135c,},FirstTimestamp:2024-12-13 02:34:08.921355236 +0000 UTC m=+0.393792420,LastTimestamp:2024-12-13 02:34:08.921355236 +0000 UTC m=+0.393792420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-5cf67d135c,}" Dec 13 02:34:08.933078 kubelet[2355]: I1213 02:34:08.933061 2355 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:34:08.934496 kubelet[2355]: E1213 02:34:08.934262 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-5cf67d135c?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="200ms" Dec 13 02:34:08.935190 kubelet[2355]: I1213 02:34:08.935174 2355 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:34:08.935318 kubelet[2355]: I1213 02:34:08.935303 2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:34:08.936550 kubelet[2355]: I1213 02:34:08.936531 2355 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:34:08.936643 kubelet[2355]: I1213 02:34:08.936631 2355 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:34:08.936786 kubelet[2355]: E1213 02:34:08.936771 2355 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:34:08.937409 kubelet[2355]: I1213 02:34:08.937395 2355 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:34:08.945143 kubelet[2355]: I1213 02:34:08.944258 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:34:08.945348 kubelet[2355]: I1213 02:34:08.945329 2355 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:34:08.945378 kubelet[2355]: I1213 02:34:08.945353 2355 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:34:08.945378 kubelet[2355]: I1213 02:34:08.945366 2355 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:34:08.945416 kubelet[2355]: E1213 02:34:08.945396 2355 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:34:08.952386 kubelet[2355]: W1213 02:34:08.952351 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.952437 kubelet[2355]: E1213 02:34:08.952390 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.952462 kubelet[2355]: W1213 02:34:08.952431 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.952462 kubelet[2355]: E1213 02:34:08.952451 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:08.968158 kubelet[2355]: I1213 02:34:08.967861 2355 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:34:08.968158 kubelet[2355]: I1213 02:34:08.967885 2355 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:34:08.968158 kubelet[2355]: I1213 02:34:08.967908 2355 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:34:08.969815 kubelet[2355]: I1213 02:34:08.969802 2355 policy_none.go:49] "None policy: Start" Dec 13 02:34:08.970274 kubelet[2355]: I1213 02:34:08.970262 2355 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:34:08.970346 kubelet[2355]: I1213 02:34:08.970337 2355 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:34:08.975909 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:34:08.994713 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:34:08.997789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 02:34:09.012030 kubelet[2355]: I1213 02:34:09.012001 2355 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:34:09.012325 kubelet[2355]: I1213 02:34:09.012210 2355 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:34:09.012325 kubelet[2355]: I1213 02:34:09.012314 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:34:09.014080 kubelet[2355]: E1213 02:34:09.014053 2355 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-b-5cf67d135c\" not found" Dec 13 02:34:09.035085 kubelet[2355]: I1213 02:34:09.035053 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.035491 kubelet[2355]: E1213 02:34:09.035448 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.046481 kubelet[2355]: I1213 02:34:09.046451 2355 topology_manager.go:215] "Topology Admit Handler" podUID="a24882d8c9d5e02885e19d9449b02fd2" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.047814 kubelet[2355]: I1213 02:34:09.047688 2355 topology_manager.go:215] "Topology Admit Handler" podUID="4e83eebfc548b1926fbce22b3ab61a8b" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.049030 kubelet[2355]: I1213 02:34:09.048855 2355 topology_manager.go:215] "Topology Admit Handler" podUID="0e12e0f944d961a5ceeec0d491d24a3d" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.054550 systemd[1]: Created slice kubepods-burstable-poda24882d8c9d5e02885e19d9449b02fd2.slice - libcontainer container kubepods-burstable-poda24882d8c9d5e02885e19d9449b02fd2.slice. Dec 13 02:34:09.063978 systemd[1]: Created slice kubepods-burstable-pod4e83eebfc548b1926fbce22b3ab61a8b.slice - libcontainer container kubepods-burstable-pod4e83eebfc548b1926fbce22b3ab61a8b.slice. Dec 13 02:34:09.074931 systemd[1]: Created slice kubepods-burstable-pod0e12e0f944d961a5ceeec0d491d24a3d.slice - libcontainer container kubepods-burstable-pod0e12e0f944d961a5ceeec0d491d24a3d.slice. 
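[Annotation] The three Topology Admit Handler entries are the static pods the kubelet found at the static pod path logged earlier, /etc/kubernetes/manifests, one per control-plane component, each getting its own burstable pod slice. The manifest file names below are kubeadm's usual ones, assumed rather than taken from this log:

    ls /etc/kubernetes/manifests
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml   (assumed names)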
Dec 13 02:34:09.135545 kubelet[2355]: E1213 02:34:09.135388 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-5cf67d135c?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="400ms" Dec 13 02:34:09.237988 kubelet[2355]: I1213 02:34:09.237798 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a24882d8c9d5e02885e19d9449b02fd2-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" (UID: \"a24882d8c9d5e02885e19d9449b02fd2\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.237988 kubelet[2355]: I1213 02:34:09.237835 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a24882d8c9d5e02885e19d9449b02fd2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" (UID: \"a24882d8c9d5e02885e19d9449b02fd2\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.237988 kubelet[2355]: I1213 02:34:09.237863 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.237988 kubelet[2355]: I1213 02:34:09.237879 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.237988 kubelet[2355]: I1213 02:34:09.237895 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.238360 kubelet[2355]: I1213 02:34:09.237908 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a24882d8c9d5e02885e19d9449b02fd2-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" (UID: \"a24882d8c9d5e02885e19d9449b02fd2\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.238360 kubelet[2355]: I1213 02:34:09.237925 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.238360 kubelet[2355]: I1213 02:34:09.237941 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.238360 kubelet[2355]: I1213 02:34:09.237959 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e12e0f944d961a5ceeec0d491d24a3d-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-5cf67d135c\" (UID: \"0e12e0f944d961a5ceeec0d491d24a3d\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.239071 kubelet[2355]: I1213 02:34:09.238637 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.239071 kubelet[2355]: E1213 02:34:09.238930 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.363602 containerd[1492]: time="2024-12-13T02:34:09.363546204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-5cf67d135c,Uid:a24882d8c9d5e02885e19d9449b02fd2,Namespace:kube-system,Attempt:0,}" Dec 13 02:34:09.378524 containerd[1492]: time="2024-12-13T02:34:09.378479677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-5cf67d135c,Uid:4e83eebfc548b1926fbce22b3ab61a8b,Namespace:kube-system,Attempt:0,}" Dec 13 02:34:09.378738 containerd[1492]: time="2024-12-13T02:34:09.378488804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-5cf67d135c,Uid:0e12e0f944d961a5ceeec0d491d24a3d,Namespace:kube-system,Attempt:0,}" Dec 13 02:34:09.537116 kubelet[2355]: E1213 02:34:09.536162 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-5cf67d135c?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="800ms" Dec 13 02:34:09.641039 kubelet[2355]: I1213 02:34:09.640986 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.641456 kubelet[2355]: E1213 02:34:09.641289 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:09.842209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889132319.mount: Deactivated successfully. 
Dec 13 02:34:09.848379 containerd[1492]: time="2024-12-13T02:34:09.848304693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:34:09.849273 containerd[1492]: time="2024-12-13T02:34:09.849236083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:34:09.850383 containerd[1492]: time="2024-12-13T02:34:09.850299831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:34:09.850383 containerd[1492]: time="2024-12-13T02:34:09.850357872Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 13 02:34:09.853126 containerd[1492]: time="2024-12-13T02:34:09.851380322Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:34:09.853126 containerd[1492]: time="2024-12-13T02:34:09.852313424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:34:09.854498 containerd[1492]: time="2024-12-13T02:34:09.854460449Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:34:09.856699 containerd[1492]: time="2024-12-13T02:34:09.856660553Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.971ms" Dec 13 02:34:09.857625 containerd[1492]: time="2024-12-13T02:34:09.857578157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:34:09.859681 containerd[1492]: time="2024-12-13T02:34:09.859616326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.96885ms" Dec 13 02:34:09.861153 containerd[1492]: time="2024-12-13T02:34:09.861083727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.321767ms" Dec 13 02:34:09.972126 kubelet[2355]: W1213 02:34:09.971081 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-5cf67d135c&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 
13 02:34:09.972126 kubelet[2355]: E1213 02:34:09.971151 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.218.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-5cf67d135c&limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:09.983005 containerd[1492]: time="2024-12-13T02:34:09.982799490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:09.983005 containerd[1492]: time="2024-12-13T02:34:09.982845787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:09.983005 containerd[1492]: time="2024-12-13T02:34:09.982858142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:09.983005 containerd[1492]: time="2024-12-13T02:34:09.982923144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:09.983665 containerd[1492]: time="2024-12-13T02:34:09.983597096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:09.988166 containerd[1492]: time="2024-12-13T02:34:09.985283431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:09.988166 containerd[1492]: time="2024-12-13T02:34:09.985302747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:09.988166 containerd[1492]: time="2024-12-13T02:34:09.985372539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:09.990169 containerd[1492]: time="2024-12-13T02:34:09.989641392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:09.990318 containerd[1492]: time="2024-12-13T02:34:09.990275258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:09.992116 containerd[1492]: time="2024-12-13T02:34:09.992041995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:09.992360 containerd[1492]: time="2024-12-13T02:34:09.992312727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:10.007289 systemd[1]: Started cri-containerd-5cea7a92cc3a01b2583bcf2542767e82a0162ad4121a4230c444b5eeadc97a81.scope - libcontainer container 5cea7a92cc3a01b2583bcf2542767e82a0162ad4121a4230c444b5eeadc97a81. Dec 13 02:34:10.020953 systemd[1]: Started cri-containerd-41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e.scope - libcontainer container 41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e. Dec 13 02:34:10.025786 systemd[1]: Started cri-containerd-2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13.scope - libcontainer container 2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13. 
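[Annotation] The three cri-containerd-<id>.scope units started above are the pod sandboxes for the control-plane pods, one runc shim each. Their IDs line up with what the CRI reports:

    crictl pods    # sandbox IDs here match the cri-containerd-<id>.scope unit names above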
Dec 13 02:34:10.079817 containerd[1492]: time="2024-12-13T02:34:10.079525544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-5cf67d135c,Uid:a24882d8c9d5e02885e19d9449b02fd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cea7a92cc3a01b2583bcf2542767e82a0162ad4121a4230c444b5eeadc97a81\"" Dec 13 02:34:10.089209 containerd[1492]: time="2024-12-13T02:34:10.089061605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-5cf67d135c,Uid:4e83eebfc548b1926fbce22b3ab61a8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e\"" Dec 13 02:34:10.096136 containerd[1492]: time="2024-12-13T02:34:10.094904094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-5cf67d135c,Uid:0e12e0f944d961a5ceeec0d491d24a3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13\"" Dec 13 02:34:10.096136 containerd[1492]: time="2024-12-13T02:34:10.095138626Z" level=info msg="CreateContainer within sandbox \"41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:34:10.120121 containerd[1492]: time="2024-12-13T02:34:10.120020035Z" level=info msg="CreateContainer within sandbox \"5cea7a92cc3a01b2583bcf2542767e82a0162ad4121a4230c444b5eeadc97a81\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:34:10.122278 containerd[1492]: time="2024-12-13T02:34:10.122162108Z" level=info msg="CreateContainer within sandbox \"41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e\"" Dec 13 02:34:10.122552 containerd[1492]: time="2024-12-13T02:34:10.122532578Z" level=info msg="CreateContainer within sandbox \"2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:34:10.123022 containerd[1492]: time="2024-12-13T02:34:10.122978549Z" level=info msg="StartContainer for \"8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e\"" Dec 13 02:34:10.137491 containerd[1492]: time="2024-12-13T02:34:10.137289835Z" level=info msg="CreateContainer within sandbox \"2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57\"" Dec 13 02:34:10.138951 containerd[1492]: time="2024-12-13T02:34:10.138220472Z" level=info msg="StartContainer for \"c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57\"" Dec 13 02:34:10.142470 containerd[1492]: time="2024-12-13T02:34:10.142448573Z" level=info msg="CreateContainer within sandbox \"5cea7a92cc3a01b2583bcf2542767e82a0162ad4121a4230c444b5eeadc97a81\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05a43379f096506098bd26c3e5f4becbf32a19f51f4c904f567f0f2c46be2810\"" Dec 13 02:34:10.143003 containerd[1492]: time="2024-12-13T02:34:10.142967051Z" level=info msg="StartContainer for \"05a43379f096506098bd26c3e5f4becbf32a19f51f4c904f567f0f2c46be2810\"" Dec 13 02:34:10.165234 systemd[1]: Started cri-containerd-8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e.scope - libcontainer container 
8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e. Dec 13 02:34:10.177353 systemd[1]: Started cri-containerd-c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57.scope - libcontainer container c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57. Dec 13 02:34:10.190234 systemd[1]: Started cri-containerd-05a43379f096506098bd26c3e5f4becbf32a19f51f4c904f567f0f2c46be2810.scope - libcontainer container 05a43379f096506098bd26c3e5f4becbf32a19f51f4c904f567f0f2c46be2810. Dec 13 02:34:10.234820 containerd[1492]: time="2024-12-13T02:34:10.234772241Z" level=info msg="StartContainer for \"8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e\" returns successfully" Dec 13 02:34:10.235357 kubelet[2355]: W1213 02:34:10.235289 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:10.235441 kubelet[2355]: E1213 02:34:10.235366 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.218.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:10.247027 containerd[1492]: time="2024-12-13T02:34:10.246742467Z" level=info msg="StartContainer for \"05a43379f096506098bd26c3e5f4becbf32a19f51f4c904f567f0f2c46be2810\" returns successfully" Dec 13 02:34:10.259086 containerd[1492]: time="2024-12-13T02:34:10.259026806Z" level=info msg="StartContainer for \"c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57\" returns successfully" Dec 13 02:34:10.308129 kubelet[2355]: E1213 02:34:10.307305 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.218.196:6443/api/v1/namespaces/default/events\": dial tcp 78.47.218.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-5cf67d135c.18109be1662603e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-5cf67d135c,UID:ci-4081-2-1-b-5cf67d135c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-5cf67d135c,},FirstTimestamp:2024-12-13 02:34:08.921355236 +0000 UTC m=+0.393792420,LastTimestamp:2024-12-13 02:34:08.921355236 +0000 UTC m=+0.393792420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-5cf67d135c,}" Dec 13 02:34:10.337068 kubelet[2355]: E1213 02:34:10.337020 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.218.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-5cf67d135c?timeout=10s\": dial tcp 78.47.218.196:6443: connect: connection refused" interval="1.6s" Dec 13 02:34:10.360906 kubelet[2355]: W1213 02:34:10.360670 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:10.360906 kubelet[2355]: E1213 02:34:10.360736 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://78.47.218.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:10.444849 kubelet[2355]: I1213 02:34:10.444810 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:10.446124 kubelet[2355]: E1213 02:34:10.445122 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.218.196:6443/api/v1/nodes\": dial tcp 78.47.218.196:6443: connect: connection refused" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:10.489842 kubelet[2355]: W1213 02:34:10.489777 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:10.489842 kubelet[2355]: E1213 02:34:10.489838 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.218.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.218.196:6443: connect: connection refused Dec 13 02:34:11.940120 kubelet[2355]: E1213 02:34:11.940057 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-b-5cf67d135c\" not found" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:12.028133 kubelet[2355]: E1213 02:34:12.028076 2355 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-b-5cf67d135c" not found Dec 13 02:34:12.048768 kubelet[2355]: I1213 02:34:12.048686 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:12.056157 kubelet[2355]: I1213 02:34:12.056127 2355 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:12.916779 kubelet[2355]: I1213 02:34:12.916580 2355 apiserver.go:52] "Watching apiserver" Dec 13 02:34:12.936966 kubelet[2355]: I1213 02:34:12.936941 2355 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:34:13.465482 systemd[1]: Reloading requested from client PID 2628 ('systemctl') (unit session-7.scope)... Dec 13 02:34:13.465500 systemd[1]: Reloading... Dec 13 02:34:13.569128 zram_generator::config[2671]: No configuration found. Dec 13 02:34:13.662654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:34:13.736630 systemd[1]: Reloading finished in 270 ms. Dec 13 02:34:13.775261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:34:13.786631 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:34:13.786871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:34:13.793768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:34:13.920966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:34:13.925486 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:34:13.978956 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:34:13.979325 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:34:13.979366 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:34:13.980663 kubelet[2719]: I1213 02:34:13.980629 2719 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:34:13.985004 kubelet[2719]: I1213 02:34:13.984964 2719 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:34:13.985073 kubelet[2719]: I1213 02:34:13.985063 2719 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:34:13.985273 kubelet[2719]: I1213 02:34:13.985262 2719 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:34:13.988176 kubelet[2719]: I1213 02:34:13.987723 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:34:13.989268 kubelet[2719]: I1213 02:34:13.989245 2719 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:34:13.996140 kubelet[2719]: I1213 02:34:13.995739 2719 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:34:13.996140 kubelet[2719]: I1213 02:34:13.995935 2719 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:34:13.996140 kubelet[2719]: I1213 02:34:13.995980 2719 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-5cf67d135c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:34:13.996140 kubelet[2719]: I1213 02:34:13.996119 2719 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:34:13.996365 kubelet[2719]: I1213 02:34:13.996143 2719 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:34:13.996957 kubelet[2719]: I1213 02:34:13.996927 2719 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:34:13.997048 kubelet[2719]: I1213 02:34:13.997025 2719 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:34:13.997048 kubelet[2719]: I1213 02:34:13.997044 2719 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:34:13.997139 kubelet[2719]: I1213 02:34:13.997060 2719 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:34:13.997139 kubelet[2719]: I1213 02:34:13.997076 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:34:13.997884 kubelet[2719]: I1213 02:34:13.997846 2719 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:34:14.000472 kubelet[2719]: I1213 02:34:14.000452 2719 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:34:14.000847 kubelet[2719]: I1213 02:34:14.000779 2719 server.go:1264] "Started kubelet" Dec 13 02:34:14.004119 kubelet[2719]: I1213 02:34:14.002892 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:34:14.005969 kubelet[2719]: I1213 02:34:14.004932 2719 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:34:14.007482 kubelet[2719]: I1213 02:34:14.006755 2719 server.go:455] "Adding 
debug handlers to kubelet server" Dec 13 02:34:14.008592 kubelet[2719]: I1213 02:34:14.008548 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:34:14.008829 kubelet[2719]: I1213 02:34:14.008815 2719 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:34:14.012567 kubelet[2719]: I1213 02:34:14.012522 2719 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:34:14.013959 kubelet[2719]: I1213 02:34:14.013945 2719 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:34:14.014077 kubelet[2719]: I1213 02:34:14.014060 2719 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:34:14.016578 kubelet[2719]: I1213 02:34:14.015853 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:34:14.016901 kubelet[2719]: I1213 02:34:14.016871 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:34:14.016901 kubelet[2719]: I1213 02:34:14.016898 2719 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:34:14.016963 kubelet[2719]: I1213 02:34:14.016913 2719 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:34:14.016963 kubelet[2719]: E1213 02:34:14.016954 2719 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:34:14.025857 kubelet[2719]: I1213 02:34:14.024421 2719 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:34:14.025857 kubelet[2719]: I1213 02:34:14.024499 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:34:14.028282 kubelet[2719]: E1213 02:34:14.028261 2719 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:34:14.029912 kubelet[2719]: I1213 02:34:14.029888 2719 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:34:14.072506 kubelet[2719]: I1213 02:34:14.072475 2719 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:34:14.072506 kubelet[2719]: I1213 02:34:14.072491 2719 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:34:14.072506 kubelet[2719]: I1213 02:34:14.072507 2719 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:34:14.072676 kubelet[2719]: I1213 02:34:14.072627 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:34:14.072676 kubelet[2719]: I1213 02:34:14.072635 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:34:14.072676 kubelet[2719]: I1213 02:34:14.072671 2719 policy_none.go:49] "None policy: Start" Dec 13 02:34:14.073202 kubelet[2719]: I1213 02:34:14.073187 2719 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:34:14.073247 kubelet[2719]: I1213 02:34:14.073206 2719 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:34:14.073347 kubelet[2719]: I1213 02:34:14.073324 2719 state_mem.go:75] "Updated machine memory state" Dec 13 02:34:14.077227 kubelet[2719]: I1213 02:34:14.077208 2719 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:34:14.077697 kubelet[2719]: I1213 02:34:14.077569 2719 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:34:14.078764 kubelet[2719]: I1213 02:34:14.077925 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:34:14.116171 kubelet[2719]: I1213 02:34:14.116139 2719 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.117803 kubelet[2719]: I1213 02:34:14.117757 2719 topology_manager.go:215] "Topology Admit Handler" podUID="a24882d8c9d5e02885e19d9449b02fd2" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.117908 kubelet[2719]: I1213 02:34:14.117828 2719 topology_manager.go:215] "Topology Admit Handler" podUID="4e83eebfc548b1926fbce22b3ab61a8b" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.117908 kubelet[2719]: I1213 02:34:14.117875 2719 topology_manager.go:215] "Topology Admit Handler" podUID="0e12e0f944d961a5ceeec0d491d24a3d" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.124972 kubelet[2719]: I1213 02:34:14.124532 2719 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.125213 kubelet[2719]: I1213 02:34:14.125124 2719 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215507 kubelet[2719]: I1213 02:34:14.215439 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a24882d8c9d5e02885e19d9449b02fd2-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" (UID: \"a24882d8c9d5e02885e19d9449b02fd2\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215507 kubelet[2719]: I1213 02:34:14.215477 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/a24882d8c9d5e02885e19d9449b02fd2-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" (UID: \"a24882d8c9d5e02885e19d9449b02fd2\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215507 kubelet[2719]: I1213 02:34:14.215498 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215507 kubelet[2719]: I1213 02:34:14.215515 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215806 kubelet[2719]: I1213 02:34:14.215530 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215806 kubelet[2719]: I1213 02:34:14.215547 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215806 kubelet[2719]: I1213 02:34:14.215562 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e12e0f944d961a5ceeec0d491d24a3d-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-5cf67d135c\" (UID: \"0e12e0f944d961a5ceeec0d491d24a3d\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215806 kubelet[2719]: I1213 02:34:14.215580 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a24882d8c9d5e02885e19d9449b02fd2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" (UID: \"a24882d8c9d5e02885e19d9449b02fd2\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.215806 kubelet[2719]: I1213 02:34:14.215610 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e83eebfc548b1926fbce22b3ab61a8b-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-5cf67d135c\" (UID: \"4e83eebfc548b1926fbce22b3ab61a8b\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:14.997957 kubelet[2719]: I1213 02:34:14.997906 2719 apiserver.go:52] "Watching apiserver" Dec 13 02:34:15.015191 kubelet[2719]: I1213 02:34:15.014049 2719 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 
02:34:15.084502 kubelet[2719]: E1213 02:34:15.084457 2719 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-b-5cf67d135c\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" Dec 13 02:34:15.138889 kubelet[2719]: I1213 02:34:15.138823 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-b-5cf67d135c" podStartSLOduration=1.138803485 podStartE2EDuration="1.138803485s" podCreationTimestamp="2024-12-13 02:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:34:15.11131451 +0000 UTC m=+1.171797906" watchObservedRunningTime="2024-12-13 02:34:15.138803485 +0000 UTC m=+1.199286881" Dec 13 02:34:15.160117 kubelet[2719]: I1213 02:34:15.158038 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-5cf67d135c" podStartSLOduration=1.158020694 podStartE2EDuration="1.158020694s" podCreationTimestamp="2024-12-13 02:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:34:15.139580149 +0000 UTC m=+1.200063545" watchObservedRunningTime="2024-12-13 02:34:15.158020694 +0000 UTC m=+1.218504081" Dec 13 02:34:15.195122 kubelet[2719]: I1213 02:34:15.193985 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-b-5cf67d135c" podStartSLOduration=1.193965542 podStartE2EDuration="1.193965542s" podCreationTimestamp="2024-12-13 02:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:34:15.158756772 +0000 UTC m=+1.219240158" watchObservedRunningTime="2024-12-13 02:34:15.193965542 +0000 UTC m=+1.254448928" Dec 13 02:34:18.966155 sudo[1802]: pam_unix(sudo:session): session closed for user root Dec 13 02:34:19.125556 sshd[1799]: pam_unix(sshd:session): session closed for user core Dec 13 02:34:19.128625 systemd[1]: sshd@6-78.47.218.196:22-147.75.109.163:57398.service: Deactivated successfully. Dec 13 02:34:19.130272 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:34:19.130492 systemd[1]: session-7.scope: Consumed 4.205s CPU time, 189.5M memory peak, 0B memory swap peak. Dec 13 02:34:19.131525 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:34:19.132832 systemd-logind[1474]: Removed session 7. Dec 13 02:34:27.892981 kubelet[2719]: I1213 02:34:27.892848 2719 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:34:27.893914 containerd[1492]: time="2024-12-13T02:34:27.893821307Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:34:27.894655 kubelet[2719]: I1213 02:34:27.894579 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:34:28.734365 kubelet[2719]: I1213 02:34:28.733474 2719 topology_manager.go:215] "Topology Admit Handler" podUID="3c760937-0b14-418c-b5e4-372134fd77cd" podNamespace="kube-system" podName="kube-proxy-vxnjm" Dec 13 02:34:28.746447 systemd[1]: Created slice kubepods-besteffort-pod3c760937_0b14_418c_b5e4_372134fd77cd.slice - libcontainer container kubepods-besteffort-pod3c760937_0b14_418c_b5e4_372134fd77cd.slice. 
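The pod_startup_latency_tracker entries at 02:34:15 above derive podStartSLOduration by subtracting podCreationTimestamp from observedRunningTime; since the static pods pulled no images (firstStartedPulling is the zero time), the SLO duration equals the end-to-end duration. A quick Go check of that arithmetic for the kube-apiserver pod (a sketch, not kubelet's actual code):

    // slo.go - sketch: reproduce podStartSLOduration from the log's timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2024-12-13T02:34:14Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:34:15.138803485Z")
        // Prints 1.138803485s, matching podStartSLOduration for
        // kube-apiserver-ci-4081-2-1-b-5cf67d135c above.
        fmt.Println(observed.Sub(created))
    }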
Dec 13 02:34:28.807733 kubelet[2719]: I1213 02:34:28.807584 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c760937-0b14-418c-b5e4-372134fd77cd-xtables-lock\") pod \"kube-proxy-vxnjm\" (UID: \"3c760937-0b14-418c-b5e4-372134fd77cd\") " pod="kube-system/kube-proxy-vxnjm" Dec 13 02:34:28.807733 kubelet[2719]: I1213 02:34:28.807646 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c760937-0b14-418c-b5e4-372134fd77cd-kube-proxy\") pod \"kube-proxy-vxnjm\" (UID: \"3c760937-0b14-418c-b5e4-372134fd77cd\") " pod="kube-system/kube-proxy-vxnjm" Dec 13 02:34:28.807733 kubelet[2719]: I1213 02:34:28.807675 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c760937-0b14-418c-b5e4-372134fd77cd-lib-modules\") pod \"kube-proxy-vxnjm\" (UID: \"3c760937-0b14-418c-b5e4-372134fd77cd\") " pod="kube-system/kube-proxy-vxnjm" Dec 13 02:34:28.807733 kubelet[2719]: I1213 02:34:28.807701 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz88w\" (UniqueName: \"kubernetes.io/projected/3c760937-0b14-418c-b5e4-372134fd77cd-kube-api-access-sz88w\") pod \"kube-proxy-vxnjm\" (UID: \"3c760937-0b14-418c-b5e4-372134fd77cd\") " pod="kube-system/kube-proxy-vxnjm" Dec 13 02:34:29.003137 kubelet[2719]: I1213 02:34:29.002917 2719 topology_manager.go:215] "Topology Admit Handler" podUID="4489afab-11e1-4998-8c33-be300f82b9a1" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-q978s" Dec 13 02:34:29.009418 kubelet[2719]: I1213 02:34:29.008644 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmwc5\" (UniqueName: \"kubernetes.io/projected/4489afab-11e1-4998-8c33-be300f82b9a1-kube-api-access-mmwc5\") pod \"tigera-operator-7bc55997bb-q978s\" (UID: \"4489afab-11e1-4998-8c33-be300f82b9a1\") " pod="tigera-operator/tigera-operator-7bc55997bb-q978s" Dec 13 02:34:29.009418 kubelet[2719]: I1213 02:34:29.008674 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4489afab-11e1-4998-8c33-be300f82b9a1-var-lib-calico\") pod \"tigera-operator-7bc55997bb-q978s\" (UID: \"4489afab-11e1-4998-8c33-be300f82b9a1\") " pod="tigera-operator/tigera-operator-7bc55997bb-q978s" Dec 13 02:34:29.011194 systemd[1]: Created slice kubepods-besteffort-pod4489afab_11e1_4998_8c33_be300f82b9a1.slice - libcontainer container kubepods-besteffort-pod4489afab_11e1_4998_8c33_be300f82b9a1.slice. Dec 13 02:34:29.063159 containerd[1492]: time="2024-12-13T02:34:29.063078952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxnjm,Uid:3c760937-0b14-418c-b5e4-372134fd77cd,Namespace:kube-system,Attempt:0,}" Dec 13 02:34:29.089876 containerd[1492]: time="2024-12-13T02:34:29.088570194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:29.089876 containerd[1492]: time="2024-12-13T02:34:29.088633445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:29.089876 containerd[1492]: time="2024-12-13T02:34:29.088645939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:29.089876 containerd[1492]: time="2024-12-13T02:34:29.088837064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:29.119428 systemd[1]: run-containerd-runc-k8s.io-9419b5578c8958392c9e53bafc0c58de3e0c4fac09ca68da1d135fe6b10b003b-runc.tz8LwL.mount: Deactivated successfully. Dec 13 02:34:29.127280 systemd[1]: Started cri-containerd-9419b5578c8958392c9e53bafc0c58de3e0c4fac09ca68da1d135fe6b10b003b.scope - libcontainer container 9419b5578c8958392c9e53bafc0c58de3e0c4fac09ca68da1d135fe6b10b003b. Dec 13 02:34:29.153603 containerd[1492]: time="2024-12-13T02:34:29.153562999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxnjm,Uid:3c760937-0b14-418c-b5e4-372134fd77cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9419b5578c8958392c9e53bafc0c58de3e0c4fac09ca68da1d135fe6b10b003b\"" Dec 13 02:34:29.169297 containerd[1492]: time="2024-12-13T02:34:29.169218323Z" level=info msg="CreateContainer within sandbox \"9419b5578c8958392c9e53bafc0c58de3e0c4fac09ca68da1d135fe6b10b003b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:34:29.183763 containerd[1492]: time="2024-12-13T02:34:29.183703891Z" level=info msg="CreateContainer within sandbox \"9419b5578c8958392c9e53bafc0c58de3e0c4fac09ca68da1d135fe6b10b003b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b77ca970e37dd693bce835ec798589c51c21d2194671cd8837a0905efabd1240\"" Dec 13 02:34:29.184373 containerd[1492]: time="2024-12-13T02:34:29.184345177Z" level=info msg="StartContainer for \"b77ca970e37dd693bce835ec798589c51c21d2194671cd8837a0905efabd1240\"" Dec 13 02:34:29.216275 systemd[1]: Started cri-containerd-b77ca970e37dd693bce835ec798589c51c21d2194671cd8837a0905efabd1240.scope - libcontainer container b77ca970e37dd693bce835ec798589c51c21d2194671cd8837a0905efabd1240. Dec 13 02:34:29.242620 containerd[1492]: time="2024-12-13T02:34:29.242581515Z" level=info msg="StartContainer for \"b77ca970e37dd693bce835ec798589c51c21d2194671cd8837a0905efabd1240\" returns successfully" Dec 13 02:34:29.317197 containerd[1492]: time="2024-12-13T02:34:29.316575744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-q978s,Uid:4489afab-11e1-4998-8c33-be300f82b9a1,Namespace:tigera-operator,Attempt:0,}" Dec 13 02:34:29.340397 containerd[1492]: time="2024-12-13T02:34:29.340299760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:29.342088 containerd[1492]: time="2024-12-13T02:34:29.341007132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:29.342088 containerd[1492]: time="2024-12-13T02:34:29.341034543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:29.342088 containerd[1492]: time="2024-12-13T02:34:29.341121751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:29.364249 systemd[1]: Started cri-containerd-9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942.scope - libcontainer container 9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942. Dec 13 02:34:29.406899 containerd[1492]: time="2024-12-13T02:34:29.406647714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-q978s,Uid:4489afab-11e1-4998-8c33-be300f82b9a1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942\"" Dec 13 02:34:29.409837 containerd[1492]: time="2024-12-13T02:34:29.409313916Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 02:34:30.102985 kubelet[2719]: I1213 02:34:30.102648 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vxnjm" podStartSLOduration=2.102629633 podStartE2EDuration="2.102629633s" podCreationTimestamp="2024-12-13 02:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:34:30.102431786 +0000 UTC m=+16.162915172" watchObservedRunningTime="2024-12-13 02:34:30.102629633 +0000 UTC m=+16.163113018" Dec 13 02:34:34.009639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125829783.mount: Deactivated successfully. Dec 13 02:34:34.364504 containerd[1492]: time="2024-12-13T02:34:34.364357907Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:34.365423 containerd[1492]: time="2024-12-13T02:34:34.365381056Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764297" Dec 13 02:34:34.366201 containerd[1492]: time="2024-12-13T02:34:34.366161143Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:34.367926 containerd[1492]: time="2024-12-13T02:34:34.367889215Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:34.368710 containerd[1492]: time="2024-12-13T02:34:34.368611762Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.959266006s" Dec 13 02:34:34.368710 containerd[1492]: time="2024-12-13T02:34:34.368637622Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 02:34:34.386836 containerd[1492]: time="2024-12-13T02:34:34.386658486Z" level=info msg="CreateContainer within sandbox \"9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 02:34:34.397359 containerd[1492]: time="2024-12-13T02:34:34.397294937Z" level=info msg="CreateContainer within sandbox \"9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8\"" Dec 13 02:34:34.402305 containerd[1492]: time="2024-12-13T02:34:34.401490372Z" level=info msg="StartContainer for \"eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8\"" Dec 13 02:34:34.434242 systemd[1]: Started cri-containerd-eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8.scope - libcontainer container eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8. Dec 13 02:34:34.456223 containerd[1492]: time="2024-12-13T02:34:34.456170871Z" level=info msg="StartContainer for \"eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8\" returns successfully" Dec 13 02:34:34.960831 systemd[1]: run-containerd-runc-k8s.io-eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8-runc.wbrpHV.mount: Deactivated successfully. Dec 13 02:34:37.386659 kubelet[2719]: I1213 02:34:37.385383 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-q978s" podStartSLOduration=4.421283017 podStartE2EDuration="9.385361537s" podCreationTimestamp="2024-12-13 02:34:28 +0000 UTC" firstStartedPulling="2024-12-13 02:34:29.408807319 +0000 UTC m=+15.469290704" lastFinishedPulling="2024-12-13 02:34:34.372885839 +0000 UTC m=+20.433369224" observedRunningTime="2024-12-13 02:34:35.135046636 +0000 UTC m=+21.195530062" watchObservedRunningTime="2024-12-13 02:34:37.385361537 +0000 UTC m=+23.445844922" Dec 13 02:34:37.391513 kubelet[2719]: I1213 02:34:37.391479 2719 topology_manager.go:215] "Topology Admit Handler" podUID="07a7c032-8742-47f4-a886-e88b5dacebfd" podNamespace="calico-system" podName="calico-typha-5c64f4584-bvcqk" Dec 13 02:34:37.402144 systemd[1]: Created slice kubepods-besteffort-pod07a7c032_8742_47f4_a886_e88b5dacebfd.slice - libcontainer container kubepods-besteffort-pod07a7c032_8742_47f4_a886_e88b5dacebfd.slice. 
Dec 13 02:34:37.466303 kubelet[2719]: I1213 02:34:37.466134 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/07a7c032-8742-47f4-a886-e88b5dacebfd-typha-certs\") pod \"calico-typha-5c64f4584-bvcqk\" (UID: \"07a7c032-8742-47f4-a886-e88b5dacebfd\") " pod="calico-system/calico-typha-5c64f4584-bvcqk" Dec 13 02:34:37.466303 kubelet[2719]: I1213 02:34:37.466231 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4xz\" (UniqueName: \"kubernetes.io/projected/07a7c032-8742-47f4-a886-e88b5dacebfd-kube-api-access-jt4xz\") pod \"calico-typha-5c64f4584-bvcqk\" (UID: \"07a7c032-8742-47f4-a886-e88b5dacebfd\") " pod="calico-system/calico-typha-5c64f4584-bvcqk" Dec 13 02:34:37.466303 kubelet[2719]: I1213 02:34:37.466251 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07a7c032-8742-47f4-a886-e88b5dacebfd-tigera-ca-bundle\") pod \"calico-typha-5c64f4584-bvcqk\" (UID: \"07a7c032-8742-47f4-a886-e88b5dacebfd\") " pod="calico-system/calico-typha-5c64f4584-bvcqk" Dec 13 02:34:37.491139 kubelet[2719]: I1213 02:34:37.490031 2719 topology_manager.go:215] "Topology Admit Handler" podUID="66f5f0e1-fcfc-4516-a925-ff028165afee" podNamespace="calico-system" podName="calico-node-9j2hm" Dec 13 02:34:37.497377 systemd[1]: Created slice kubepods-besteffort-pod66f5f0e1_fcfc_4516_a925_ff028165afee.slice - libcontainer container kubepods-besteffort-pod66f5f0e1_fcfc_4516_a925_ff028165afee.slice. Dec 13 02:34:37.566826 kubelet[2719]: I1213 02:34:37.566782 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-cni-net-dir\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567284 kubelet[2719]: I1213 02:34:37.567083 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-flexvol-driver-host\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567284 kubelet[2719]: I1213 02:34:37.567152 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbmzt\" (UniqueName: \"kubernetes.io/projected/66f5f0e1-fcfc-4516-a925-ff028165afee-kube-api-access-gbmzt\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567284 kubelet[2719]: I1213 02:34:37.567212 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-cni-bin-dir\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567284 kubelet[2719]: I1213 02:34:37.567228 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-xtables-lock\") pod \"calico-node-9j2hm\" (UID: 
\"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567284 kubelet[2719]: I1213 02:34:37.567242 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66f5f0e1-fcfc-4516-a925-ff028165afee-tigera-ca-bundle\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567427 kubelet[2719]: I1213 02:34:37.567262 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-policysync\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567776 kubelet[2719]: I1213 02:34:37.567576 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-cni-log-dir\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567776 kubelet[2719]: I1213 02:34:37.567600 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-lib-modules\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567776 kubelet[2719]: I1213 02:34:37.567614 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-var-lib-calico\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567776 kubelet[2719]: I1213 02:34:37.567645 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/66f5f0e1-fcfc-4516-a925-ff028165afee-node-certs\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.567776 kubelet[2719]: I1213 02:34:37.567659 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/66f5f0e1-fcfc-4516-a925-ff028165afee-var-run-calico\") pod \"calico-node-9j2hm\" (UID: \"66f5f0e1-fcfc-4516-a925-ff028165afee\") " pod="calico-system/calico-node-9j2hm" Dec 13 02:34:37.607644 kubelet[2719]: I1213 02:34:37.606778 2719 topology_manager.go:215] "Topology Admit Handler" podUID="ace80116-5126-48a5-986c-e83257cecc61" podNamespace="calico-system" podName="csi-node-driver-r8zln" Dec 13 02:34:37.607644 kubelet[2719]: E1213 02:34:37.607024 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:37.668552 kubelet[2719]: I1213 02:34:37.668383 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/ace80116-5126-48a5-986c-e83257cecc61-registration-dir\") pod \"csi-node-driver-r8zln\" (UID: \"ace80116-5126-48a5-986c-e83257cecc61\") " pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:37.668552 kubelet[2719]: I1213 02:34:37.668430 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnvl\" (UniqueName: \"kubernetes.io/projected/ace80116-5126-48a5-986c-e83257cecc61-kube-api-access-gwnvl\") pod \"csi-node-driver-r8zln\" (UID: \"ace80116-5126-48a5-986c-e83257cecc61\") " pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:37.668552 kubelet[2719]: I1213 02:34:37.668465 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ace80116-5126-48a5-986c-e83257cecc61-varrun\") pod \"csi-node-driver-r8zln\" (UID: \"ace80116-5126-48a5-986c-e83257cecc61\") " pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:37.668552 kubelet[2719]: I1213 02:34:37.668502 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ace80116-5126-48a5-986c-e83257cecc61-socket-dir\") pod \"csi-node-driver-r8zln\" (UID: \"ace80116-5126-48a5-986c-e83257cecc61\") " pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:37.668552 kubelet[2719]: I1213 02:34:37.668528 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ace80116-5126-48a5-986c-e83257cecc61-kubelet-dir\") pod \"csi-node-driver-r8zln\" (UID: \"ace80116-5126-48a5-986c-e83257cecc61\") " pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:37.675598 kubelet[2719]: E1213 02:34:37.675575 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:37.675598 kubelet[2719]: W1213 02:34:37.675595 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:37.676076 kubelet[2719]: E1213 02:34:37.675611 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:34:37.684306 kubelet[2719]: E1213 02:34:37.684283 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:37.684306 kubelet[2719]: W1213 02:34:37.684300 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:37.684391 kubelet[2719]: E1213 02:34:37.684317 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:34:37.731231 containerd[1492]: time="2024-12-13T02:34:37.731149380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c64f4584-bvcqk,Uid:07a7c032-8742-47f4-a886-e88b5dacebfd,Namespace:calico-system,Attempt:0,}" Dec 13 02:34:37.757564 containerd[1492]: time="2024-12-13T02:34:37.757459270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:37.757564 containerd[1492]: time="2024-12-13T02:34:37.757530526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:37.757564 containerd[1492]: time="2024-12-13T02:34:37.757543471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:37.757564 containerd[1492]: time="2024-12-13T02:34:37.757645916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:34:37.770612 kubelet[2719]: E1213 02:34:37.770437 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:37.770612 kubelet[2719]: W1213 02:34:37.770455 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:37.770612 kubelet[2719]: E1213 02:34:37.770506 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the three FlexVolume driver-call messages above repeat with timestamps 02:34:37.770 through 02:34:37.798 as kubelet re-probes the missing plugin binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds; the absent executable produces empty output, which is why the JSON unmarshal fails; duplicates elided …]
Dec 13 02:34:37.774251 systemd[1]: Started cri-containerd-4e1fafe6ad510b4b2583290e3d5fadb55a8940b758ba1a0daf3180cccf8b425d.scope - libcontainer container 4e1fafe6ad510b4b2583290e3d5fadb55a8940b758ba1a0daf3180cccf8b425d.
Dec 13 02:34:37.802274 containerd[1492]: time="2024-12-13T02:34:37.802158604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9j2hm,Uid:66f5f0e1-fcfc-4516-a925-ff028165afee,Namespace:calico-system,Attempt:0,}" Dec 13 02:34:37.849187 containerd[1492]: time="2024-12-13T02:34:37.849141016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c64f4584-bvcqk,Uid:07a7c032-8742-47f4-a886-e88b5dacebfd,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e1fafe6ad510b4b2583290e3d5fadb55a8940b758ba1a0daf3180cccf8b425d\"" Dec 13 02:34:37.851647 containerd[1492]: time="2024-12-13T02:34:37.851351262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:34:37.854811 containerd[1492]: time="2024-12-13T02:34:37.854683515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:34:37.854811 containerd[1492]: time="2024-12-13T02:34:37.854793304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:34:37.854960 containerd[1492]: time="2024-12-13T02:34:37.854911640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:37.855147 containerd[1492]: time="2024-12-13T02:34:37.855082063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:34:37.874222 systemd[1]: Started cri-containerd-53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939.scope - libcontainer container 53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939. Dec 13 02:34:37.894615 containerd[1492]: time="2024-12-13T02:34:37.894452755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9j2hm,Uid:66f5f0e1-fcfc-4516-a925-ff028165afee,Namespace:calico-system,Attempt:0,} returns sandbox id \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\"" Dec 13 02:34:39.435842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581217902.mount: Deactivated successfully. Dec 13 02:34:40.018135 kubelet[2719]: E1213 02:34:40.018068 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:40.322031 containerd[1492]: time="2024-12-13T02:34:40.321918487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:40.322954 containerd[1492]: time="2024-12-13T02:34:40.322782599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 02:34:40.323733 containerd[1492]: time="2024-12-13T02:34:40.323686167Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:40.348893 containerd[1492]: time="2024-12-13T02:34:40.348845936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:40.349439 containerd[1492]: time="2024-12-13T02:34:40.349409858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.498032728s" Dec 13 02:34:40.349487 containerd[1492]: time="2024-12-13T02:34:40.349437481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 02:34:40.352088 containerd[1492]: time="2024-12-13T02:34:40.352063394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:34:40.370948 containerd[1492]: time="2024-12-13T02:34:40.370829215Z" level=info msg="CreateContainer within sandbox \"4e1fafe6ad510b4b2583290e3d5fadb55a8940b758ba1a0daf3180cccf8b425d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:34:40.384980 containerd[1492]: time="2024-12-13T02:34:40.384945646Z" level=info msg="CreateContainer within sandbox 
\"4e1fafe6ad510b4b2583290e3d5fadb55a8940b758ba1a0daf3180cccf8b425d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7b8fb46bbd09fd1c197c475272d01b1b18c875aad1872c0548ea840241f58798\"" Dec 13 02:34:40.386238 containerd[1492]: time="2024-12-13T02:34:40.386167068Z" level=info msg="StartContainer for \"7b8fb46bbd09fd1c197c475272d01b1b18c875aad1872c0548ea840241f58798\"" Dec 13 02:34:40.432216 systemd[1]: Started cri-containerd-7b8fb46bbd09fd1c197c475272d01b1b18c875aad1872c0548ea840241f58798.scope - libcontainer container 7b8fb46bbd09fd1c197c475272d01b1b18c875aad1872c0548ea840241f58798. Dec 13 02:34:40.469901 containerd[1492]: time="2024-12-13T02:34:40.469352797Z" level=info msg="StartContainer for \"7b8fb46bbd09fd1c197c475272d01b1b18c875aad1872c0548ea840241f58798\" returns successfully" Dec 13 02:34:41.188450 kubelet[2719]: E1213 02:34:41.188409 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:41.188450 kubelet[2719]: W1213 02:34:41.188435 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:41.188450 kubelet[2719]: E1213 02:34:41.188453 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:34:41.189468 kubelet[2719]: E1213 02:34:41.188724 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:41.189468 kubelet[2719]: W1213 02:34:41.188732 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:41.189468 kubelet[2719]: E1213 02:34:41.188741 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:34:41.189468 kubelet[2719]: E1213 02:34:41.188925 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:41.189468 kubelet[2719]: W1213 02:34:41.188950 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:41.189468 kubelet[2719]: E1213 02:34:41.188959 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:34:41.189468 kubelet[2719]: E1213 02:34:41.189154 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:34:41.189468 kubelet[2719]: W1213 02:34:41.189161 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:34:41.189468 kubelet[2719]: E1213 02:34:41.189168 2719 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[... the preceding three-message FlexVolume probe failure repeats, only timestamps changing, through Dec 13 02:34:41.207 ...]
Dec 13 02:34:41.915277 containerd[1492]: time="2024-12-13T02:34:41.915239271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:41.917201 containerd[1492]: time="2024-12-13T02:34:41.917166053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 02:34:41.918565 containerd[1492]: time="2024-12-13T02:34:41.917971233Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:41.923118 containerd[1492]: time="2024-12-13T02:34:41.921871547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:41.924091 containerd[1492]: time="2024-12-13T02:34:41.923692077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.571598426s" Dec 13 02:34:41.924465 containerd[1492]: time="2024-12-13T02:34:41.924171769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 02:34:41.926269 containerd[1492]: time="2024-12-13T02:34:41.926249337Z" level=info msg="CreateContainer within sandbox \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:34:41.947041 containerd[1492]: time="2024-12-13T02:34:41.947006853Z" level=info msg="CreateContainer within sandbox \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9\"" Dec 13 02:34:41.948686 containerd[1492]: time="2024-12-13T02:34:41.948667268Z" level=info msg="StartContainer for \"070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9\"" Dec 13 02:34:41.991954 systemd[1]: Started cri-containerd-070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9.scope - libcontainer container 070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9. Dec 13 02:34:42.019366 kubelet[2719]: E1213 02:34:42.019314 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:42.043671 containerd[1492]: time="2024-12-13T02:34:42.043384425Z" level=info msg="StartContainer for \"070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9\" returns successfully" Dec 13 02:34:42.065407 systemd[1]: cri-containerd-070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9.scope: Deactivated successfully.
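The repeated driver-call.go / plugins.go messages above come from the kubelet's FlexVolume plugin probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary with the single argument init, and unmarshals the driver's stdout as JSON. Because the nodeagent~uds/uds executable does not exist yet, stdout is empty and decoding fails with "unexpected end of JSON input"; the flexvol-driver init container started above (from the pod2daemon-flexvol image) is what installs that driver. A minimal sketch of a FlexVolume executable that would satisfy the init probe, in Go (illustrative only, not Calico's actual uds driver):

```go
// Minimal sketch of a FlexVolume driver entry point. The kubelet execs the
// driver with a subcommand ("init" during plugin probing) and parses stdout
// as JSON; an empty stdout is exactly what produces the "unexpected end of
// JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Valid JSON on stdout is all the plugins.go probe needs to succeed.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unimplemented calls report "Not supported" per the FlexVolume convention.
		reply(driverStatus{Status: "Not supported", Message: os.Args[1]})
	}
}
```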
Dec 13 02:34:42.089538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9-rootfs.mount: Deactivated successfully. Dec 13 02:34:42.136429 kubelet[2719]: I1213 02:34:42.136374 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:34:42.150606 containerd[1492]: time="2024-12-13T02:34:42.141173536Z" level=info msg="shim disconnected" id=070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9 namespace=k8s.io Dec 13 02:34:42.151267 kubelet[2719]: I1213 02:34:42.151148 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c64f4584-bvcqk" podStartSLOduration=2.651763841 podStartE2EDuration="5.151131888s" podCreationTimestamp="2024-12-13 02:34:37 +0000 UTC" firstStartedPulling="2024-12-13 02:34:37.850808069 +0000 UTC m=+23.911291454" lastFinishedPulling="2024-12-13 02:34:40.350176115 +0000 UTC m=+26.410659501" observedRunningTime="2024-12-13 02:34:41.140056133 +0000 UTC m=+27.200539529" watchObservedRunningTime="2024-12-13 02:34:42.151131888 +0000 UTC m=+28.211615284" Dec 13 02:34:42.152755 containerd[1492]: time="2024-12-13T02:34:42.150546896Z" level=warning msg="cleaning up after shim disconnected" id=070b3a6630531dcfed566adc56cf1faa69df0bac5617e00d07ea76e1576b8fe9 namespace=k8s.io Dec 13 02:34:42.152755 containerd[1492]: time="2024-12-13T02:34:42.152750964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:34:43.147400 containerd[1492]: time="2024-12-13T02:34:43.146073723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:34:44.018635 kubelet[2719]: E1213 02:34:44.018274 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:45.809373 kubelet[2719]: I1213 02:34:45.808396 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:34:46.017760 kubelet[2719]: E1213 02:34:46.017685 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:47.840952 containerd[1492]: time="2024-12-13T02:34:47.840268418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:47.840952 containerd[1492]: time="2024-12-13T02:34:47.840904064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 02:34:47.841826 containerd[1492]: time="2024-12-13T02:34:47.841779133Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:47.843641 containerd[1492]: time="2024-12-13T02:34:47.843592073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:47.844272 containerd[1492]: 
time="2024-12-13T02:34:47.844173336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.697019331s" Dec 13 02:34:47.844272 containerd[1492]: time="2024-12-13T02:34:47.844200348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 02:34:47.847157 containerd[1492]: time="2024-12-13T02:34:47.847120187Z" level=info msg="CreateContainer within sandbox \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:34:47.889804 containerd[1492]: time="2024-12-13T02:34:47.889755346Z" level=info msg="CreateContainer within sandbox \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1\"" Dec 13 02:34:47.891554 containerd[1492]: time="2024-12-13T02:34:47.890345055Z" level=info msg="StartContainer for \"d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1\"" Dec 13 02:34:47.937283 systemd[1]: Started cri-containerd-d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1.scope - libcontainer container d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1. Dec 13 02:34:47.972669 containerd[1492]: time="2024-12-13T02:34:47.972626662Z" level=info msg="StartContainer for \"d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1\" returns successfully" Dec 13 02:34:48.019032 kubelet[2719]: E1213 02:34:48.018989 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:48.365656 systemd[1]: cri-containerd-d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1.scope: Deactivated successfully. Dec 13 02:34:48.392887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1-rootfs.mount: Deactivated successfully. 
Dec 13 02:34:48.425543 containerd[1492]: time="2024-12-13T02:34:48.425468421Z" level=info msg="shim disconnected" id=d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1 namespace=k8s.io Dec 13 02:34:48.425543 containerd[1492]: time="2024-12-13T02:34:48.425532301Z" level=warning msg="cleaning up after shim disconnected" id=d46ab49bfecce60b7d8f5ad9bceea8f72b31cd9ab81655e106247ce5da5e75f1 namespace=k8s.io Dec 13 02:34:48.425543 containerd[1492]: time="2024-12-13T02:34:48.425542410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:34:48.433530 kubelet[2719]: I1213 02:34:48.433497 2719 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:34:48.464264 kubelet[2719]: I1213 02:34:48.463333 2719 topology_manager.go:215] "Topology Admit Handler" podUID="734a4b30-7cd7-4742-a781-37649c45d07d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5hplf" Dec 13 02:34:48.468814 kubelet[2719]: I1213 02:34:48.468792 2719 topology_manager.go:215] "Topology Admit Handler" podUID="48966772-4c3b-4bf8-9f84-e6adfcd1cd76" podNamespace="calico-apiserver" podName="calico-apiserver-6694c5f699-hm2qh" Dec 13 02:34:48.472258 kubelet[2719]: I1213 02:34:48.472243 2719 topology_manager.go:215] "Topology Admit Handler" podUID="bb682486-1ffb-4358-85cb-f917f79cfe39" podNamespace="calico-system" podName="calico-kube-controllers-59f554d884-m8hvz" Dec 13 02:34:48.473127 kubelet[2719]: I1213 02:34:48.473005 2719 topology_manager.go:215] "Topology Admit Handler" podUID="734b07b3-7e7c-45ff-9b3d-412416c83498" podNamespace="calico-apiserver" podName="calico-apiserver-6694c5f699-zpn8r" Dec 13 02:34:48.473977 kubelet[2719]: I1213 02:34:48.473502 2719 topology_manager.go:215] "Topology Admit Handler" podUID="274c04a1-3774-49b6-9c70-53365ee4ce31" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lx698" Dec 13 02:34:48.479612 systemd[1]: Created slice kubepods-burstable-pod734a4b30_7cd7_4742_a781_37649c45d07d.slice - libcontainer container kubepods-burstable-pod734a4b30_7cd7_4742_a781_37649c45d07d.slice. Dec 13 02:34:48.489430 systemd[1]: Created slice kubepods-besteffort-pod48966772_4c3b_4bf8_9f84_e6adfcd1cd76.slice - libcontainer container kubepods-besteffort-pod48966772_4c3b_4bf8_9f84_e6adfcd1cd76.slice. Dec 13 02:34:48.496082 systemd[1]: Created slice kubepods-besteffort-podbb682486_1ffb_4358_85cb_f917f79cfe39.slice - libcontainer container kubepods-besteffort-podbb682486_1ffb_4358_85cb_f917f79cfe39.slice. Dec 13 02:34:48.503425 systemd[1]: Created slice kubepods-burstable-pod274c04a1_3774_49b6_9c70_53365ee4ce31.slice - libcontainer container kubepods-burstable-pod274c04a1_3774_49b6_9c70_53365ee4ce31.slice. Dec 13 02:34:48.509723 systemd[1]: Created slice kubepods-besteffort-pod734b07b3_7e7c_45ff_9b3d_412416c83498.slice - libcontainer container kubepods-besteffort-pod734b07b3_7e7c_45ff_9b3d_412416c83498.slice. 
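The Created slice lines show the systemd cgroup layout the kubelet requests: each pod gets a slice named for its QoS class (kubepods-burstable-..., kubepods-besteffort-...) with the pod UID appended after a "pod" prefix, and because "-" is systemd's slice-nesting separator, the dashes inside the UID are escaped to underscores. A small sketch of that mapping, reconstructed from the names in this log rather than from kubelet source:

```go
// Rebuilds the systemd slice names seen in the "Created slice" lines above
// from a pod's UID and QoS class (covers the burstable/besteffort pods in
// this log; derived by inspection, not taken from kubelet source).
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, uid string) string {
	// systemd expresses nesting with "-", so dashes in the UID are escaped.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// coredns-7db6d8ff4d-5hplf from the log above:
	fmt.Println(podSlice("burstable", "734a4b30-7cd7-4742-a781-37649c45d07d"))
	// -> kubepods-burstable-pod734a4b30_7cd7_4742_a781_37649c45d07d.slice
}
```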
Dec 13 02:34:48.554894 kubelet[2719]: I1213 02:34:48.554843 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734a4b30-7cd7-4742-a781-37649c45d07d-config-volume\") pod \"coredns-7db6d8ff4d-5hplf\" (UID: \"734a4b30-7cd7-4742-a781-37649c45d07d\") " pod="kube-system/coredns-7db6d8ff4d-5hplf" Dec 13 02:34:48.554894 kubelet[2719]: I1213 02:34:48.554885 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjmcq\" (UniqueName: \"kubernetes.io/projected/48966772-4c3b-4bf8-9f84-e6adfcd1cd76-kube-api-access-sjmcq\") pod \"calico-apiserver-6694c5f699-hm2qh\" (UID: \"48966772-4c3b-4bf8-9f84-e6adfcd1cd76\") " pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" Dec 13 02:34:48.554894 kubelet[2719]: I1213 02:34:48.554904 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndj8\" (UniqueName: \"kubernetes.io/projected/bb682486-1ffb-4358-85cb-f917f79cfe39-kube-api-access-tndj8\") pod \"calico-kube-controllers-59f554d884-m8hvz\" (UID: \"bb682486-1ffb-4358-85cb-f917f79cfe39\") " pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" Dec 13 02:34:48.554894 kubelet[2719]: I1213 02:34:48.554923 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274c04a1-3774-49b6-9c70-53365ee4ce31-config-volume\") pod \"coredns-7db6d8ff4d-lx698\" (UID: \"274c04a1-3774-49b6-9c70-53365ee4ce31\") " pod="kube-system/coredns-7db6d8ff4d-lx698" Dec 13 02:34:48.554894 kubelet[2719]: I1213 02:34:48.554944 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ww4t\" (UniqueName: \"kubernetes.io/projected/734a4b30-7cd7-4742-a781-37649c45d07d-kube-api-access-4ww4t\") pod \"coredns-7db6d8ff4d-5hplf\" (UID: \"734a4b30-7cd7-4742-a781-37649c45d07d\") " pod="kube-system/coredns-7db6d8ff4d-5hplf" Dec 13 02:34:48.555275 kubelet[2719]: I1213 02:34:48.555000 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/48966772-4c3b-4bf8-9f84-e6adfcd1cd76-calico-apiserver-certs\") pod \"calico-apiserver-6694c5f699-hm2qh\" (UID: \"48966772-4c3b-4bf8-9f84-e6adfcd1cd76\") " pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" Dec 13 02:34:48.555275 kubelet[2719]: I1213 02:34:48.555016 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb682486-1ffb-4358-85cb-f917f79cfe39-tigera-ca-bundle\") pod \"calico-kube-controllers-59f554d884-m8hvz\" (UID: \"bb682486-1ffb-4358-85cb-f917f79cfe39\") " pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" Dec 13 02:34:48.555275 kubelet[2719]: I1213 02:34:48.555032 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/734b07b3-7e7c-45ff-9b3d-412416c83498-calico-apiserver-certs\") pod \"calico-apiserver-6694c5f699-zpn8r\" (UID: \"734b07b3-7e7c-45ff-9b3d-412416c83498\") " pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" Dec 13 02:34:48.555275 kubelet[2719]: I1213 02:34:48.555046 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-h5bkp\" (UniqueName: \"kubernetes.io/projected/734b07b3-7e7c-45ff-9b3d-412416c83498-kube-api-access-h5bkp\") pod \"calico-apiserver-6694c5f699-zpn8r\" (UID: \"734b07b3-7e7c-45ff-9b3d-412416c83498\") " pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" Dec 13 02:34:48.555275 kubelet[2719]: I1213 02:34:48.555065 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vd2\" (UniqueName: \"kubernetes.io/projected/274c04a1-3774-49b6-9c70-53365ee4ce31-kube-api-access-42vd2\") pod \"coredns-7db6d8ff4d-lx698\" (UID: \"274c04a1-3774-49b6-9c70-53365ee4ce31\") " pod="kube-system/coredns-7db6d8ff4d-lx698" Dec 13 02:34:48.787751 containerd[1492]: time="2024-12-13T02:34:48.787631080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hplf,Uid:734a4b30-7cd7-4742-a781-37649c45d07d,Namespace:kube-system,Attempt:0,}" Dec 13 02:34:48.795299 containerd[1492]: time="2024-12-13T02:34:48.795261403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-hm2qh,Uid:48966772-4c3b-4bf8-9f84-e6adfcd1cd76,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:34:48.802699 containerd[1492]: time="2024-12-13T02:34:48.802666969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f554d884-m8hvz,Uid:bb682486-1ffb-4358-85cb-f917f79cfe39,Namespace:calico-system,Attempt:0,}" Dec 13 02:34:48.807039 containerd[1492]: time="2024-12-13T02:34:48.806827109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lx698,Uid:274c04a1-3774-49b6-9c70-53365ee4ce31,Namespace:kube-system,Attempt:0,}" Dec 13 02:34:48.851634 containerd[1492]: time="2024-12-13T02:34:48.851211193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-zpn8r,Uid:734b07b3-7e7c-45ff-9b3d-412416c83498,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:34:49.051728 containerd[1492]: time="2024-12-13T02:34:49.051389850Z" level=error msg="Failed to destroy network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.052311 containerd[1492]: time="2024-12-13T02:34:49.052279237Z" level=error msg="Failed to destroy network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.052865 containerd[1492]: time="2024-12-13T02:34:49.052672833Z" level=error msg="encountered an error cleaning up failed sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.052865 containerd[1492]: time="2024-12-13T02:34:49.052731304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f554d884-m8hvz,Uid:bb682486-1ffb-4358-85cb-f917f79cfe39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.054631 containerd[1492]: time="2024-12-13T02:34:49.054609496Z" level=error msg="Failed to destroy network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.055084 containerd[1492]: time="2024-12-13T02:34:49.055015386Z" level=error msg="encountered an error cleaning up failed sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.055084 containerd[1492]: time="2024-12-13T02:34:49.055054730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-hm2qh,Uid:48966772-4c3b-4bf8-9f84-e6adfcd1cd76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.058841 containerd[1492]: time="2024-12-13T02:34:49.052696478Z" level=error msg="encountered an error cleaning up failed sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.058841 containerd[1492]: time="2024-12-13T02:34:49.058165648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hplf,Uid:734a4b30-7cd7-4742-a781-37649c45d07d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.058841 containerd[1492]: time="2024-12-13T02:34:49.058246833Z" level=error msg="Failed to destroy network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.058841 containerd[1492]: time="2024-12-13T02:34:49.058493760Z" level=error msg="encountered an error cleaning up failed sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.058841 containerd[1492]: time="2024-12-13T02:34:49.058519440Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6694c5f699-zpn8r,Uid:734b07b3-7e7c-45ff-9b3d-412416c83498,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.059006 kubelet[2719]: E1213 02:34:49.058698 2719 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.059006 kubelet[2719]: E1213 02:34:49.058757 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" Dec 13 02:34:49.059006 kubelet[2719]: E1213 02:34:49.058774 2719 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" Dec 13 02:34:49.060290 kubelet[2719]: E1213 02:34:49.058808 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6694c5f699-zpn8r_calico-apiserver(734b07b3-7e7c-45ff-9b3d-412416c83498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6694c5f699-zpn8r_calico-apiserver(734b07b3-7e7c-45ff-9b3d-412416c83498)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" podUID="734b07b3-7e7c-45ff-9b3d-412416c83498" Dec 13 02:34:49.060290 kubelet[2719]: E1213 02:34:49.059432 2719 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.060290 kubelet[2719]: E1213 02:34:49.059462 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" Dec 13 02:34:49.060385 kubelet[2719]: E1213 02:34:49.059477 2719 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" Dec 13 02:34:49.060385 kubelet[2719]: E1213 02:34:49.059500 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6694c5f699-hm2qh_calico-apiserver(48966772-4c3b-4bf8-9f84-e6adfcd1cd76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6694c5f699-hm2qh_calico-apiserver(48966772-4c3b-4bf8-9f84-e6adfcd1cd76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" podUID="48966772-4c3b-4bf8-9f84-e6adfcd1cd76" Dec 13 02:34:49.060385 kubelet[2719]: E1213 02:34:49.059526 2719 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.060540 kubelet[2719]: E1213 02:34:49.059540 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" Dec 13 02:34:49.060540 kubelet[2719]: E1213 02:34:49.059551 2719 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" Dec 13 02:34:49.060540 kubelet[2719]: E1213 02:34:49.059585 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59f554d884-m8hvz_calico-system(bb682486-1ffb-4358-85cb-f917f79cfe39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59f554d884-m8hvz_calico-system(bb682486-1ffb-4358-85cb-f917f79cfe39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" podUID="bb682486-1ffb-4358-85cb-f917f79cfe39" Dec 13 02:34:49.060643 kubelet[2719]: E1213 02:34:49.059614 2719 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.060643 kubelet[2719]: E1213 02:34:49.059628 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5hplf" Dec 13 02:34:49.060643 kubelet[2719]: E1213 02:34:49.059639 2719 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5hplf" Dec 13 02:34:49.060703 kubelet[2719]: E1213 02:34:49.059668 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5hplf_kube-system(734a4b30-7cd7-4742-a781-37649c45d07d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5hplf_kube-system(734a4b30-7cd7-4742-a781-37649c45d07d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5hplf" podUID="734a4b30-7cd7-4742-a781-37649c45d07d" Dec 13 02:34:49.063052 containerd[1492]: time="2024-12-13T02:34:49.062721017Z" level=error msg="Failed to destroy network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.063176 containerd[1492]: time="2024-12-13T02:34:49.063047646Z" level=error msg="encountered an error cleaning up failed sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.063176 containerd[1492]: time="2024-12-13T02:34:49.063090207Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lx698,Uid:274c04a1-3774-49b6-9c70-53365ee4ce31,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.063326 kubelet[2719]: E1213 02:34:49.063298 2719 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.063372 kubelet[2719]: E1213 02:34:49.063328 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lx698" Dec 13 02:34:49.063372 kubelet[2719]: E1213 02:34:49.063342 2719 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lx698" Dec 13 02:34:49.063469 kubelet[2719]: E1213 02:34:49.063368 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lx698_kube-system(274c04a1-3774-49b6-9c70-53365ee4ce31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lx698_kube-system(274c04a1-3774-49b6-9c70-53365ee4ce31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lx698" podUID="274c04a1-3774-49b6-9c70-53365ee4ce31" Dec 13 02:34:49.165118 kubelet[2719]: I1213 02:34:49.165054 2719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:34:49.169067 kubelet[2719]: I1213 02:34:49.168038 2719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:34:49.170414 containerd[1492]: time="2024-12-13T02:34:49.170370493Z" level=info msg="StopPodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\"" Dec 13 02:34:49.172036 containerd[1492]: time="2024-12-13T02:34:49.171536214Z" level=info msg="StopPodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\"" Dec 13 02:34:49.174674 containerd[1492]: time="2024-12-13T02:34:49.174647171Z" level=info 
msg="Ensure that sandbox fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1 in task-service has been cleanup successfully" Dec 13 02:34:49.175070 containerd[1492]: time="2024-12-13T02:34:49.174661750Z" level=info msg="Ensure that sandbox e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996 in task-service has been cleanup successfully" Dec 13 02:34:49.176742 kubelet[2719]: I1213 02:34:49.176722 2719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:34:49.178731 containerd[1492]: time="2024-12-13T02:34:49.178581693Z" level=info msg="StopPodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\"" Dec 13 02:34:49.178776 containerd[1492]: time="2024-12-13T02:34:49.178760592Z" level=info msg="Ensure that sandbox 6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63 in task-service has been cleanup successfully" Dec 13 02:34:49.181891 kubelet[2719]: I1213 02:34:49.181620 2719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:34:49.183633 containerd[1492]: time="2024-12-13T02:34:49.183422722Z" level=info msg="StopPodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\"" Dec 13 02:34:49.185618 containerd[1492]: time="2024-12-13T02:34:49.185114671Z" level=info msg="Ensure that sandbox 4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64 in task-service has been cleanup successfully" Dec 13 02:34:49.189652 kubelet[2719]: I1213 02:34:49.189180 2719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:34:49.191001 containerd[1492]: time="2024-12-13T02:34:49.190979071Z" level=info msg="StopPodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\"" Dec 13 02:34:49.194552 containerd[1492]: time="2024-12-13T02:34:49.194532449Z" level=info msg="Ensure that sandbox b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2 in task-service has been cleanup successfully" Dec 13 02:34:49.199704 containerd[1492]: time="2024-12-13T02:34:49.199676003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:34:49.272166 containerd[1492]: time="2024-12-13T02:34:49.272116394Z" level=error msg="StopPodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" failed" error="failed to destroy network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.272581 containerd[1492]: time="2024-12-13T02:34:49.272127896Z" level=error msg="StopPodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" failed" error="failed to destroy network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.272649 kubelet[2719]: E1213 02:34:49.272351 2719 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:34:49.272649 kubelet[2719]: E1213 02:34:49.272361 2719 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:34:49.272649 kubelet[2719]: E1213 02:34:49.272401 2719 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996"} Dec 13 02:34:49.272649 kubelet[2719]: E1213 02:34:49.272455 2719 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"274c04a1-3774-49b6-9c70-53365ee4ce31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:34:49.272817 kubelet[2719]: E1213 02:34:49.272477 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"274c04a1-3774-49b6-9c70-53365ee4ce31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lx698" podUID="274c04a1-3774-49b6-9c70-53365ee4ce31" Dec 13 02:34:49.272817 kubelet[2719]: E1213 02:34:49.272485 2719 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64"} Dec 13 02:34:49.272817 kubelet[2719]: E1213 02:34:49.272510 2719 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48966772-4c3b-4bf8-9f84-e6adfcd1cd76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:34:49.272817 kubelet[2719]: E1213 02:34:49.272528 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48966772-4c3b-4bf8-9f84-e6adfcd1cd76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" podUID="48966772-4c3b-4bf8-9f84-e6adfcd1cd76" Dec 13 02:34:49.273213 containerd[1492]: time="2024-12-13T02:34:49.272957860Z" level=error msg="StopPodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" failed" error="failed to destroy network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.273608 kubelet[2719]: E1213 02:34:49.273418 2719 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:34:49.273608 kubelet[2719]: E1213 02:34:49.273442 2719 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1"} Dec 13 02:34:49.273608 kubelet[2719]: E1213 02:34:49.273516 2719 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"734b07b3-7e7c-45ff-9b3d-412416c83498\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:34:49.273608 kubelet[2719]: E1213 02:34:49.273534 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"734b07b3-7e7c-45ff-9b3d-412416c83498\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" podUID="734b07b3-7e7c-45ff-9b3d-412416c83498" Dec 13 02:34:49.277439 containerd[1492]: time="2024-12-13T02:34:49.277397838Z" level=error msg="StopPodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" failed" error="failed to destroy network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.277674 kubelet[2719]: E1213 02:34:49.277634 2719 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:34:49.277674 kubelet[2719]: E1213 02:34:49.277673 2719 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63"} Dec 13 02:34:49.277760 kubelet[2719]: E1213 02:34:49.277692 2719 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb682486-1ffb-4358-85cb-f917f79cfe39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:34:49.277760 kubelet[2719]: E1213 02:34:49.277707 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb682486-1ffb-4358-85cb-f917f79cfe39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" podUID="bb682486-1ffb-4358-85cb-f917f79cfe39" Dec 13 02:34:49.279305 containerd[1492]: time="2024-12-13T02:34:49.279268917Z" level=error msg="StopPodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" failed" error="failed to destroy network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:49.279489 kubelet[2719]: E1213 02:34:49.279376 2719 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:34:49.279489 kubelet[2719]: E1213 02:34:49.279398 2719 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2"} Dec 13 02:34:49.279489 kubelet[2719]: E1213 02:34:49.279417 2719 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"734a4b30-7cd7-4742-a781-37649c45d07d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:34:49.279489 kubelet[2719]: E1213 02:34:49.279433 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"734a4b30-7cd7-4742-a781-37649c45d07d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5hplf" podUID="734a4b30-7cd7-4742-a781-37649c45d07d" Dec 13 02:34:49.878988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1-shm.mount: Deactivated successfully. Dec 13 02:34:49.879145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996-shm.mount: Deactivated successfully. Dec 13 02:34:49.879232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63-shm.mount: Deactivated successfully. Dec 13 02:34:49.879312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64-shm.mount: Deactivated successfully. Dec 13 02:34:49.879394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2-shm.mount: Deactivated successfully. Dec 13 02:34:50.023634 systemd[1]: Created slice kubepods-besteffort-podace80116_5126_48a5_986c_e83257cecc61.slice - libcontainer container kubepods-besteffort-podace80116_5126_48a5_986c_e83257cecc61.slice. Dec 13 02:34:50.026077 containerd[1492]: time="2024-12-13T02:34:50.026036964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8zln,Uid:ace80116-5126-48a5-986c-e83257cecc61,Namespace:calico-system,Attempt:0,}" Dec 13 02:34:50.089120 containerd[1492]: time="2024-12-13T02:34:50.089050310Z" level=error msg="Failed to destroy network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:50.089486 containerd[1492]: time="2024-12-13T02:34:50.089458784Z" level=error msg="encountered an error cleaning up failed sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:50.089541 containerd[1492]: time="2024-12-13T02:34:50.089511925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8zln,Uid:ace80116-5126-48a5-986c-e83257cecc61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:50.090051 kubelet[2719]: E1213 02:34:50.089735 2719 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:50.090051 kubelet[2719]: E1213 02:34:50.089784 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:50.090051 kubelet[2719]: E1213 02:34:50.089802 2719 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r8zln" Dec 13 02:34:50.090397 kubelet[2719]: E1213 02:34:50.089844 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r8zln_calico-system(ace80116-5126-48a5-986c-e83257cecc61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r8zln_calico-system(ace80116-5126-48a5-986c-e83257cecc61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:50.091640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51-shm.mount: Deactivated successfully. 
Dec 13 02:34:50.200900 kubelet[2719]: I1213 02:34:50.200869 2719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:34:50.202551 containerd[1492]: time="2024-12-13T02:34:50.201448296Z" level=info msg="StopPodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\"" Dec 13 02:34:50.205500 containerd[1492]: time="2024-12-13T02:34:50.205467926Z" level=info msg="Ensure that sandbox de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51 in task-service has been cleanup successfully" Dec 13 02:34:50.235203 containerd[1492]: time="2024-12-13T02:34:50.235153150Z" level=error msg="StopPodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" failed" error="failed to destroy network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:34:50.235500 kubelet[2719]: E1213 02:34:50.235454 2719 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:34:50.235565 kubelet[2719]: E1213 02:34:50.235513 2719 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51"} Dec 13 02:34:50.235588 kubelet[2719]: E1213 02:34:50.235574 2719 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ace80116-5126-48a5-986c-e83257cecc61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:34:50.235649 kubelet[2719]: E1213 02:34:50.235606 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ace80116-5126-48a5-986c-e83257cecc61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r8zln" podUID="ace80116-5126-48a5-986c-e83257cecc61" Dec 13 02:34:56.094868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574744450.mount: Deactivated successfully. 
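
Note that the cleanup path is stuck behind the same file: the (delete) calls fail exactly like the (add) calls, so the kubelet can neither create nor tear down these sandboxes and simply requeues each pod with backoff until the network plugin becomes ready. A toy model of that loop, with illustrative names rather than kubelet's real types:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoNodename = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

func createPodSandbox(networkReady bool) error {
	if !networkReady {
		return fmt.Errorf("failed to setup network for sandbox: %w", errNoNodename)
	}
	return nil
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		networkReady := attempt >= 4 // stands in for calico/node finally writing the file
		if err := createPodSandbox(networkReady); err != nil {
			fmt.Printf("attempt %d: Error syncing pod, skipping: %v\n", attempt, err)
			time.Sleep(backoff)
			backoff *= 2 // kubelet's real backoff is exponential with a cap
			continue
		}
		fmt.Printf("attempt %d: sandbox created\n", attempt)
		return
	}
}
```
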
Dec 13 02:34:56.170488 containerd[1492]: time="2024-12-13T02:34:56.169549176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 02:34:56.178530 containerd[1492]: time="2024-12-13T02:34:56.178502706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.972828006s" Dec 13 02:34:56.178964 containerd[1492]: time="2024-12-13T02:34:56.178613556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 02:34:56.180073 containerd[1492]: time="2024-12-13T02:34:56.180000072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:56.211398 containerd[1492]: time="2024-12-13T02:34:56.211057235Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:56.213119 containerd[1492]: time="2024-12-13T02:34:56.212885897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:34:56.259382 containerd[1492]: time="2024-12-13T02:34:56.259337744Z" level=info msg="CreateContainer within sandbox \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:34:56.303824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743060484.mount: Deactivated successfully. Dec 13 02:34:56.346315 containerd[1492]: time="2024-12-13T02:34:56.346129849Z" level=info msg="CreateContainer within sandbox \"53528e1ea1c6dce2f0082f6a289c13b0ee5ab5f1c710567752de6386b9b44939\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439\"" Dec 13 02:34:56.352347 containerd[1492]: time="2024-12-13T02:34:56.352309938Z" level=info msg="StartContainer for \"15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439\"" Dec 13 02:34:56.484262 systemd[1]: Started cri-containerd-15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439.scope - libcontainer container 15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439. Dec 13 02:34:56.535027 containerd[1492]: time="2024-12-13T02:34:56.534980679Z" level=info msg="StartContainer for \"15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439\" returns successfully" Dec 13 02:34:56.611139 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:34:56.613121 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
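
The deadlock breaks here: the calico/node image finishes pulling and the container starts, at which point the kernel loads the WireGuard module (calico-node probes for WireGuard support at startup, which is presumably what triggers the load). The pull stats containerd logs are enough to derive throughput; a small helper using the two values above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 142742010                    // "bytes read=..." from the log
	dur, err := time.ParseDuration("6.972828006s") // "in ..." from the log
	if err != nil {
		panic(err)
	}
	mib := float64(bytesRead) / (1 << 20)
	// Prints roughly: pulled 136 MiB in 6.972828006s (19.5 MiB/s)
	fmt.Printf("pulled %.0f MiB in %s (%.1f MiB/s)\n", mib, dur, mib/dur.Seconds())
}
```
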
Dec 13 02:34:57.317871 kubelet[2719]: I1213 02:34:57.316063 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9j2hm" podStartSLOduration=1.984405687 podStartE2EDuration="20.298741565s" podCreationTimestamp="2024-12-13 02:34:37 +0000 UTC" firstStartedPulling="2024-12-13 02:34:37.897052254 +0000 UTC m=+23.957535639" lastFinishedPulling="2024-12-13 02:34:56.211388131 +0000 UTC m=+42.271871517" observedRunningTime="2024-12-13 02:34:57.298127011 +0000 UTC m=+43.358610407" watchObservedRunningTime="2024-12-13 02:34:57.298741565 +0000 UTC m=+43.359224951" Dec 13 02:34:58.243166 kernel: bpftool[3917]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:34:58.254994 kubelet[2719]: I1213 02:34:58.254392 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:34:58.450966 systemd-networkd[1394]: vxlan.calico: Link UP Dec 13 02:34:58.451312 systemd-networkd[1394]: vxlan.calico: Gained carrier Dec 13 02:34:59.635340 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Dec 13 02:34:59.929008 systemd[1]: Started sshd@7-78.47.218.196:22-167.94.145.109:53478.service - OpenSSH per-connection server daemon (167.94.145.109:53478). Dec 13 02:35:00.019057 containerd[1492]: time="2024-12-13T02:35:00.018936263Z" level=info msg="StopPodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\"" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.099 [INFO][4008] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.099 [INFO][4008] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" iface="eth0" netns="/var/run/netns/cni-565caddc-5444-ab1f-caf7-c045d9b562af" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.099 [INFO][4008] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" iface="eth0" netns="/var/run/netns/cni-565caddc-5444-ab1f-caf7-c045d9b562af" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.102 [INFO][4008] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" iface="eth0" netns="/var/run/netns/cni-565caddc-5444-ab1f-caf7-c045d9b562af" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.102 [INFO][4008] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.102 [INFO][4008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.243 [INFO][4016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.250 [INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.250 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.261 [WARNING][4016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.261 [INFO][4016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.263 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:00.268030 containerd[1492]: 2024-12-13 02:35:00.265 [INFO][4008] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:00.272977 containerd[1492]: time="2024-12-13T02:35:00.272838452Z" level=info msg="TearDown network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" successfully" Dec 13 02:35:00.272977 containerd[1492]: time="2024-12-13T02:35:00.272869831Z" level=info msg="StopPodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" returns successfully" Dec 13 02:35:00.273275 systemd[1]: run-netns-cni\x2d565caddc\x2d5444\x2dab1f\x2dcaf7\x2dc045d9b562af.mount: Deactivated successfully. 
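
With calico/node running, the queued StopPodSandbox for the stale sandbox finally goes through: the plugin enters the recorded netns, finds the workload's veth already gone, asks IPAM to release an address that was never actually assigned, and ignores both conditions. CNI DEL is specified to be idempotent, which is what makes this safe; a hedged sketch of that shape, with stub helpers standing in for the real dataplane and IPAM calls:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// Stubs standing in for the real dataplane and IPAM operations.
func deleteVeth(netns, iface string) error  { return errNotFound }
func releaseIPByHandle(handle string) error { return errNotFound }

// cmdDel sketches the delete path: every "already gone" condition is
// logged and swallowed so the kubelet's retry loop can make progress.
func cmdDel(netns, iface, handle string) error {
	if err := deleteVeth(netns, iface); errors.Is(err, errNotFound) {
		fmt.Println("Workload's veth was already gone. Nothing to do.")
	} else if err != nil {
		return err
	}
	if err := releaseIPByHandle(handle); errors.Is(err, errNotFound) {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
	} else if err != nil {
		return err
	}
	return nil
}

func main() {
	err := cmdDel("/var/run/netns/cni-565caddc-5444-ab1f-caf7-c045d9b562af", "eth0",
		"k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64")
	fmt.Println("DEL result:", err)
}
```
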
Dec 13 02:35:00.274238 containerd[1492]: time="2024-12-13T02:35:00.274149743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-hm2qh,Uid:48966772-4c3b-4bf8-9f84-e6adfcd1cd76,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:35:00.412370 systemd-networkd[1394]: cali9926a45c73a: Link UP Dec 13 02:35:00.413490 systemd-networkd[1394]: cali9926a45c73a: Gained carrier Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.336 [INFO][4024] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0 calico-apiserver-6694c5f699- calico-apiserver 48966772-4c3b-4bf8-9f84-e6adfcd1cd76 750 0 2024-12-13 02:34:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6694c5f699 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-b-5cf67d135c calico-apiserver-6694c5f699-hm2qh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9926a45c73a [] []}} ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.336 [INFO][4024] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.365 [INFO][4035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" HandleID="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.377 [INFO][4035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" HandleID="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-b-5cf67d135c", "pod":"calico-apiserver-6694c5f699-hm2qh", "timestamp":"2024-12-13 02:35:00.365202917 +0000 UTC"}, Hostname:"ci-4081-2-1-b-5cf67d135c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.378 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.378 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.378 [INFO][4035] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-5cf67d135c' Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.380 [INFO][4035] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.387 [INFO][4035] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.390 [INFO][4035] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.391 [INFO][4035] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.393 [INFO][4035] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.393 [INFO][4035] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.395 [INFO][4035] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.398 [INFO][4035] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.403 [INFO][4035] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.129/26] block=192.168.30.128/26 handle="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.403 [INFO][4035] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.129/26] handle="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.403 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:35:00.431182 containerd[1492]: 2024-12-13 02:35:00.403 [INFO][4035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.129/26] IPv6=[] ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" HandleID="k8s-pod-network.a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.432028 containerd[1492]: 2024-12-13 02:35:00.407 [INFO][4024] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"48966772-4c3b-4bf8-9f84-e6adfcd1cd76", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"", Pod:"calico-apiserver-6694c5f699-hm2qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9926a45c73a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:00.432028 containerd[1492]: 2024-12-13 02:35:00.407 [INFO][4024] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.129/32] ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.432028 containerd[1492]: 2024-12-13 02:35:00.407 [INFO][4024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9926a45c73a ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.432028 containerd[1492]: 2024-12-13 02:35:00.414 [INFO][4024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.432028 containerd[1492]: 2024-12-13 02:35:00.414 [INFO][4024] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"48966772-4c3b-4bf8-9f84-e6adfcd1cd76", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce", Pod:"calico-apiserver-6694c5f699-hm2qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9926a45c73a", MAC:"b6:f5:46:46:2a:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:00.432028 containerd[1492]: 2024-12-13 02:35:00.423 [INFO][4024] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-hm2qh" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:00.492857 containerd[1492]: time="2024-12-13T02:35:00.492503367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:35:00.492857 containerd[1492]: time="2024-12-13T02:35:00.492562769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:35:00.492857 containerd[1492]: time="2024-12-13T02:35:00.492584540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:00.492857 containerd[1492]: time="2024-12-13T02:35:00.492686413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:00.520261 systemd[1]: Started cri-containerd-a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce.scope - libcontainer container a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce. 
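
The ipam/ipam.go trace just above is Calico's block-affinity allocation: this host has claimed the /26 block 192.168.30.128/26, and under the host-wide IPAM lock it loads the block, claims the next free /32 (192.168.30.129 here; .128 was plausibly taken earlier, e.g. by the vxlan.calico tunnel address), and writes the block back. A toy version of that bookkeeping, assuming a simple in-memory map where the real code writes the block to the datastore:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu   sync.Mutex            // stands in for the "host-wide IPAM lock"
	cidr netip.Prefix          // the affine block, e.g. 192.168.30.128/26
	used map[netip.Addr]string // addr -> allocation handle
}

// assign claims the lowest free address in the block for a handle,
// mirroring "Attempting to assign 1 addresses from block".
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.30.128/26"),
		used: map[netip.Addr]string{},
	}
	// Pre-mark .128 as taken so the demo matches the log, where .129 is
	// the first pod address (plausibly .128 went to the vxlan tunnel).
	b.used[netip.MustParseAddr("192.168.30.128")] = "reserved"
	for _, h := range []string{"calico-apiserver-6694c5f699-hm2qh", "csi-node-driver-r8zln"} {
		ip, _ := b.assign(h)
		fmt.Printf("%s -> %s/26\n", h, ip)
	}
}
```

Run as-is, this prints 192.168.30.129 and then 192.168.30.130, matching the two Successfully-claimed-IPs lines in this section.
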
Dec 13 02:35:00.563653 containerd[1492]: time="2024-12-13T02:35:00.563544477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-hm2qh,Uid:48966772-4c3b-4bf8-9f84-e6adfcd1cd76,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce\"" Dec 13 02:35:00.567603 containerd[1492]: time="2024-12-13T02:35:00.567358452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:35:01.018296 containerd[1492]: time="2024-12-13T02:35:01.018054446Z" level=info msg="StopPodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\"" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.055 [INFO][4108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.055 [INFO][4108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" iface="eth0" netns="/var/run/netns/cni-b8eec63b-0e05-975c-a39e-23ba3565f691" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.055 [INFO][4108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" iface="eth0" netns="/var/run/netns/cni-b8eec63b-0e05-975c-a39e-23ba3565f691" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.057 [INFO][4108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" iface="eth0" netns="/var/run/netns/cni-b8eec63b-0e05-975c-a39e-23ba3565f691" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.057 [INFO][4108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.057 [INFO][4108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.078 [INFO][4115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.078 [INFO][4115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.078 [INFO][4115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.084 [WARNING][4115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.084 [INFO][4115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.086 [INFO][4115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:01.091003 containerd[1492]: 2024-12-13 02:35:01.089 [INFO][4108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:01.092989 containerd[1492]: time="2024-12-13T02:35:01.091693586Z" level=info msg="TearDown network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" successfully" Dec 13 02:35:01.092989 containerd[1492]: time="2024-12-13T02:35:01.091719053Z" level=info msg="StopPodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" returns successfully" Dec 13 02:35:01.092989 containerd[1492]: time="2024-12-13T02:35:01.092386235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8zln,Uid:ace80116-5126-48a5-986c-e83257cecc61,Namespace:calico-system,Attempt:1,}" Dec 13 02:35:01.182052 systemd-networkd[1394]: cali88cd9b167ef: Link UP Dec 13 02:35:01.182774 systemd-networkd[1394]: cali88cd9b167ef: Gained carrier Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.127 [INFO][4122] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0 csi-node-driver- calico-system ace80116-5126-48a5-986c-e83257cecc61 758 0 2024-12-13 02:34:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-b-5cf67d135c csi-node-driver-r8zln eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali88cd9b167ef [] []}} ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.127 [INFO][4122] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.150 [INFO][4132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" HandleID="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 
02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.157 [INFO][4132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" HandleID="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-5cf67d135c", "pod":"csi-node-driver-r8zln", "timestamp":"2024-12-13 02:35:01.15087959 +0000 UTC"}, Hostname:"ci-4081-2-1-b-5cf67d135c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.157 [INFO][4132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.157 [INFO][4132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.157 [INFO][4132] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-5cf67d135c' Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.159 [INFO][4132] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.162 [INFO][4132] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.165 [INFO][4132] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.166 [INFO][4132] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.168 [INFO][4132] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.168 [INFO][4132] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.169 [INFO][4132] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05 Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.172 [INFO][4132] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.176 [INFO][4132] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.130/26] block=192.168.30.128/26 handle="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.176 [INFO][4132] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.130/26] handle="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" 
host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.176 [INFO][4132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:01.201590 containerd[1492]: 2024-12-13 02:35:01.176 [INFO][4132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.130/26] IPv6=[] ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" HandleID="k8s-pod-network.37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.203535 containerd[1492]: 2024-12-13 02:35:01.178 [INFO][4122] cni-plugin/k8s.go 386: Populated endpoint ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ace80116-5126-48a5-986c-e83257cecc61", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"", Pod:"csi-node-driver-r8zln", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88cd9b167ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:01.203535 containerd[1492]: 2024-12-13 02:35:01.179 [INFO][4122] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.130/32] ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.203535 containerd[1492]: 2024-12-13 02:35:01.179 [INFO][4122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88cd9b167ef ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.203535 containerd[1492]: 2024-12-13 02:35:01.183 [INFO][4122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.203535 containerd[1492]: 2024-12-13 
02:35:01.183 [INFO][4122] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ace80116-5126-48a5-986c-e83257cecc61", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05", Pod:"csi-node-driver-r8zln", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88cd9b167ef", MAC:"f6:b9:c1:ff:9f:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:01.203535 containerd[1492]: 2024-12-13 02:35:01.193 [INFO][4122] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05" Namespace="calico-system" Pod="csi-node-driver-r8zln" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:01.224323 containerd[1492]: time="2024-12-13T02:35:01.223987364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:35:01.224323 containerd[1492]: time="2024-12-13T02:35:01.224062226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:35:01.224323 containerd[1492]: time="2024-12-13T02:35:01.224079027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:01.224610 containerd[1492]: time="2024-12-13T02:35:01.224292581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:01.247464 systemd[1]: Started cri-containerd-37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05.scope - libcontainer container 37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05. 
Dec 13 02:35:01.273293 containerd[1492]: time="2024-12-13T02:35:01.271859542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r8zln,Uid:ace80116-5126-48a5-986c-e83257cecc61,Namespace:calico-system,Attempt:1,} returns sandbox id \"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05\"" Dec 13 02:35:01.275798 systemd[1]: run-containerd-runc-k8s.io-a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce-runc.Velah5.mount: Deactivated successfully. Dec 13 02:35:01.277306 systemd[1]: run-netns-cni\x2db8eec63b\x2d0e05\x2d975c\x2da39e\x2d23ba3565f691.mount: Deactivated successfully. Dec 13 02:35:02.003358 systemd-networkd[1394]: cali9926a45c73a: Gained IPv6LL Dec 13 02:35:02.963925 systemd-networkd[1394]: cali88cd9b167ef: Gained IPv6LL Dec 13 02:35:03.020016 containerd[1492]: time="2024-12-13T02:35:03.019631770Z" level=info msg="StopPodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\"" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.078 [INFO][4210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.079 [INFO][4210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" iface="eth0" netns="/var/run/netns/cni-47b58507-8b16-7e6a-7606-4b21cc44c3f6" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.080 [INFO][4210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" iface="eth0" netns="/var/run/netns/cni-47b58507-8b16-7e6a-7606-4b21cc44c3f6" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.081 [INFO][4210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" iface="eth0" netns="/var/run/netns/cni-47b58507-8b16-7e6a-7606-4b21cc44c3f6" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.081 [INFO][4210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.081 [INFO][4210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.117 [INFO][4216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.117 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.117 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.123 [WARNING][4216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.123 [INFO][4216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.125 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:03.131393 containerd[1492]: 2024-12-13 02:35:03.128 [INFO][4210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:03.132359 containerd[1492]: time="2024-12-13T02:35:03.131854826Z" level=info msg="TearDown network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" successfully" Dec 13 02:35:03.132359 containerd[1492]: time="2024-12-13T02:35:03.131878261Z" level=info msg="StopPodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" returns successfully" Dec 13 02:35:03.135569 containerd[1492]: time="2024-12-13T02:35:03.134771711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lx698,Uid:274c04a1-3774-49b6-9c70-53365ee4ce31,Namespace:kube-system,Attempt:1,}" Dec 13 02:35:03.136014 systemd[1]: run-netns-cni\x2d47b58507\x2d8b16\x2d7e6a\x2d7606\x2d4b21cc44c3f6.mount: Deactivated successfully. Dec 13 02:35:03.312282 systemd-networkd[1394]: califdbc1a6c24c: Link UP Dec 13 02:35:03.312685 systemd-networkd[1394]: califdbc1a6c24c: Gained carrier Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.211 [INFO][4223] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0 coredns-7db6d8ff4d- kube-system 274c04a1-3774-49b6-9c70-53365ee4ce31 769 0 2024-12-13 02:34:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-b-5cf67d135c coredns-7db6d8ff4d-lx698 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califdbc1a6c24c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.211 [INFO][4223] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.252 [INFO][4236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" HandleID="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" 
Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.270 [INFO][4236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" HandleID="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319070), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-b-5cf67d135c", "pod":"coredns-7db6d8ff4d-lx698", "timestamp":"2024-12-13 02:35:03.252931743 +0000 UTC"}, Hostname:"ci-4081-2-1-b-5cf67d135c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.270 [INFO][4236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.270 [INFO][4236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.271 [INFO][4236] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-5cf67d135c' Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.272 [INFO][4236] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.277 [INFO][4236] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.282 [INFO][4236] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.284 [INFO][4236] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.287 [INFO][4236] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.287 [INFO][4236] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.289 [INFO][4236] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6 Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.293 [INFO][4236] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.299 [INFO][4236] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.131/26] block=192.168.30.128/26 handle="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.299 [INFO][4236] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.131/26] 
handle="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.299 [INFO][4236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:03.337018 containerd[1492]: 2024-12-13 02:35:03.300 [INFO][4236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.131/26] IPv6=[] ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" HandleID="k8s-pod-network.01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.338442 containerd[1492]: 2024-12-13 02:35:03.308 [INFO][4223] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"274c04a1-3774-49b6-9c70-53365ee4ce31", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"", Pod:"coredns-7db6d8ff4d-lx698", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdbc1a6c24c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:03.338442 containerd[1492]: 2024-12-13 02:35:03.308 [INFO][4223] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.131/32] ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.338442 containerd[1492]: 2024-12-13 02:35:03.308 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdbc1a6c24c ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.338442 
containerd[1492]: 2024-12-13 02:35:03.314 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.338442 containerd[1492]: 2024-12-13 02:35:03.316 [INFO][4223] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"274c04a1-3774-49b6-9c70-53365ee4ce31", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6", Pod:"coredns-7db6d8ff4d-lx698", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdbc1a6c24c", MAC:"b2:ed:bd:f3:93:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:03.338442 containerd[1492]: 2024-12-13 02:35:03.328 [INFO][4223] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lx698" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:03.384194 containerd[1492]: time="2024-12-13T02:35:03.383898615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:35:03.384194 containerd[1492]: time="2024-12-13T02:35:03.383971934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:35:03.384194 containerd[1492]: time="2024-12-13T02:35:03.383999365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:03.384194 containerd[1492]: time="2024-12-13T02:35:03.384083895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:03.421226 systemd[1]: Started cri-containerd-01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6.scope - libcontainer container 01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6. Dec 13 02:35:03.482285 containerd[1492]: time="2024-12-13T02:35:03.482233185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lx698,Uid:274c04a1-3774-49b6-9c70-53365ee4ce31,Namespace:kube-system,Attempt:1,} returns sandbox id \"01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6\"" Dec 13 02:35:03.493498 containerd[1492]: time="2024-12-13T02:35:03.493378114Z" level=info msg="CreateContainer within sandbox \"01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:35:03.512995 containerd[1492]: time="2024-12-13T02:35:03.512940550Z" level=info msg="CreateContainer within sandbox \"01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba33b19f7ed7a0141021597163a433cd80c3d215784e9e0b6dc5562dc9baab62\"" Dec 13 02:35:03.514589 containerd[1492]: time="2024-12-13T02:35:03.513792971Z" level=info msg="StartContainer for \"ba33b19f7ed7a0141021597163a433cd80c3d215784e9e0b6dc5562dc9baab62\"" Dec 13 02:35:03.554262 systemd[1]: Started cri-containerd-ba33b19f7ed7a0141021597163a433cd80c3d215784e9e0b6dc5562dc9baab62.scope - libcontainer container ba33b19f7ed7a0141021597163a433cd80c3d215784e9e0b6dc5562dc9baab62. 
Dec 13 02:35:03.597040 containerd[1492]: time="2024-12-13T02:35:03.596914118Z" level=info msg="StartContainer for \"ba33b19f7ed7a0141021597163a433cd80c3d215784e9e0b6dc5562dc9baab62\" returns successfully"
Dec 13 02:35:03.611164 containerd[1492]: time="2024-12-13T02:35:03.611064650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:35:03.611927 containerd[1492]: time="2024-12-13T02:35:03.611783791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Dec 13 02:35:03.613147 containerd[1492]: time="2024-12-13T02:35:03.612939946Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:35:03.615716 containerd[1492]: time="2024-12-13T02:35:03.615615385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:35:03.617131 containerd[1492]: time="2024-12-13T02:35:03.616787951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.049384956s"
Dec 13 02:35:03.617131 containerd[1492]: time="2024-12-13T02:35:03.616829781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Dec 13 02:35:03.625402 containerd[1492]: time="2024-12-13T02:35:03.625202771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 02:35:03.627251 containerd[1492]: time="2024-12-13T02:35:03.627186941Z" level=info msg="CreateContainer within sandbox \"a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Dec 13 02:35:03.640190 containerd[1492]: time="2024-12-13T02:35:03.640133899Z" level=info msg="CreateContainer within sandbox \"a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"186cd5b0ac282e0de9a27b0721a9670544dbbf6500d74c73f62aa9c789ebce99\""
Dec 13 02:35:03.641015 containerd[1492]: time="2024-12-13T02:35:03.640959019Z" level=info msg="StartContainer for \"186cd5b0ac282e0de9a27b0721a9670544dbbf6500d74c73f62aa9c789ebce99\""
Dec 13 02:35:03.684291 systemd[1]: Started cri-containerd-186cd5b0ac282e0de9a27b0721a9670544dbbf6500d74c73f62aa9c789ebce99.scope - libcontainer container 186cd5b0ac282e0de9a27b0721a9670544dbbf6500d74c73f62aa9c789ebce99.
Dec 13 02:35:03.739400 containerd[1492]: time="2024-12-13T02:35:03.739323986Z" level=info msg="StartContainer for \"186cd5b0ac282e0de9a27b0721a9670544dbbf6500d74c73f62aa9c789ebce99\" returns successfully"
Dec 13 02:35:04.020246 containerd[1492]: time="2024-12-13T02:35:04.020206661Z" level=info msg="StopPodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\""
Dec 13 02:35:04.027324 containerd[1492]: time="2024-12-13T02:35:04.027071256Z" level=info msg="StopPodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\""
Dec 13 02:35:04.028929 containerd[1492]: time="2024-12-13T02:35:04.028005102Z" level=info msg="StopPodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\""
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.126 [INFO][4416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.126 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" iface="eth0" netns="/var/run/netns/cni-477ae78d-7ed7-0c1b-6d05-665a21fab4ee"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.127 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" iface="eth0" netns="/var/run/netns/cni-477ae78d-7ed7-0c1b-6d05-665a21fab4ee"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.127 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" iface="eth0" netns="/var/run/netns/cni-477ae78d-7ed7-0c1b-6d05-665a21fab4ee"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.127 [INFO][4416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.127 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.180 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.182 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.182 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.197 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.197 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.206 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:35:04.221652 containerd[1492]: 2024-12-13 02:35:04.210 [INFO][4416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1"
Dec 13 02:35:04.222624 containerd[1492]: time="2024-12-13T02:35:04.222357219Z" level=info msg="TearDown network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" successfully"
Dec 13 02:35:04.222624 containerd[1492]: time="2024-12-13T02:35:04.222392807Z" level=info msg="StopPodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" returns successfully"
Dec 13 02:35:04.227210 containerd[1492]: time="2024-12-13T02:35:04.227071821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-zpn8r,Uid:734b07b3-7e7c-45ff-9b3d-412416c83498,Namespace:calico-apiserver,Attempt:1,}"
Dec 13 02:35:04.229874 systemd[1]: run-netns-cni\x2d477ae78d\x2d7ed7\x2d0c1b\x2d6d05\x2d665a21fab4ee.mount: Deactivated successfully.
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.176 [INFO][4425] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.178 [INFO][4425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" iface="eth0" netns="/var/run/netns/cni-63ddd6bb-0db1-01a7-63f9-9e0bac95ad5c"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.179 [INFO][4425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" iface="eth0" netns="/var/run/netns/cni-63ddd6bb-0db1-01a7-63f9-9e0bac95ad5c"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.180 [INFO][4425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" iface="eth0" netns="/var/run/netns/cni-63ddd6bb-0db1-01a7-63f9-9e0bac95ad5c"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.180 [INFO][4425] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.180 [INFO][4425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.256 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.260 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.260 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.280 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.281 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0"
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.284 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:35:04.307224 containerd[1492]: 2024-12-13 02:35:04.292 [INFO][4425] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2"
Dec 13 02:35:04.311746 containerd[1492]: time="2024-12-13T02:35:04.310484273Z" level=info msg="TearDown network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" successfully"
Dec 13 02:35:04.311746 containerd[1492]: time="2024-12-13T02:35:04.310503889Z" level=info msg="StopPodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" returns successfully"
Dec 13 02:35:04.314726 systemd[1]: run-netns-cni\x2d63ddd6bb\x2d0db1\x2d01a7\x2d63f9\x2d9e0bac95ad5c.mount: Deactivated successfully.
Dec 13 02:35:04.318755 containerd[1492]: time="2024-12-13T02:35:04.318384987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hplf,Uid:734a4b30-7cd7-4742-a781-37649c45d07d,Namespace:kube-system,Attempt:1,}"
Dec 13 02:35:04.324411 kubelet[2719]: I1213 02:35:04.324366 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lx698" podStartSLOduration=36.324346807 podStartE2EDuration="36.324346807s" podCreationTimestamp="2024-12-13 02:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:35:04.322366704 +0000 UTC m=+50.382850120" watchObservedRunningTime="2024-12-13 02:35:04.324346807 +0000 UTC m=+50.384830193"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.142 [INFO][4421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.142 [INFO][4421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" iface="eth0" netns="/var/run/netns/cni-0e0397f3-8484-1334-90e4-35463853d441"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.148 [INFO][4421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" iface="eth0" netns="/var/run/netns/cni-0e0397f3-8484-1334-90e4-35463853d441"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.159 [INFO][4421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" iface="eth0" netns="/var/run/netns/cni-0e0397f3-8484-1334-90e4-35463853d441"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.159 [INFO][4421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.159 [INFO][4421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.261 [INFO][4445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.262 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.292 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.309 [WARNING][4445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.309 [INFO][4445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0"
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.312 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:35:04.331550 containerd[1492]: 2024-12-13 02:35:04.322 [INFO][4421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63"
Dec 13 02:35:04.335731 systemd[1]: run-netns-cni\x2d0e0397f3\x2d8484\x2d1334\x2d90e4\x2d35463853d441.mount: Deactivated successfully.
Dec 13 02:35:04.339367 containerd[1492]: time="2024-12-13T02:35:04.339146422Z" level=info msg="TearDown network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" successfully"
Dec 13 02:35:04.339367 containerd[1492]: time="2024-12-13T02:35:04.339217386Z" level=info msg="StopPodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" returns successfully"
Dec 13 02:35:04.339611 kubelet[2719]: I1213 02:35:04.339425 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6694c5f699-hm2qh" podStartSLOduration=24.282471102 podStartE2EDuration="27.339405621s" podCreationTimestamp="2024-12-13 02:34:37 +0000 UTC" firstStartedPulling="2024-12-13 02:35:00.567154486 +0000 UTC m=+46.627637871" lastFinishedPulling="2024-12-13 02:35:03.624089005 +0000 UTC m=+49.684572390" observedRunningTime="2024-12-13 02:35:04.337434536 +0000 UTC m=+50.397917922" watchObservedRunningTime="2024-12-13 02:35:04.339405621 +0000 UTC m=+50.399889007"
Dec 13 02:35:04.345605 containerd[1492]: time="2024-12-13T02:35:04.345495162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f554d884-m8hvz,Uid:bb682486-1ffb-4358-85cb-f917f79cfe39,Namespace:calico-system,Attempt:1,}"
Dec 13 02:35:04.521707 systemd-networkd[1394]: cali9b299b17538: Link UP
Dec 13 02:35:04.522939 systemd-networkd[1394]: cali9b299b17538: Gained carrier
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.396 [INFO][4460] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0 calico-apiserver-6694c5f699- calico-apiserver 734b07b3-7e7c-45ff-9b3d-412416c83498 783 0 2024-12-13 02:34:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6694c5f699 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-b-5cf67d135c calico-apiserver-6694c5f699-zpn8r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b299b17538 [] []}} ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.397 [INFO][4460] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.456 [INFO][4495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" HandleID="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.469 [INFO][4495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" HandleID="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-b-5cf67d135c", "pod":"calico-apiserver-6694c5f699-zpn8r", "timestamp":"2024-12-13 02:35:04.455386366 +0000 UTC"}, Hostname:"ci-4081-2-1-b-5cf67d135c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.469 [INFO][4495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.470 [INFO][4495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.470 [INFO][4495] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-5cf67d135c'
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.472 [INFO][4495] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.479 [INFO][4495] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.485 [INFO][4495] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.488 [INFO][4495] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.491 [INFO][4495] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.491 [INFO][4495] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.493 [INFO][4495] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.499 [INFO][4495] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.506 [INFO][4495] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.132/26] block=192.168.30.128/26 handle="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.506 [INFO][4495] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.132/26] handle="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" host="ci-4081-2-1-b-5cf67d135c"
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.506 [INFO][4495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 02:35:04.555287 containerd[1492]: 2024-12-13 02:35:04.506 [INFO][4495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.132/26] IPv6=[] ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" HandleID="k8s-pod-network.0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.556852 containerd[1492]: 2024-12-13 02:35:04.512 [INFO][4460] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"734b07b3-7e7c-45ff-9b3d-412416c83498", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"", Pod:"calico-apiserver-6694c5f699-zpn8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b299b17538", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:35:04.556852 containerd[1492]: 2024-12-13 02:35:04.512 [INFO][4460] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.132/32] ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.556852 containerd[1492]: 2024-12-13 02:35:04.512 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b299b17538 ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.556852 containerd[1492]: 2024-12-13 02:35:04.523 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.556852 containerd[1492]: 2024-12-13 02:35:04.524 [INFO][4460] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"734b07b3-7e7c-45ff-9b3d-412416c83498", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5", Pod:"calico-apiserver-6694c5f699-zpn8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b299b17538", MAC:"6e:57:da:36:d2:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 02:35:04.556852 containerd[1492]: 2024-12-13 02:35:04.551 [INFO][4460] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5" Namespace="calico-apiserver" Pod="calico-apiserver-6694c5f699-zpn8r" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0"
Dec 13 02:35:04.595670 containerd[1492]: time="2024-12-13T02:35:04.595384089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:35:04.595670 containerd[1492]: time="2024-12-13T02:35:04.595434795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:35:04.595670 containerd[1492]: time="2024-12-13T02:35:04.595444533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:35:04.595670 containerd[1492]: time="2024-12-13T02:35:04.595517070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:35:04.615494 systemd[1]: Started cri-containerd-0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5.scope - libcontainer container 0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5.
Dec 13 02:35:04.653865 systemd-networkd[1394]: cali04e085291ad: Link UP Dec 13 02:35:04.660066 systemd-networkd[1394]: cali04e085291ad: Gained carrier Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.445 [INFO][4470] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0 coredns-7db6d8ff4d- kube-system 734a4b30-7cd7-4742-a781-37649c45d07d 785 0 2024-12-13 02:34:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-b-5cf67d135c coredns-7db6d8ff4d-5hplf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04e085291ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.446 [INFO][4470] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.527 [INFO][4506] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" HandleID="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.554 [INFO][4506] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" HandleID="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000517d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-b-5cf67d135c", "pod":"coredns-7db6d8ff4d-5hplf", "timestamp":"2024-12-13 02:35:04.527690921 +0000 UTC"}, Hostname:"ci-4081-2-1-b-5cf67d135c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.554 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.554 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
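The systemd-networkd "Link UP" / "Gained carrier" pair at the top of this stretch marks the host-side veth for the coredns pod coming alive right after Calico created it. The same state can be read back through the netlink library that Calico's dataplane code builds on; a small illustrative check, with the interface name taken from the log:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth created for the coredns pod, per the log above.
	link, err := netlink.LinkByName("cali04e085291ad")
	if err != nil {
		log.Fatal(err)
	}
	attrs := link.Attrs()
	// OperState mirrors what systemd-networkd reports as carrier.
	fmt.Printf("%s: index=%d state=%s mtu=%d\n",
		attrs.Name, attrs.Index, attrs.OperState, attrs.MTU)
}
```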
Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.554 [INFO][4506] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-5cf67d135c' Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.558 [INFO][4506] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.568 [INFO][4506] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.579 [INFO][4506] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.581 [INFO][4506] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.585 [INFO][4506] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.585 [INFO][4506] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.587 [INFO][4506] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.595 [INFO][4506] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.605 [INFO][4506] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.133/26] block=192.168.30.128/26 handle="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.605 [INFO][4506] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.133/26] handle="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.605 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
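The [4506] IPAM entries walk Calico's host-affine block allocation: under the host-wide lock, look up the blocks affine to this node, try 192.168.30.128/26, load it, take the next free ordinal, record a handle, and write the block back to claim the address (here 192.168.30.133). A minimal sketch of that allocation step, assuming a toy map-based block rather than Calico's real data model:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a toy stand-in for a Calico IPAM block: a /26 CIDR plus a
// used-slot map keyed by ordinal, valued by allocation handle.
type block struct {
	cidr net.IPNet
	used map[int]string
}

var ipamLock sync.Mutex // the "host-wide IPAM lock" from the log

// assignFromBlock claims the lowest free ordinal in the block for the
// given handle and returns the resulting address. A /26 holds 64 IPs.
func assignFromBlock(b *block, handle string) (net.IP, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock() // "Released host-wide IPAM lock"
	for ord := 0; ord < 64; ord++ {
		if _, taken := b.used[ord]; taken {
			continue
		}
		b.used[ord] = handle // "Writing block in order to claim IPs"
		ip := make(net.IP, 4)
		copy(ip, b.cidr.IP.To4())
		ip[3] += byte(ord)
		return ip, nil
	}
	return nil, fmt.Errorf("block %s is full", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.30.128/26")
	b := &block{cidr: *cidr, used: map[int]string{
		// ordinals 0-4 (.128-.132) already claimed by earlier pods
		0: "x", 1: "x", 2: "x", 3: "x", 4: "x",
	}}
	ip, _ := assignFromBlock(b, "k8s-pod-network.e6b04b34e214")
	fmt.Println(ip) // 192.168.30.133, matching the log
}
```

The handle-before-block ordering visible in the log ("Creating new handle", then "Writing block in order to claim IPs") means a crash in between leaks at most an unused handle; it can never leave two pods holding one address.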
Dec 13 02:35:04.714480 containerd[1492]: 2024-12-13 02:35:04.605 [INFO][4506] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.133/26] IPv6=[] ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" HandleID="k8s-pod-network.e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.715069 containerd[1492]: 2024-12-13 02:35:04.629 [INFO][4470] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"734a4b30-7cd7-4742-a781-37649c45d07d", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"", Pod:"coredns-7db6d8ff4d-5hplf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e085291ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:04.715069 containerd[1492]: 2024-12-13 02:35:04.631 [INFO][4470] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.133/32] ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.715069 containerd[1492]: 2024-12-13 02:35:04.631 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04e085291ad ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.715069 containerd[1492]: 2024-12-13 02:35:04.665 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" 
WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.715069 containerd[1492]: 2024-12-13 02:35:04.666 [INFO][4470] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"734a4b30-7cd7-4742-a781-37649c45d07d", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c", Pod:"coredns-7db6d8ff4d-5hplf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e085291ad", MAC:"52:ef:bf:38:31:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:04.715069 containerd[1492]: 2024-12-13 02:35:04.698 [INFO][4470] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5hplf" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:04.720994 containerd[1492]: time="2024-12-13T02:35:04.720920216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c5f699-zpn8r,Uid:734b07b3-7e7c-45ff-9b3d-412416c83498,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5\"" Dec 13 02:35:04.733647 containerd[1492]: time="2024-12-13T02:35:04.733597469Z" level=info msg="CreateContainer within sandbox \"0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:35:04.737879 systemd-networkd[1394]: cali1de69a59dea: Link UP Dec 13 02:35:04.739562 systemd-networkd[1394]: cali1de69a59dea: Gained carrier Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.452 [INFO][4481] cni-plugin/plugin.go 325: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0 calico-kube-controllers-59f554d884- calico-system bb682486-1ffb-4358-85cb-f917f79cfe39 784 0 2024-12-13 02:34:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59f554d884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-b-5cf67d135c calico-kube-controllers-59f554d884-m8hvz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1de69a59dea [] []}} ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.452 [INFO][4481] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.543 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" HandleID="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.562 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" HandleID="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-b-5cf67d135c", "pod":"calico-kube-controllers-59f554d884-m8hvz", "timestamp":"2024-12-13 02:35:04.543633236 +0000 UTC"}, Hostname:"ci-4081-2-1-b-5cf67d135c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.562 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.606 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.607 [INFO][4510] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-b-5cf67d135c' Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.611 [INFO][4510] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.630 [INFO][4510] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.661 [INFO][4510] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.669 [INFO][4510] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.684 [INFO][4510] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.685 [INFO][4510] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.693 [INFO][4510] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.704 [INFO][4510] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.723 [INFO][4510] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.134/26] block=192.168.30.128/26 handle="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.723 [INFO][4510] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.134/26] handle="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" host="ci-4081-2-1-b-5cf67d135c" Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.723 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
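The same affinity walk repeats for [4510], and the timestamps show the two ADDs racing: [4510] logged "About to acquire host-wide IPAM lock" at 02:35:04.562 but only acquired it at .606, immediately after [4506] released it at .605, so it drew the next free ordinal (.134 after .133). A toy demonstration of that serialization, with a mutex standing in for the host-wide lock:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator hands out block ordinals; the mutex plays the role of the
// host-wide IPAM lock, so two concurrent CNI ADDs can never claim the
// same slot.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign() int {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock"
	defer a.mu.Unlock() // "Released host-wide IPAM lock"
	ord := a.next
	a.next++
	return ord
}

func main() {
	a := &allocator{next: 5} // .133 is ordinal 5 in 192.168.30.128/26
	results := make(chan int, 2)
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two racing ADDs, like [4506] and [4510]
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- a.assign()
		}()
	}
	wg.Wait()
	close(results)
	for ord := range results {
		fmt.Printf("192.168.30.%d/26\n", 128+ord) // .133 and .134, in either order
	}
}
```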
Dec 13 02:35:04.758430 containerd[1492]: 2024-12-13 02:35:04.723 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.134/26] IPv6=[] ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" HandleID="k8s-pod-network.3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:04.759432 containerd[1492]: 2024-12-13 02:35:04.733 [INFO][4481] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0", GenerateName:"calico-kube-controllers-59f554d884-", Namespace:"calico-system", SelfLink:"", UID:"bb682486-1ffb-4358-85cb-f917f79cfe39", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f554d884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"", Pod:"calico-kube-controllers-59f554d884-m8hvz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de69a59dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:04.759432 containerd[1492]: 2024-12-13 02:35:04.733 [INFO][4481] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.134/32] ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:04.759432 containerd[1492]: 2024-12-13 02:35:04.733 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1de69a59dea ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:04.759432 containerd[1492]: 2024-12-13 02:35:04.740 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 
02:35:04.759432 containerd[1492]: 2024-12-13 02:35:04.741 [INFO][4481] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0", GenerateName:"calico-kube-controllers-59f554d884-", Namespace:"calico-system", SelfLink:"", UID:"bb682486-1ffb-4358-85cb-f917f79cfe39", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f554d884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f", Pod:"calico-kube-controllers-59f554d884-m8hvz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de69a59dea", MAC:"de:c9:ec:c1:7f:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:04.759432 containerd[1492]: 2024-12-13 02:35:04.750 [INFO][4481] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f" Namespace="calico-system" Pod="calico-kube-controllers-59f554d884-m8hvz" WorkloadEndpoint="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:04.773487 containerd[1492]: time="2024-12-13T02:35:04.773448268Z" level=info msg="CreateContainer within sandbox \"0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"12c3f3719f2c0c27a2d2af46666acdaea2536d7c77c4c5a5163923d335d38f1c\"" Dec 13 02:35:04.776828 containerd[1492]: time="2024-12-13T02:35:04.775747715Z" level=info msg="StartContainer for \"12c3f3719f2c0c27a2d2af46666acdaea2536d7c77c4c5a5163923d335d38f1c\"" Dec 13 02:35:04.798353 containerd[1492]: time="2024-12-13T02:35:04.798253786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:35:04.798891 containerd[1492]: time="2024-12-13T02:35:04.798850203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:35:04.801047 containerd[1492]: time="2024-12-13T02:35:04.800788107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:04.801047 containerd[1492]: time="2024-12-13T02:35:04.800966805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:04.821349 containerd[1492]: time="2024-12-13T02:35:04.821080936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:35:04.821960 containerd[1492]: time="2024-12-13T02:35:04.821413024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:35:04.821960 containerd[1492]: time="2024-12-13T02:35:04.821466334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:04.821960 containerd[1492]: time="2024-12-13T02:35:04.821845541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:35:04.831941 systemd[1]: Started cri-containerd-e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c.scope - libcontainer container e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c. Dec 13 02:35:04.850243 systemd[1]: Started cri-containerd-12c3f3719f2c0c27a2d2af46666acdaea2536d7c77c4c5a5163923d335d38f1c.scope - libcontainer container 12c3f3719f2c0c27a2d2af46666acdaea2536d7c77c4c5a5163923d335d38f1c. Dec 13 02:35:04.866285 systemd[1]: Started cri-containerd-3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f.scope - libcontainer container 3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f. Dec 13 02:35:04.906633 containerd[1492]: time="2024-12-13T02:35:04.906507966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hplf,Uid:734a4b30-7cd7-4742-a781-37649c45d07d,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c\"" Dec 13 02:35:04.911379 containerd[1492]: time="2024-12-13T02:35:04.911336833Z" level=info msg="CreateContainer within sandbox \"e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:35:04.936985 containerd[1492]: time="2024-12-13T02:35:04.936936092Z" level=info msg="CreateContainer within sandbox \"e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf866d644d067cfd0a92091a9b84cdbdb7ab1633974ea3c37162d52a28d7c65e\"" Dec 13 02:35:04.939863 containerd[1492]: time="2024-12-13T02:35:04.939840974Z" level=info msg="StartContainer for \"bf866d644d067cfd0a92091a9b84cdbdb7ab1633974ea3c37162d52a28d7c65e\"" Dec 13 02:35:04.949051 containerd[1492]: time="2024-12-13T02:35:04.949011328Z" level=info msg="StartContainer for \"12c3f3719f2c0c27a2d2af46666acdaea2536d7c77c4c5a5163923d335d38f1c\" returns successfully" Dec 13 02:35:04.977185 containerd[1492]: time="2024-12-13T02:35:04.976944248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59f554d884-m8hvz,Uid:bb682486-1ffb-4358-85cb-f917f79cfe39,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f\"" Dec 13 02:35:05.010896 systemd[1]: Started cri-containerd-bf866d644d067cfd0a92091a9b84cdbdb7ab1633974ea3c37162d52a28d7c65e.scope - libcontainer 
container bf866d644d067cfd0a92091a9b84cdbdb7ab1633974ea3c37162d52a28d7c65e. Dec 13 02:35:05.050380 containerd[1492]: time="2024-12-13T02:35:05.050325464Z" level=info msg="StartContainer for \"bf866d644d067cfd0a92091a9b84cdbdb7ab1633974ea3c37162d52a28d7c65e\" returns successfully" Dec 13 02:35:05.267308 systemd-networkd[1394]: califdbc1a6c24c: Gained IPv6LL Dec 13 02:35:05.326411 kubelet[2719]: I1213 02:35:05.324594 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:35:05.333883 kubelet[2719]: I1213 02:35:05.333730 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5hplf" podStartSLOduration=37.333716342 podStartE2EDuration="37.333716342s" podCreationTimestamp="2024-12-13 02:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:35:05.333091159 +0000 UTC m=+51.393574546" watchObservedRunningTime="2024-12-13 02:35:05.333716342 +0000 UTC m=+51.394199728" Dec 13 02:35:05.571478 kubelet[2719]: I1213 02:35:05.570956 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6694c5f699-zpn8r" podStartSLOduration=28.570938792 podStartE2EDuration="28.570938792s" podCreationTimestamp="2024-12-13 02:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:35:05.383317977 +0000 UTC m=+51.443801362" watchObservedRunningTime="2024-12-13 02:35:05.570938792 +0000 UTC m=+51.631422179" Dec 13 02:35:05.596541 systemd[1]: run-containerd-runc-k8s.io-15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439-runc.k1COcb.mount: Deactivated successfully. 
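The kubelet's pod_startup_latency_tracker lines are arithmetic over the timestamps they print: both pull timestamps for coredns are the zero value (0001-01-01), so no pull time is deducted and the printed podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick verification of the 37.333716342s figure:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet entry for coredns-7db6d8ff4d-5hplf.
	created, _ := time.Parse(time.RFC3339, "2024-12-13T02:34:28Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:35:05.333716342Z")

	// With zero-valued pull timestamps, no pull time is subtracted.
	fmt.Println(running.Sub(created).Seconds()) // 37.333716342, matching podStartSLOduration
}
```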
Dec 13 02:35:05.689802 containerd[1492]: time="2024-12-13T02:35:05.689519105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:05.690724 containerd[1492]: time="2024-12-13T02:35:05.690538412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 02:35:05.691631 containerd[1492]: time="2024-12-13T02:35:05.691506922Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:05.694851 containerd[1492]: time="2024-12-13T02:35:05.694818521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:05.695659 containerd[1492]: time="2024-12-13T02:35:05.695624685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.070385726s" Dec 13 02:35:05.695659 containerd[1492]: time="2024-12-13T02:35:05.695651625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 02:35:05.697466 containerd[1492]: time="2024-12-13T02:35:05.696879295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 02:35:05.699339 containerd[1492]: time="2024-12-13T02:35:05.699312193Z" level=info msg="CreateContainer within sandbox \"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:35:05.721772 containerd[1492]: time="2024-12-13T02:35:05.721681761Z" level=info msg="CreateContainer within sandbox \"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c2fc1fd6b8d19419c11408272d7bb6891f0e08f15379f79d80cd68a7314267fa\"" Dec 13 02:35:05.723639 containerd[1492]: time="2024-12-13T02:35:05.723599215Z" level=info msg="StartContainer for \"c2fc1fd6b8d19419c11408272d7bb6891f0e08f15379f79d80cd68a7314267fa\"" Dec 13 02:35:05.763251 systemd[1]: Started cri-containerd-c2fc1fd6b8d19419c11408272d7bb6891f0e08f15379f79d80cd68a7314267fa.scope - libcontainer container c2fc1fd6b8d19419c11408272d7bb6891f0e08f15379f79d80cd68a7314267fa. 
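The PullImage / CreateContainer / StartContainer messages, and the cri-containerd-<id>.scope units systemd starts for each container, come from the kubelet driving containerd over CRI. The same pull-create-start sequence can be reproduced against the containerd 1.x Go client directly; a sketch using the public client API, with the socket path, container ID, and error handling simplified:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Resolve the ref and unpack a snapshot, as in the PullImage entries.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: a fresh snapshot plus an OCI spec from the image config.
	container, err := client.NewContainer(ctx, "calico-csi-demo",
		containerd.WithNewSnapshot("calico-csi-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: the task runs under a runc shim; under the CRI
	// plugin the matching cgroup is what systemd reports above as a
	// cri-containerd-<id>.scope unit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```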
Dec 13 02:35:05.803417 containerd[1492]: time="2024-12-13T02:35:05.803378134Z" level=info msg="StartContainer for \"c2fc1fd6b8d19419c11408272d7bb6891f0e08f15379f79d80cd68a7314267fa\" returns successfully" Dec 13 02:35:05.971298 systemd-networkd[1394]: cali9b299b17538: Gained IPv6LL Dec 13 02:35:06.035382 systemd-networkd[1394]: cali1de69a59dea: Gained IPv6LL Dec 13 02:35:06.291295 systemd-networkd[1394]: cali04e085291ad: Gained IPv6LL Dec 13 02:35:06.322787 kubelet[2719]: I1213 02:35:06.322760 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:35:08.416859 containerd[1492]: time="2024-12-13T02:35:08.416790170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 02:35:08.417982 containerd[1492]: time="2024-12-13T02:35:08.417196917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:08.421258 containerd[1492]: time="2024-12-13T02:35:08.419880026Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:08.424209 containerd[1492]: time="2024-12-13T02:35:08.424176873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:08.424928 containerd[1492]: time="2024-12-13T02:35:08.424903256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.728000316s" Dec 13 02:35:08.425012 containerd[1492]: time="2024-12-13T02:35:08.424995460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 02:35:08.426696 containerd[1492]: time="2024-12-13T02:35:08.426674221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:35:08.445806 containerd[1492]: time="2024-12-13T02:35:08.445771948Z" level=info msg="CreateContainer within sandbox \"3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:35:08.461372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655344946.mount: Deactivated successfully. 
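The pull messages carry enough to estimate transfer rate; for the kube-controllers image, bytes read over the reported pull time works out to roughly 12.5 MB/s (a derived figure, not something the log states):

```go
package main

import "fmt"

func main() {
	// Figures straight from the kube-controllers pull messages above.
	bytesRead := 34141192.0 // "active requests=0, bytes read=34141192"
	seconds := 2.728000316  // "in 2.728000316s"
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~12.5 MB/s
}
```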
Dec 13 02:35:08.463089 containerd[1492]: time="2024-12-13T02:35:08.462636428Z" level=info msg="CreateContainer within sandbox \"3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa\"" Dec 13 02:35:08.464418 containerd[1492]: time="2024-12-13T02:35:08.463454082Z" level=info msg="StartContainer for \"39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa\"" Dec 13 02:35:08.495225 systemd[1]: Started cri-containerd-39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa.scope - libcontainer container 39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa. Dec 13 02:35:08.543524 containerd[1492]: time="2024-12-13T02:35:08.543481152Z" level=info msg="StartContainer for \"39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa\" returns successfully" Dec 13 02:35:09.350405 kubelet[2719]: I1213 02:35:09.349849 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59f554d884-m8hvz" podStartSLOduration=28.905715775 podStartE2EDuration="32.349828139s" podCreationTimestamp="2024-12-13 02:34:37 +0000 UTC" firstStartedPulling="2024-12-13 02:35:04.982301696 +0000 UTC m=+51.042785082" lastFinishedPulling="2024-12-13 02:35:08.42641406 +0000 UTC m=+54.486897446" observedRunningTime="2024-12-13 02:35:09.347730367 +0000 UTC m=+55.408213753" watchObservedRunningTime="2024-12-13 02:35:09.349828139 +0000 UTC m=+55.410311525" Dec 13 02:35:10.114778 containerd[1492]: time="2024-12-13T02:35:10.114728280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:10.116104 containerd[1492]: time="2024-12-13T02:35:10.116037122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 02:35:10.118123 containerd[1492]: time="2024-12-13T02:35:10.116945357Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:10.119284 containerd[1492]: time="2024-12-13T02:35:10.119253727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:35:10.120234 containerd[1492]: time="2024-12-13T02:35:10.119689089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.692988417s" Dec 13 02:35:10.120234 containerd[1492]: time="2024-12-13T02:35:10.119718655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 02:35:10.123082 containerd[1492]: time="2024-12-13T02:35:10.123054124Z" level=info msg="CreateContainer within sandbox \"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:35:10.156548 containerd[1492]: time="2024-12-13T02:35:10.156229187Z" level=info msg="CreateContainer within sandbox \"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"161d8ea4040b9452df49f8b735035f9dcfe05ffd1a34422a2745c0e09c0c7202\"" Dec 13 02:35:10.156832 containerd[1492]: time="2024-12-13T02:35:10.156806869Z" level=info msg="StartContainer for \"161d8ea4040b9452df49f8b735035f9dcfe05ffd1a34422a2745c0e09c0c7202\"" Dec 13 02:35:10.191223 systemd[1]: Started cri-containerd-161d8ea4040b9452df49f8b735035f9dcfe05ffd1a34422a2745c0e09c0c7202.scope - libcontainer container 161d8ea4040b9452df49f8b735035f9dcfe05ffd1a34422a2745c0e09c0c7202. Dec 13 02:35:10.219953 containerd[1492]: time="2024-12-13T02:35:10.219913190Z" level=info msg="StartContainer for \"161d8ea4040b9452df49f8b735035f9dcfe05ffd1a34422a2745c0e09c0c7202\" returns successfully" Dec 13 02:35:10.347476 kubelet[2719]: I1213 02:35:10.347239 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r8zln" podStartSLOduration=24.500720118 podStartE2EDuration="33.347223534s" podCreationTimestamp="2024-12-13 02:34:37 +0000 UTC" firstStartedPulling="2024-12-13 02:35:01.274414464 +0000 UTC m=+47.334897850" lastFinishedPulling="2024-12-13 02:35:10.120917879 +0000 UTC m=+56.181401266" observedRunningTime="2024-12-13 02:35:10.34701942 +0000 UTC m=+56.407502826" watchObservedRunningTime="2024-12-13 02:35:10.347223534 +0000 UTC m=+56.407706921" Dec 13 02:35:11.284392 kubelet[2719]: I1213 02:35:11.284335 2719 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:35:11.289842 kubelet[2719]: I1213 02:35:11.289808 2719 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:35:14.066248 containerd[1492]: time="2024-12-13T02:35:14.066196534Z" level=info msg="StopPodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\"" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.182 [WARNING][4966] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ace80116-5126-48a5-986c-e83257cecc61", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05", Pod:"csi-node-driver-r8zln", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88cd9b167ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.183 [INFO][4966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.183 [INFO][4966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" iface="eth0" netns="" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.184 [INFO][4966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.184 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.205 [INFO][4972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.206 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.206 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.211 [WARNING][4972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.211 [INFO][4972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.213 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.220897 containerd[1492]: 2024-12-13 02:35:14.218 [INFO][4966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.222357 containerd[1492]: time="2024-12-13T02:35:14.220931834Z" level=info msg="TearDown network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" successfully" Dec 13 02:35:14.222357 containerd[1492]: time="2024-12-13T02:35:14.220954597Z" level=info msg="StopPodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" returns successfully" Dec 13 02:35:14.225023 containerd[1492]: time="2024-12-13T02:35:14.224991768Z" level=info msg="RemovePodSandbox for \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\"" Dec 13 02:35:14.226687 containerd[1492]: time="2024-12-13T02:35:14.226664455Z" level=info msg="Forcibly stopping sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\"" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.266 [WARNING][4991] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ace80116-5126-48a5-986c-e83257cecc61", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"37208292425e869ba41135eaae4a09c5e1afb3250a52bd66053800c8bd218d05", Pod:"csi-node-driver-r8zln", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88cd9b167ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.266 [INFO][4991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.266 [INFO][4991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" iface="eth0" netns="" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.266 [INFO][4991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.266 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.291 [INFO][4997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.291 [INFO][4997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.291 [INFO][4997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.298 [WARNING][4997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.298 [INFO][4997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" HandleID="k8s-pod-network.de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Workload="ci--4081--2--1--b--5cf67d135c-k8s-csi--node--driver--r8zln-eth0" Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.301 [INFO][4997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.308552 containerd[1492]: 2024-12-13 02:35:14.304 [INFO][4991] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51" Dec 13 02:35:14.308552 containerd[1492]: time="2024-12-13T02:35:14.308552548Z" level=info msg="TearDown network for sandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" successfully" Dec 13 02:35:14.318627 containerd[1492]: time="2024-12-13T02:35:14.317803962Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:35:14.348068 containerd[1492]: time="2024-12-13T02:35:14.347853693Z" level=info msg="RemovePodSandbox \"de99d4aafd6b6ea73e0ddb46c2f1b304f113d5502bb894e3c3d2fb2fdac2ca51\" returns successfully" Dec 13 02:35:14.358166 containerd[1492]: time="2024-12-13T02:35:14.357998463Z" level=info msg="StopPodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\"" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.414 [WARNING][5017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"734b07b3-7e7c-45ff-9b3d-412416c83498", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5", Pod:"calico-apiserver-6694c5f699-zpn8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b299b17538", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.414 [INFO][5017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.414 [INFO][5017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" iface="eth0" netns="" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.414 [INFO][5017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.414 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.444 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.445 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.445 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.459 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.459 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.461 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.466847 containerd[1492]: 2024-12-13 02:35:14.464 [INFO][5017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.466847 containerd[1492]: time="2024-12-13T02:35:14.466782846Z" level=info msg="TearDown network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" successfully" Dec 13 02:35:14.466847 containerd[1492]: time="2024-12-13T02:35:14.466806340Z" level=info msg="StopPodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" returns successfully" Dec 13 02:35:14.469836 containerd[1492]: time="2024-12-13T02:35:14.468395480Z" level=info msg="RemovePodSandbox for \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\"" Dec 13 02:35:14.469836 containerd[1492]: time="2024-12-13T02:35:14.468430868Z" level=info msg="Forcibly stopping sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\"" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.510 [WARNING][5041] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"734b07b3-7e7c-45ff-9b3d-412416c83498", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"0b3faa201731db4948198e2a2ddfec405927ea08ed5b32629bd020bf0ff905b5", Pod:"calico-apiserver-6694c5f699-zpn8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b299b17538", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.511 [INFO][5041] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.511 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" iface="eth0" netns="" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.511 [INFO][5041] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.511 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.535 [INFO][5047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.536 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.536 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.542 [WARNING][5047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.542 [INFO][5047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" HandleID="k8s-pod-network.fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--zpn8r-eth0" Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.543 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.549231 containerd[1492]: 2024-12-13 02:35:14.546 [INFO][5041] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1" Dec 13 02:35:14.549231 containerd[1492]: time="2024-12-13T02:35:14.549140627Z" level=info msg="TearDown network for sandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" successfully" Dec 13 02:35:14.554141 containerd[1492]: time="2024-12-13T02:35:14.554070221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:35:14.554209 containerd[1492]: time="2024-12-13T02:35:14.554141776Z" level=info msg="RemovePodSandbox \"fd1f693b41d947e3a3b4a287af5306a61db320a12ceebd9f796a8a38332c6ca1\" returns successfully" Dec 13 02:35:14.554574 containerd[1492]: time="2024-12-13T02:35:14.554548834Z" level=info msg="StopPodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\"" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.589 [WARNING][5066] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0", GenerateName:"calico-kube-controllers-59f554d884-", Namespace:"calico-system", SelfLink:"", UID:"bb682486-1ffb-4358-85cb-f917f79cfe39", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f554d884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f", Pod:"calico-kube-controllers-59f554d884-m8hvz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de69a59dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.590 [INFO][5066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.590 [INFO][5066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" iface="eth0" netns="" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.590 [INFO][5066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.590 [INFO][5066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.610 [INFO][5072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.610 [INFO][5072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.610 [INFO][5072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.614 [WARNING][5072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.614 [INFO][5072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.615 [INFO][5072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.619885 containerd[1492]: 2024-12-13 02:35:14.617 [INFO][5066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.619885 containerd[1492]: time="2024-12-13T02:35:14.619821605Z" level=info msg="TearDown network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" successfully" Dec 13 02:35:14.619885 containerd[1492]: time="2024-12-13T02:35:14.619851852Z" level=info msg="StopPodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" returns successfully" Dec 13 02:35:14.621802 containerd[1492]: time="2024-12-13T02:35:14.621494794Z" level=info msg="RemovePodSandbox for \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\"" Dec 13 02:35:14.621802 containerd[1492]: time="2024-12-13T02:35:14.621528196Z" level=info msg="Forcibly stopping sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\"" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.653 [WARNING][5090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0", GenerateName:"calico-kube-controllers-59f554d884-", Namespace:"calico-system", SelfLink:"", UID:"bb682486-1ffb-4358-85cb-f917f79cfe39", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59f554d884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"3f5456574432ebe489273dc3a7885c38c9a3a5cd5f7abb97dd48ef85870fe08f", Pod:"calico-kube-controllers-59f554d884-m8hvz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de69a59dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.654 [INFO][5090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.654 [INFO][5090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" iface="eth0" netns="" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.654 [INFO][5090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.654 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.680 [INFO][5097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.681 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.681 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.685 [WARNING][5097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.685 [INFO][5097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" HandleID="k8s-pod-network.6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--kube--controllers--59f554d884--m8hvz-eth0" Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.686 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.690823 containerd[1492]: 2024-12-13 02:35:14.688 [INFO][5090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63" Dec 13 02:35:14.691231 containerd[1492]: time="2024-12-13T02:35:14.690838467Z" level=info msg="TearDown network for sandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" successfully" Dec 13 02:35:14.694017 containerd[1492]: time="2024-12-13T02:35:14.693958927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:35:14.694017 containerd[1492]: time="2024-12-13T02:35:14.694006508Z" level=info msg="RemovePodSandbox \"6b13e4e9d48b532be14c8661e023009567726d8bfd7b5170f771aa179315dc63\" returns successfully" Dec 13 02:35:14.694460 containerd[1492]: time="2024-12-13T02:35:14.694433844Z" level=info msg="StopPodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\"" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.723 [WARNING][5116] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"274c04a1-3774-49b6-9c70-53365ee4ce31", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6", Pod:"coredns-7db6d8ff4d-lx698", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdbc1a6c24c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.723 [INFO][5116] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.723 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" iface="eth0" netns="" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.723 [INFO][5116] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.723 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.743 [INFO][5122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.743 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.743 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.747 [WARNING][5122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.747 [INFO][5122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.748 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.752641 containerd[1492]: 2024-12-13 02:35:14.750 [INFO][5116] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.753482 containerd[1492]: time="2024-12-13T02:35:14.752977201Z" level=info msg="TearDown network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" successfully" Dec 13 02:35:14.753482 containerd[1492]: time="2024-12-13T02:35:14.753002258Z" level=info msg="StopPodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" returns successfully" Dec 13 02:35:14.754295 containerd[1492]: time="2024-12-13T02:35:14.753978280Z" level=info msg="RemovePodSandbox for \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\"" Dec 13 02:35:14.754295 containerd[1492]: time="2024-12-13T02:35:14.754013858Z" level=info msg="Forcibly stopping sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\"" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.803 [WARNING][5140] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"274c04a1-3774-49b6-9c70-53365ee4ce31", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"01c15957129d14ee7fd87ce70371c68f327b71a8ecb497ecfe5ea1d9ce6e0ce6", Pod:"coredns-7db6d8ff4d-lx698", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdbc1a6c24c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.804 [INFO][5140] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.804 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" iface="eth0" netns="" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.804 [INFO][5140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.804 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.829 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.829 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.829 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.834 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.834 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" HandleID="k8s-pod-network.e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--lx698-eth0" Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.835 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.839806 containerd[1492]: 2024-12-13 02:35:14.837 [INFO][5140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996" Dec 13 02:35:14.839806 containerd[1492]: time="2024-12-13T02:35:14.839780051Z" level=info msg="TearDown network for sandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" successfully" Dec 13 02:35:14.848921 containerd[1492]: time="2024-12-13T02:35:14.848872063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:35:14.849008 containerd[1492]: time="2024-12-13T02:35:14.848931726Z" level=info msg="RemovePodSandbox \"e24b35f33bc09dbb6f289533c380ca31157e79b2e3597759167b454d18783996\" returns successfully" Dec 13 02:35:14.860933 containerd[1492]: time="2024-12-13T02:35:14.860866173Z" level=info msg="StopPodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\"" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.905 [WARNING][5167] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"734a4b30-7cd7-4742-a781-37649c45d07d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c", Pod:"coredns-7db6d8ff4d-5hplf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e085291ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.905 [INFO][5167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.905 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" iface="eth0" netns="" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.905 [INFO][5167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.905 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.931 [INFO][5173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.931 [INFO][5173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.931 [INFO][5173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.938 [WARNING][5173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.938 [INFO][5173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.939 [INFO][5173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:14.946227 containerd[1492]: 2024-12-13 02:35:14.943 [INFO][5167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:14.946227 containerd[1492]: time="2024-12-13T02:35:14.946142231Z" level=info msg="TearDown network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" successfully" Dec 13 02:35:14.946227 containerd[1492]: time="2024-12-13T02:35:14.946182078Z" level=info msg="StopPodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" returns successfully" Dec 13 02:35:14.947728 containerd[1492]: time="2024-12-13T02:35:14.947683602Z" level=info msg="RemovePodSandbox for \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\"" Dec 13 02:35:14.947728 containerd[1492]: time="2024-12-13T02:35:14.947715752Z" level=info msg="Forcibly stopping sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\"" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.981 [WARNING][5191] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"734a4b30-7cd7-4742-a781-37649c45d07d", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"e6b04b34e214acf0c859cdb18faebbd5d7b27692f4d9f309994bc14e6b2f328c", Pod:"coredns-7db6d8ff4d-5hplf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e085291ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.981 [INFO][5191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.981 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" iface="eth0" netns="" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.981 [INFO][5191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.981 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.997 [INFO][5197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.997 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:14.997 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:15.003 [WARNING][5197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:15.003 [INFO][5197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" HandleID="k8s-pod-network.b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Workload="ci--4081--2--1--b--5cf67d135c-k8s-coredns--7db6d8ff4d--5hplf-eth0" Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:15.006 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:15.010547 containerd[1492]: 2024-12-13 02:35:15.007 [INFO][5191] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2" Dec 13 02:35:15.011989 containerd[1492]: time="2024-12-13T02:35:15.010605711Z" level=info msg="TearDown network for sandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" successfully" Dec 13 02:35:15.014414 containerd[1492]: time="2024-12-13T02:35:15.014378403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:35:15.014471 containerd[1492]: time="2024-12-13T02:35:15.014433096Z" level=info msg="RemovePodSandbox \"b50315b81f4c6cf69b75c233e3f71b38039f53b0af60aa4aea4537ebcf6afcb2\" returns successfully" Dec 13 02:35:15.015330 containerd[1492]: time="2024-12-13T02:35:15.015301695Z" level=info msg="StopPodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\"" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.046 [WARNING][5215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"48966772-4c3b-4bf8-9f84-e6adfcd1cd76", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce", Pod:"calico-apiserver-6694c5f699-hm2qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9926a45c73a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.046 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.046 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" iface="eth0" netns="" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.046 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.046 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.065 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.065 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.065 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.069 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.070 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.071 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:15.075018 containerd[1492]: 2024-12-13 02:35:15.073 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.077386 containerd[1492]: time="2024-12-13T02:35:15.077345977Z" level=info msg="TearDown network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" successfully" Dec 13 02:35:15.077386 containerd[1492]: time="2024-12-13T02:35:15.077380122Z" level=info msg="StopPodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" returns successfully" Dec 13 02:35:15.077906 containerd[1492]: time="2024-12-13T02:35:15.077874835Z" level=info msg="RemovePodSandbox for \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\"" Dec 13 02:35:15.078082 containerd[1492]: time="2024-12-13T02:35:15.077911103Z" level=info msg="Forcibly stopping sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\"" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.122 [WARNING][5239] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0", GenerateName:"calico-apiserver-6694c5f699-", Namespace:"calico-apiserver", SelfLink:"", UID:"48966772-4c3b-4bf8-9f84-e6adfcd1cd76", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 34, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c5f699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-b-5cf67d135c", ContainerID:"a183e91346c1a9ba10d90fbd822a025bffb7aded4775ec11a2475096a2aa87ce", Pod:"calico-apiserver-6694c5f699-hm2qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9926a45c73a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.123 [INFO][5239] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.123 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" iface="eth0" netns="" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.123 [INFO][5239] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.123 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.141 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.141 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.141 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.146 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.146 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" HandleID="k8s-pod-network.4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Workload="ci--4081--2--1--b--5cf67d135c-k8s-calico--apiserver--6694c5f699--hm2qh-eth0" Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.148 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:35:15.152662 containerd[1492]: 2024-12-13 02:35:15.150 [INFO][5239] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64" Dec 13 02:35:15.153522 containerd[1492]: time="2024-12-13T02:35:15.152706181Z" level=info msg="TearDown network for sandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" successfully" Dec 13 02:35:15.156281 containerd[1492]: time="2024-12-13T02:35:15.156250301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:35:15.156361 containerd[1492]: time="2024-12-13T02:35:15.156306908Z" level=info msg="RemovePodSandbox \"4e05c39dc56e2d63df2cf5458700d4f1a468ebd6a5dec7a16b15d40069342b64\" returns successfully" Dec 13 02:35:15.193350 sshd[3991]: Connection closed by 167.94.145.109 port 53478 [preauth] Dec 13 02:35:15.199669 systemd[1]: sshd@7-78.47.218.196:22-167.94.145.109:53478.service: Deactivated successfully. Dec 13 02:35:33.566911 kubelet[2719]: I1213 02:35:33.566675 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:35:35.350233 systemd[1]: run-containerd-runc-k8s.io-15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439-runc.71Er3M.mount: Deactivated successfully. Dec 13 02:36:18.829598 systemd[1]: run-containerd-runc-k8s.io-39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa-runc.O3JdRp.mount: Deactivated successfully. Dec 13 02:36:35.345626 systemd[1]: run-containerd-runc-k8s.io-15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439-runc.1Z3GOb.mount: Deactivated successfully. Dec 13 02:36:40.883478 systemd[1]: run-containerd-runc-k8s.io-39a5509121b6b12d9557ac0531dd3fbf0be85c9c0143d5a3662f73dd63ebd8fa-runc.Wc9RDW.mount: Deactivated successfully. Dec 13 02:36:55.073431 systemd[1]: Started sshd@8-78.47.218.196:22-147.75.109.163:59630.service - OpenSSH per-connection server daemon (147.75.109.163:59630). Dec 13 02:36:56.099230 sshd[5497]: Accepted publickey for core from 147.75.109.163 port 59630 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:36:56.104708 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:36:56.113969 systemd-logind[1474]: New session 8 of user core. Dec 13 02:36:56.118439 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 02:36:57.258681 sshd[5497]: pam_unix(sshd:session): session closed for user core Dec 13 02:36:57.264613 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. 
Dec 13 02:36:57.266317 systemd[1]: sshd@8-78.47.218.196:22-147.75.109.163:59630.service: Deactivated successfully. Dec 13 02:36:57.270302 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:36:57.272486 systemd-logind[1474]: Removed session 8. Dec 13 02:37:02.425729 systemd[1]: Started sshd@9-78.47.218.196:22-147.75.109.163:58374.service - OpenSSH per-connection server daemon (147.75.109.163:58374). Dec 13 02:37:03.406057 sshd[5513]: Accepted publickey for core from 147.75.109.163 port 58374 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:03.407813 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:03.412718 systemd-logind[1474]: New session 9 of user core. Dec 13 02:37:03.417223 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 02:37:04.151080 sshd[5513]: pam_unix(sshd:session): session closed for user core Dec 13 02:37:04.156167 systemd[1]: sshd@9-78.47.218.196:22-147.75.109.163:58374.service: Deactivated successfully. Dec 13 02:37:04.158950 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:37:04.160144 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:37:04.161660 systemd-logind[1474]: Removed session 9. Dec 13 02:37:09.324239 systemd[1]: Started sshd@10-78.47.218.196:22-147.75.109.163:56860.service - OpenSSH per-connection server daemon (147.75.109.163:56860). Dec 13 02:37:10.332194 sshd[5548]: Accepted publickey for core from 147.75.109.163 port 56860 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:10.333841 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:10.338571 systemd-logind[1474]: New session 10 of user core. Dec 13 02:37:10.345302 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 02:37:11.085434 sshd[5548]: pam_unix(sshd:session): session closed for user core Dec 13 02:37:11.089526 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:37:11.090527 systemd[1]: sshd@10-78.47.218.196:22-147.75.109.163:56860.service: Deactivated successfully. Dec 13 02:37:11.093891 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:37:11.095204 systemd-logind[1474]: Removed session 10. Dec 13 02:37:11.261828 systemd[1]: Started sshd@11-78.47.218.196:22-147.75.109.163:56874.service - OpenSSH per-connection server daemon (147.75.109.163:56874). Dec 13 02:37:12.243087 sshd[5562]: Accepted publickey for core from 147.75.109.163 port 56874 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:12.244891 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:12.249554 systemd-logind[1474]: New session 11 of user core. Dec 13 02:37:12.254235 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 02:37:13.022832 sshd[5562]: pam_unix(sshd:session): session closed for user core Dec 13 02:37:13.026236 systemd[1]: sshd@11-78.47.218.196:22-147.75.109.163:56874.service: Deactivated successfully. Dec 13 02:37:13.028892 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:37:13.032300 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:37:13.034292 systemd-logind[1474]: Removed session 11. Dec 13 02:37:13.189259 systemd[1]: Started sshd@12-78.47.218.196:22-147.75.109.163:56888.service - OpenSSH per-connection server daemon (147.75.109.163:56888). 
Dec 13 02:37:14.163924 sshd[5579]: Accepted publickey for core from 147.75.109.163 port 56888 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:14.165709 sshd[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:14.170169 systemd-logind[1474]: New session 12 of user core. Dec 13 02:37:14.176281 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 02:37:14.904315 sshd[5579]: pam_unix(sshd:session): session closed for user core Dec 13 02:37:14.907793 systemd[1]: sshd@12-78.47.218.196:22-147.75.109.163:56888.service: Deactivated successfully. Dec 13 02:37:14.910050 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:37:14.911387 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:37:14.912876 systemd-logind[1474]: Removed session 12. Dec 13 02:37:20.071989 systemd[1]: Started sshd@13-78.47.218.196:22-147.75.109.163:54432.service - OpenSSH per-connection server daemon (147.75.109.163:54432). Dec 13 02:37:21.062035 sshd[5612]: Accepted publickey for core from 147.75.109.163 port 54432 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:21.063459 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:21.067714 systemd-logind[1474]: New session 13 of user core. Dec 13 02:37:21.069266 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 02:37:21.855969 sshd[5612]: pam_unix(sshd:session): session closed for user core Dec 13 02:37:21.863778 systemd[1]: sshd@13-78.47.218.196:22-147.75.109.163:54432.service: Deactivated successfully. Dec 13 02:37:21.868818 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:37:21.870938 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:37:21.872913 systemd-logind[1474]: Removed session 13. Dec 13 02:37:22.035788 systemd[1]: Started sshd@14-78.47.218.196:22-147.75.109.163:54436.service - OpenSSH per-connection server daemon (147.75.109.163:54436). Dec 13 02:37:23.022892 sshd[5625]: Accepted publickey for core from 147.75.109.163 port 54436 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:23.024578 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:23.030055 systemd-logind[1474]: New session 14 of user core. Dec 13 02:37:23.035256 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 02:37:23.971470 sshd[5625]: pam_unix(sshd:session): session closed for user core Dec 13 02:37:23.977626 systemd[1]: sshd@14-78.47.218.196:22-147.75.109.163:54436.service: Deactivated successfully. Dec 13 02:37:23.980228 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:37:23.982589 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:37:23.984533 systemd-logind[1474]: Removed session 14. Dec 13 02:37:24.139569 systemd[1]: Started sshd@15-78.47.218.196:22-147.75.109.163:54438.service - OpenSSH per-connection server daemon (147.75.109.163:54438). Dec 13 02:37:25.125379 sshd[5636]: Accepted publickey for core from 147.75.109.163 port 54438 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA Dec 13 02:37:25.127559 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:37:25.132697 systemd-logind[1474]: New session 15 of user core. Dec 13 02:37:25.138239 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 02:37:27.743859 sshd[5636]: pam_unix(sshd:session): session closed for user core
Dec 13 02:37:27.751508 systemd[1]: sshd@15-78.47.218.196:22-147.75.109.163:54438.service: Deactivated successfully.
Dec 13 02:37:27.753441 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:37:27.755673 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:37:27.756874 systemd-logind[1474]: Removed session 15.
Dec 13 02:37:27.916346 systemd[1]: Started sshd@16-78.47.218.196:22-147.75.109.163:52176.service - OpenSSH per-connection server daemon (147.75.109.163:52176).
Dec 13 02:37:28.939636 sshd[5656]: Accepted publickey for core from 147.75.109.163 port 52176 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA
Dec 13 02:37:28.941651 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:37:28.947696 systemd-logind[1474]: New session 16 of user core.
Dec 13 02:37:28.952271 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 02:37:29.914604 sshd[5656]: pam_unix(sshd:session): session closed for user core
Dec 13 02:37:29.920162 systemd[1]: sshd@16-78.47.218.196:22-147.75.109.163:52176.service: Deactivated successfully.
Dec 13 02:37:29.924704 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:37:29.927779 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:37:29.929553 systemd-logind[1474]: Removed session 16.
Dec 13 02:37:30.093465 systemd[1]: Started sshd@17-78.47.218.196:22-147.75.109.163:52186.service - OpenSSH per-connection server daemon (147.75.109.163:52186).
Dec 13 02:37:31.079268 sshd[5669]: Accepted publickey for core from 147.75.109.163 port 52186 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA
Dec 13 02:37:31.081915 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:37:31.090513 systemd-logind[1474]: New session 17 of user core.
Dec 13 02:37:31.097879 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 02:37:31.808564 sshd[5669]: pam_unix(sshd:session): session closed for user core
Dec 13 02:37:31.811694 systemd[1]: sshd@17-78.47.218.196:22-147.75.109.163:52186.service: Deactivated successfully.
Dec 13 02:37:31.813805 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:37:31.815343 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:37:31.816989 systemd-logind[1474]: Removed session 17.
Dec 13 02:37:35.350538 systemd[1]: run-containerd-runc-k8s.io-15e38df445c5f6daa9ac7798544b8374a1d3b180b1f0449047a1e47c57a64439-runc.j0yQYe.mount: Deactivated successfully.
Dec 13 02:37:36.987437 systemd[1]: Started sshd@18-78.47.218.196:22-147.75.109.163:54998.service - OpenSSH per-connection server daemon (147.75.109.163:54998).
Dec 13 02:37:37.973251 sshd[5708]: Accepted publickey for core from 147.75.109.163 port 54998 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA
Dec 13 02:37:37.975005 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:37:37.980359 systemd-logind[1474]: New session 18 of user core.
Dec 13 02:37:37.987344 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 02:37:38.726310 sshd[5708]: pam_unix(sshd:session): session closed for user core
Dec 13 02:37:38.731540 systemd[1]: sshd@18-78.47.218.196:22-147.75.109.163:54998.service: Deactivated successfully.
Dec 13 02:37:38.734681 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:37:38.736156 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:37:38.737844 systemd-logind[1474]: Removed session 18.
Dec 13 02:37:43.897383 systemd[1]: Started sshd@19-78.47.218.196:22-147.75.109.163:55000.service - OpenSSH per-connection server daemon (147.75.109.163:55000).
Dec 13 02:37:44.912202 sshd[5748]: Accepted publickey for core from 147.75.109.163 port 55000 ssh2: RSA SHA256:suqwau7plymRSlPmEOiYjKgf4+Kq8Ad2vJ1ixQTyjcA
Dec 13 02:37:44.914516 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:37:44.920749 systemd-logind[1474]: New session 19 of user core.
Dec 13 02:37:44.925238 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 02:37:45.829524 sshd[5748]: pam_unix(sshd:session): session closed for user core
Dec 13 02:37:45.833490 systemd[1]: sshd@19-78.47.218.196:22-147.75.109.163:55000.service: Deactivated successfully.
Dec 13 02:37:45.835937 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:37:45.838055 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:37:45.839315 systemd-logind[1474]: Removed session 19.
Dec 13 02:38:02.753522 systemd[1]: cri-containerd-c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57.scope: Deactivated successfully.
Dec 13 02:38:02.755880 systemd[1]: cri-containerd-c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57.scope: Consumed 1.293s CPU time, 17.1M memory peak, 0B memory swap peak.
Dec 13 02:38:02.796120 kubelet[2719]: E1213 02:38:02.795868 2719 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47952->10.0.0.2:2379: read: connection timed out"
Dec 13 02:38:02.930677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57-rootfs.mount: Deactivated successfully.
Dec 13 02:38:02.972028 containerd[1492]: time="2024-12-13T02:38:02.958894432Z" level=info msg="shim disconnected" id=c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57 namespace=k8s.io
Dec 13 02:38:02.972028 containerd[1492]: time="2024-12-13T02:38:02.972013868Z" level=warning msg="cleaning up after shim disconnected" id=c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57 namespace=k8s.io
Dec 13 02:38:02.972028 containerd[1492]: time="2024-12-13T02:38:02.972026912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:38:03.026397 systemd[1]: cri-containerd-8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e.scope: Deactivated successfully.
Dec 13 02:38:03.026656 systemd[1]: cri-containerd-8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e.scope: Consumed 4.059s CPU time, 22.9M memory peak, 0B memory swap peak.
Dec 13 02:38:03.064084 containerd[1492]: time="2024-12-13T02:38:03.063650500Z" level=info msg="shim disconnected" id=8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e namespace=k8s.io
Dec 13 02:38:03.064084 containerd[1492]: time="2024-12-13T02:38:03.063697489Z" level=warning msg="cleaning up after shim disconnected" id=8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e namespace=k8s.io
Dec 13 02:38:03.064084 containerd[1492]: time="2024-12-13T02:38:03.063705063Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:38:03.065849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e-rootfs.mount: Deactivated successfully.
Dec 13 02:38:03.405368 systemd[1]: cri-containerd-eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8.scope: Deactivated successfully.
Dec 13 02:38:03.405689 systemd[1]: cri-containerd-eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8.scope: Consumed 3.450s CPU time.
Dec 13 02:38:03.432907 containerd[1492]: time="2024-12-13T02:38:03.431920863Z" level=info msg="shim disconnected" id=eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8 namespace=k8s.io
Dec 13 02:38:03.432907 containerd[1492]: time="2024-12-13T02:38:03.432005073Z" level=warning msg="cleaning up after shim disconnected" id=eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8 namespace=k8s.io
Dec 13 02:38:03.432907 containerd[1492]: time="2024-12-13T02:38:03.432022906Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:38:03.433809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8-rootfs.mount: Deactivated successfully.
Dec 13 02:38:03.795605 kubelet[2719]: I1213 02:38:03.795542 2719 scope.go:117] "RemoveContainer" containerID="8d7e32e1a911c0338419609cd74a6b9d03452919aa8bf34f0478e851f803a82e"
Dec 13 02:38:03.800415 kubelet[2719]: I1213 02:38:03.800136 2719 scope.go:117] "RemoveContainer" containerID="eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8"
Dec 13 02:38:03.827425 kubelet[2719]: I1213 02:38:03.827129 2719 scope.go:117] "RemoveContainer" containerID="c4fd19380ad85b5a3bd2459c8cb2571b18883c19a77b0465982564905512ae57"
Dec 13 02:38:03.892749 containerd[1492]: time="2024-12-13T02:38:03.892560278Z" level=info msg="CreateContainer within sandbox \"9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 02:38:03.894761 containerd[1492]: time="2024-12-13T02:38:03.894133598Z" level=info msg="CreateContainer within sandbox \"41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 02:38:03.895812 containerd[1492]: time="2024-12-13T02:38:03.895786188Z" level=info msg="CreateContainer within sandbox \"2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 02:38:04.041898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759616042.mount: Deactivated successfully.
Dec 13 02:38:04.042007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641321178.mount: Deactivated successfully.
Dec 13 02:38:04.065619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52000872.mount: Deactivated successfully.
Dec 13 02:38:04.082343 containerd[1492]: time="2024-12-13T02:38:04.082291533Z" level=info msg="CreateContainer within sandbox \"9e16a3e9bd496ad34acf1f3cfc3495ed3c3060e2cf271a1d9597776456ca2942\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d\""
Dec 13 02:38:04.084475 containerd[1492]: time="2024-12-13T02:38:04.084422124Z" level=info msg="CreateContainer within sandbox \"41219743dd596805a83b825c0ab8dff42b1f33c9f80564747302c9562a87538e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"baabe15cd60ffeb11d889bc568f18043d4445b973229f4447cff9589dab80dd9\""
Dec 13 02:38:04.085329 containerd[1492]: time="2024-12-13T02:38:04.085270646Z" level=info msg="CreateContainer within sandbox \"2f8f43dcb9291778b525bf2f2dab2b97376626655b0b1219a88e57fb9b671d13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ba10183a2469d3f7a60f77dcf16de1c4d961bfe8a77a722fcf43c2eb07b1aea4\""
Dec 13 02:38:04.085619 containerd[1492]: time="2024-12-13T02:38:04.085585360Z" level=info msg="StartContainer for \"be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d\""
Dec 13 02:38:04.086325 containerd[1492]: time="2024-12-13T02:38:04.086089843Z" level=info msg="StartContainer for \"baabe15cd60ffeb11d889bc568f18043d4445b973229f4447cff9589dab80dd9\""
Dec 13 02:38:04.086541 containerd[1492]: time="2024-12-13T02:38:04.086521157Z" level=info msg="StartContainer for \"ba10183a2469d3f7a60f77dcf16de1c4d961bfe8a77a722fcf43c2eb07b1aea4\""
Dec 13 02:38:04.125284 systemd[1]: Started cri-containerd-ba10183a2469d3f7a60f77dcf16de1c4d961bfe8a77a722fcf43c2eb07b1aea4.scope - libcontainer container ba10183a2469d3f7a60f77dcf16de1c4d961bfe8a77a722fcf43c2eb07b1aea4.
Dec 13 02:38:04.129372 systemd[1]: Started cri-containerd-be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d.scope - libcontainer container be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d.
Dec 13 02:38:04.164281 systemd[1]: Started cri-containerd-baabe15cd60ffeb11d889bc568f18043d4445b973229f4447cff9589dab80dd9.scope - libcontainer container baabe15cd60ffeb11d889bc568f18043d4445b973229f4447cff9589dab80dd9.
Dec 13 02:38:04.206503 containerd[1492]: time="2024-12-13T02:38:04.206342442Z" level=info msg="StartContainer for \"be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d\" returns successfully"
Dec 13 02:38:04.220652 containerd[1492]: time="2024-12-13T02:38:04.220589284Z" level=info msg="StartContainer for \"baabe15cd60ffeb11d889bc568f18043d4445b973229f4447cff9589dab80dd9\" returns successfully"
Dec 13 02:38:04.236851 containerd[1492]: time="2024-12-13T02:38:04.236379450Z" level=info msg="StartContainer for \"ba10183a2469d3f7a60f77dcf16de1c4d961bfe8a77a722fcf43c2eb07b1aea4\" returns successfully"
Dec 13 02:38:06.330215 systemd[1]: cri-containerd-be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d.scope: Deactivated successfully.
Dec 13 02:38:06.363602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d-rootfs.mount: Deactivated successfully.
Dec 13 02:38:06.374528 containerd[1492]: time="2024-12-13T02:38:06.374453396Z" level=info msg="shim disconnected" id=be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d namespace=k8s.io
Dec 13 02:38:06.374528 containerd[1492]: time="2024-12-13T02:38:06.374522246Z" level=warning msg="cleaning up after shim disconnected" id=be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d namespace=k8s.io
Dec 13 02:38:06.374528 containerd[1492]: time="2024-12-13T02:38:06.374531563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:38:06.891950 kubelet[2719]: I1213 02:38:06.891905 2719 scope.go:117] "RemoveContainer" containerID="eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8"
Dec 13 02:38:06.892637 kubelet[2719]: I1213 02:38:06.892197 2719 scope.go:117] "RemoveContainer" containerID="be16ffb508404ad6adfcc0064a10bd8ecf826138b3c8edb29da7dca35334091d"
Dec 13 02:38:06.918789 kubelet[2719]: E1213 02:38:06.894466 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7bc55997bb-q978s_tigera-operator(4489afab-11e1-4998-8c33-be300f82b9a1)\"" pod="tigera-operator/tigera-operator-7bc55997bb-q978s" podUID="4489afab-11e1-4998-8c33-be300f82b9a1"
Dec 13 02:38:06.927188 containerd[1492]: time="2024-12-13T02:38:06.927112859Z" level=info msg="RemoveContainer for \"eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8\""
Dec 13 02:38:06.931480 containerd[1492]: time="2024-12-13T02:38:06.931444925Z" level=info msg="RemoveContainer for \"eb948be07e4bd484b1ea9dd05b1f1d57a2edbcb4f386d7883a104ccec16ed4c8\" returns successfully"
Dec 13 02:38:07.038183 kubelet[2719]: E1213 02:38:07.028875 2719 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47780->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-b-5cf67d135c.18109c16624b7ddd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-b-5cf67d135c,UID:a24882d8c9d5e02885e19d9449b02fd2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-5cf67d135c,},FirstTimestamp:2024-12-13 02:37:56.489969117 +0000 UTC m=+222.550452543,LastTimestamp:2024-12-13 02:37:56.489969117 +0000 UTC m=+222.550452543,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-5cf67d135c,}"